diff --git "a/title_30K/test_title_long_2404.16461v2.json" "b/title_30K/test_title_long_2404.16461v2.json" new file mode 100644--- /dev/null +++ "b/title_30K/test_title_long_2404.16461v2.json" @@ -0,0 +1,102 @@ +{ + "url": "http://arxiv.org/abs/2404.16461v2", + "title": "Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums", + "abstract": "Mental health in children and adolescents has been steadily deteriorating\nover the past few years. The recent advent of Large Language Models (LLMs)\noffers much hope for cost and time efficient scaling of monitoring and\nintervention, yet despite specifically prevalent issues such as school bullying\nand eating disorders, previous studies on have not investigated performance in\nthis domain or for open information extraction where the set of answers is not\npredetermined. We create a new dataset of Reddit posts from adolescents aged\n12-19 annotated by expert psychiatrists for the following categories: TRAUMA,\nPRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and TREATMENT and compare expert\nlabels to annotations from two top performing LLMs (GPT3.5 and GPT4). In\naddition, we create two synthetic datasets to assess whether LLMs perform\nbetter when annotating data as they generate it. We find GPT4 to be on par with\nhuman inter-annotator agreement and performance on synthetic data to be\nsubstantially higher, however we find the model still occasionally errs on\nissues of negation and factuality and higher performance on synthetic data is\ndriven by greater complexity of real data rather than inherent advantage.", + "authors": "Isabelle Lorge, Dan W. Joyce, Andrey Kormilitzin", + "published": "2024-04-25", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums", + "main_content": "Introduction The recent development of powerful Large Language Models such as GPT3.5 [2] and GPT4 [3] able to perform tasks in a zero-shot manner (i.e., without having been specifically trained or finetuned to do so) by being simply prompted with natural language instructions shows much promise for healthcare applications and the domain of mental health. Indeed, these models display more impressive general natural language processing abilities than their predecessors and excel at tasks such as Question Answering and Named Entity Recognition [4, 5, 6, 7]. Models with the ability to process social media content for indicators of mental health issues have the potential to become invaluable cost-effective tools for applications such as public health monitoring [8] and online moderation or intervention systems [9]. In addition, synthetic data produced by LLMs can be a cost effective and privacy-preserving tool for training task specific models [10]. There have been several studies aimed at assessing the abilities of LLMs to perform a range of tasks related to mental health on datasets derived from social media. Yang et al. [11] conducted a comprehensive assessment of ChatGPT (gpt-3.5-turbo), InstructGPT3 and LlaMA7B and 13B [12] arXiv:2404.16461v2 [cs.CL] 26 Apr 2024 \fon 11 different datasets and 5 tasks (mental health condition binary/multiclass detection, cause/factor detection, emotion detection and causal emotion entailment, i.e. determining the cause of a described emotion). 
They find that while the LLMs perform well (0.46-0.86 F1 depending on task), with ChatGPT substantially outperforming both LLaMA 7B and 13B, they still underperform smaller models specifically fine-tuned for each task (e.g., RoBERTa). Xu et al. [13] find similar results for Alpaca [14], FLAN-T5 [15] and LLaMA2 [16], with only fine-tuned LLMs able to perform on par with smaller, task-specific models such as RoBERTa [17, 18]. However, we find that previous studies suffer from the following shortcomings: 1. They focus on adult mental health 2. They focus on tasks with a closed (or finite) set of answers, where the model is asked to perform each task in turn 3. They do not investigate how LLMs perform on synthetic data, i.e., text they are asked to simultaneously generate and label There is growing consensus that we are facing a child mental health crisis [1]. Before the COVID-19 pandemic there was already increasing incidence of mental health conditions in children and young people (CYP), such as depression, anxiety and eating disorders [19] as well as rising rates of self-harm and suicidal ideation [20] and cyberbullying strongly linked to adverse mental health outcomes [21]. The advent of the pandemic accelerated this already precarious situation and created additional challenges [22, 23] such as discontinuity of healthcare service provision in addition to interruption to young people\u2019s usual engagement in education and their social lives. This age range is particularly vulnerable to onset of mental health issues, with half of conditions appearing by early adolescence and 10-20% of children and young people experiencing at least one mental health condition [24]. Females, those with low socioeconomic backgrounds, trauma, abuse or having witnessed violence [25] are at heightened risk. On the other hand, social media now forms an important part of children and adolescents\u2019 daily lives, whose impact on mental health is debated, with potential benefits (stress reduction and support networks [26]) as well as potential risks (sleep disturbance, self esteem issues and cyberbullying [27]). Regardless of their detrimental or protective impact, social media may contribute valuable insights into CYP\u2019s mental health, with opportunities for monitoring and intervention, for example identifying those at risk of depression and mood disorders [28]. Given the mental health of CYP is a particularly pressing public health concern, we wished to investigate how LLMs perform on extracting mental health factors when faced with social media content generated by young people aged 12-19. Indeed, several issues related to mental health either exclusively apply to children and adolescents (such as school bullying and ongoing family abuse) or are particularly prevalent in this age range (such as eating disorders [29] and self-harm [30]), making both content type and factors of interest distinct from those found in adult social media posts. In addition, previous studies focused on tasks which had either a binary or closed sets of answers (e.g., choosing between several given conditions or between several given causal factors). In contrast, we wish to examine how LLMs perform on a task of open information extraction, where they are given categories of information and asked to extract any which are found in the text (e.g., asked to detect whether there is any mental health condition indicated in the text). 
Furthermore, in previous studies the models were tested with each task in turn (e.g., asked to detect depression in one dataset, then detect suicidality in another dataset), whereas we gather and annotate our own dataset in order to be able to ask the LLMs to extract all categories simultaneously (e.g., extract all conditions and symptoms in a given sentence). Finally, to our knowledge there has been no investigation of how LLMs perform when asked to annotate text as they generate it, i.e., how their performance on synthetic data compares with their performance on real data. There is growing interest in synthetic data for healthcare [31]. Given the potential for training models and running simulations and digital twin experiments with reduced issues of data scarcity and privacy, we believe that our work will contribute to a better understanding of the limitations and benefits of using synthetic data for real-world tasks. 2 Aims In summary, we aim to: 1. Generate and annotate, with high-quality expert annotations, a novel dataset of social media posts which allows extraction of a wide range of mental health factors simultaneously. 2. Investigate the performance of two top-performing LLMs (GPT3.5 and GPT4) on extracting mental health factors in adolescent social media posts to verify whether they can be on par with expert annotators. 3. Investigate how these LLMs perform on synthetic data, i.e., when asked to annotate text as they generate it, with the aim of assessing the utility of these data for training task-specific models. 3 Method 3.1 Reddit dataset We use Python\u2019s PRAW library to collect posts from the Reddit website (www.reddit.com) over the last year, including posts from specific forum subthemes (\u2018subreddits\u2019) dedicated to mental health topics: r/anxiety, r/depression, r/mentalhealth, r/bipolarreddit, r/bipolar, r/BPD, r/schizophrenia, r/PTSD, r/autism, r/traumatoolbox, r/socialanxiety, r/dbtselfhelp, r/offmychest and r/mmfb. The distribution of subreddits in the dataset can be found in Figure 1. As in previous works [32], we use heuristics to obtain posts from our target age range (e.g., posts containing expressions such as I am 16/just turned 16/etc.). We gather 1000 posts written by 950 unique users. To optimise the annotation process, we select the most relevant sentences to be annotated by embedding a set of mental health keywords with Python\u2019s sentence-transformers library [33], calculating the cosine similarity between keywords and post sentences, and keeping sentences above a threshold of 0.2 cosine similarity chosen after trial and error. We keep the post index for each sentence to provide context. The resulting dataset contains 6500 sentences. 3.2 Ethical considerations In conducting this research, we recognised the importance of respecting the autonomy and privacy of the Reddit users whose posts were included in our dataset. While Reddit data is publicly available and was obtained from open online forums, we acknowledge that users may not have anticipated their contributions being used for research purposes and will therefore make the data available only on request. The verbatim example sentences given in later sections have been modified to prevent full-text search strategies from inferring the post author\u2019s identity on Reddit. To protect the confidentiality of participants, we did not provide usernames or other identifying information to our annotators.
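Returning briefly to the sentence-selection step in Section 3.1, the sketch below shows one way the keyword-similarity filter could be implemented with sentence-transformers. The keyword list, the encoder checkpoint and the helper names are illustrative assumptions; only the library and the 0.2 cosine-similarity threshold are taken from the text.

```python
# Illustrative sketch of the Section 3.1 sentence filter (assumed keywords and encoder).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint; the paper does not name one
keywords = ["depression", "anxiety", "self-harm", "bullying", "eating disorder"]  # illustrative
keyword_emb = model.encode(keywords, convert_to_tensor=True)

def select_sentences(posts, threshold=0.2):
    """Keep sentences whose maximum cosine similarity to any keyword reaches the threshold,
    retaining the post index so annotators can recover the surrounding context."""
    selected = []
    for post_idx, sentences in enumerate(posts):
        sent_emb = model.encode(sentences, convert_to_tensor=True)
        sims = util.cos_sim(sent_emb, keyword_emb)  # shape: (n_sentences, n_keywords)
        for sentence, row in zip(sentences, sims):
            if float(row.max()) >= threshold:
                selected.append({"post": post_idx, "sentence": sentence})
    return selected
```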
Annotators were psychiatrists who were warned that the content of the posts was highly sensitive, with potentially triggering topics such as self-harm and child abuse. Reddit\u2019s data sharing and research policy allows academic researchers to access certain Reddit data for the purposes of research, subject to the platform\u2019s terms and conditions. It requires researchers to obtain approval through a data access request process before using the API. The policy outlines requirements around protecting user privacy, obtaining consent, and properly attributing the data source in any published work. Reddit reserves the right to deny data access requests or revoke access if the research is deemed to violate its policies. Researchers must also agree to Reddit\u2019s standard data use agreement when accessing the data. Our research aims to contribute to the understanding of mental health discourse from adolescents on social media platforms. We believe the potential benefits of this work, in terms of insights that could improve mental health support and resources, outweigh the minimal risks to participants. However, we remain aware of the ethical complexities involved in using public social media data, and encourage further discussion and guidance in this emerging area of study. 3.3 Synthetic dataset In addition to the real dataset, we generate two synthetic datasets of 500 sentences each by prompting GPT3.5 (gpt-3.5-turbo-0125) and GPT4 (gpt-4-0125-preview) to create and label Reddit-like posts of 5 sentences (temperature 0, all other parameters set to default). The instructions given were made as similar as possible to those given to annotators, and the model was explicitly told to label only factors which applied to the author of the post (e.g., not to label My friend has depression with CONDITION). The prompt used can be found in Appendix A. Figure 1: Distribution of subreddits 3.4 Annotation schema Given our goal of obtaining a wide range of relevant annotations for each sentence in order to test the LLMs\u2019 ability to generalise and perform open information extraction, and given the previously mentioned important factors related to trauma [34] and precarity [35], we create the following six categories in consultation with a clinical psychiatrist: \u2022 TRAUMA (sexual abuse, physical abuse, emotional abuse, school bullying, death, accident, etc.) \u2022 PRECARITY (socioeconomic, parental conflict, parental illness, etc.) \u2022 SYMPTOM (self-harm, low self-esteem, anhedonia, panic attack, flashback, psychosis, insomnia, etc.) \u2022 CONDITION (eating disorder, depression, bipolar, bpd, anxiety, ptsd, adhd, substance abuse/addiction, etc.) \u2022 SUICIDALITY (no subcategories) \u2022 TREATMENT (no subcategories) Nineteen expert annotators were contacted and asked to annotate 500 sentences each for a fixed compensation of \u00a3120 (\u2248\u00a360/hour). These were UK-trained psychiatrists, all of whom had obtained Membership of the Royal College of Psychiatrists through post-graduate experience and formal examinations. Thirteen annotators annotated the Reddit dataset, two annotators annotated the synthetic datasets and four annotators re-annotated samples from the Reddit and synthetic datasets for inter-annotator agreement computation (100 sentences from each dataset, 1500 sentences in total). Annotators were given the above subcategory examples but allowed to use new subcategories when appropriate (no closed set of answers).
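A minimal sketch of the generation step in Section 3.3 is given below, using the OpenAI chat completions API. The instruction text is a placeholder standing in for the authors' prompt (which is in Appendix A); only the model identifiers and the temperature of 0 come from the text.

```python
# Hedged sketch of Section 3.3: asking the model to generate and label a Reddit-like post.
# The INSTRUCTIONS string is a placeholder, not the authors' actual prompt (see their Appendix A).
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "Write a 5-sentence Reddit-like post by an adolescent, then label each sentence with any "
    "applicable categories (TRAUMA, PRECARITY, SYMPTOM, CONDITION, SUICIDALITY, TREATMENT), "
    "using P(subcategory) or N(subcategory) for positive and negative evidence. "
    "Only label factors that apply to the author of the post."
)

def generate_synthetic_post(model="gpt-4-0125-preview"):
    # Use "gpt-3.5-turbo-0125" for the GPT3.5 dataset; all other parameters left at defaults.
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": INSTRUCTIONS}],
    )
    return response.choices[0].message.content
```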
Annotators were also given the post indices to provide context (i.e., so as to be aware which sentences belonged to the same post). They were asked to annotate only school bullying as bullying, and other instances (e.g., sibling harassment) as emotional abuse. Anxiety was to be annotated as a symptom rather than a condition unless specifically described as a disorder. Experts performed the annotation by filling in the relevant columns in an Excel sheet with each sentence as a row. Importantly, given the known limitations of language models with negation [36], we wished to annotate both POSITIVE and NEGATIVE evidence in order to test LLMs\u2019 ability to handle both polarities (e.g., I am not feeling suicidal as negative suicidality or We don\u2019t have any money issues as negative socioeconomic precarity). For this purpose, annotators were asked to use the prefixes P and N (e.g., P(adhd) in the CONDITION column or N(socioeconomic) in the PRECARITY column). 3.5 Data processing and dataset statistics In order to compare expert annotations with LLM annotations despite the wide variety of subcategories and terms used by annotators, we create dictionaries mapping each term found in the dataset to a standard equivalent (e.g., p(emotional) to p(emotional abuse), p(physical violence) to p(physical abuse), p(gun violence) and p(school shooting) to p(violence), p(rape) to p(sexual abuse), p(financial burden) and p(poor) to p(socioeconomic precarity), p(divorce) to p(family conflict), p(self hatred) to p(low self esteem), etc.). Parental substance abuse is considered family illness and any underspecified subcategories are marked as \u2018unspecified\u2019 (e.g., p(trauma unspecified)). The distribution of subcategories for each category can be found in Figures 2, 3, 4 and 5 in Appendix B. The most frequent subcategory in TRAUMA is emotional abuse, which occurs twice as often as physical abuse and death in the dataset. The most frequent form of PRECARITY is family conflict, followed by family illness (including parental substance abuse) and socioeconomic precarity. The most frequent CONDITIONS are depressive disorders, followed by substance abuse/addiction and ADHD. The most frequent SYMPTOMS are anxiety, low self-esteem, self-harm and low mood. Interestingly, the distribution of subcategories differs quite substantially in the synthetic datasets (distributions for the GPT3.5- and GPT4-generated datasets can be found in Appendix B). Overall, the number of subcategories is reduced, indicating less diversity (although these are smaller datasets). The top trauma subcategories are sexual abuse for GPT3.5 and school bullying for GPT4, both of which were much less prevalent in real data. The second most prevalent condition for both GPT3.5 and GPT4 is eating disorders, whereas these ranked in 8th place in real data. Finally, unlike in real data, flashbacks and panic attacks are the 3rd and 4th most frequent symptoms for both the GPT3.5- and GPT4-generated data, whereas self-harm ranks much lower than in real data. Given that many of these subcategories were given as examples in the annotator guidelines and the LLM prompt, it is likely that the LLMs used them in a more homogeneous manner during generation than the distribution which would be found in real data. However, the distribution is not entirely homogeneous, which suggests the LLMs did leverage some of the biases learned from their training data. 4 Results Once both human and LLM annotations are standardised, we conduct analyses to assess performance.
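The standardisation step just mentioned can be sketched as follows. The regular expression and the cell format are assumptions about how the P(...)/N(...) annotations were stored, and the mapping dictionary contains only the partial set of examples listed in Section 3.5.

```python
# Illustrative sketch of the Section 3.5 normalisation of annotator and LLM labels.
import re

# Partial mapping built from the examples in the text; the full dictionaries cover every variant term.
STANDARD = {
    "emotional": "emotional abuse",
    "physical violence": "physical abuse",
    "gun violence": "violence",
    "school shooting": "violence",
    "rape": "sexual abuse",
    "financial burden": "socioeconomic precarity",
    "poor": "socioeconomic precarity",
    "divorce": "family conflict",
    "self hatred": "low self esteem",
}

ANNOTATION = re.compile(r"(?P<polarity>[pn])\((?P<label>[^)]+)\)", re.IGNORECASE)

def normalise(cell):
    """Parse a cell such as 'P(self hatred), N(socioeconomic)' into (polarity, standard label) pairs."""
    pairs = []
    for match in ANNOTATION.finditer(cell or ""):
        polarity = match.group("polarity").upper()
        label = match.group("label").strip().lower()
        pairs.append((polarity, STANDARD.get(label, label)))
    return pairs

print(normalise("P(self hatred), N(socioeconomic)"))
# [('P', 'low self esteem'), ('N', 'socioeconomic')]
```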
We provide precision, recall and F1 at the category level and accuracy at the subcategory level 5 \fcollapsed across subcategories (given their high number). We compute category performance in two ways: Positive or Negative, where a point is awarded if the category contains an annotation in both human and LLM annotations, regardless of polarity (i.e., the annotator considered there was relevant information concerning the category TRAUMA) and Positive Only metrics, where negative annotations are counted as no annotations. The difference between the two metrics can be seen clearly in Table 1 (GPT3.5 results), where precision increases but recall diminishes for Positive Only. The increase in precision is due to the fact that GPT3.5 outputs a substantial number of negative annotations in cases where human annotators did not consider it relevant to mention the category. The reduction in recall, on the other hand, results from the fact that LLMs often confuse positive and negative annotations and will occasionally output a negative annotation for a positive one. For real data (Tables 1 and 2), GPT3.5\u2019s performance at the category level is average, with better performance in the Positive Only metrics (0.57). GPT4 performs better, especially in Positive Only metrics (0.63) and subcategory accuracy (0.48 vs. 0.39). In general, recall is higher than precision, indicating LLMs may be overpredicting labels. The performance for synthetic data (Tables 3 and 4) is substantially better, with no gap between the Positive or Negative and Positive Only metrics, suggesting less irrelevant negative annotations. Here again, GPT4 outperforms GPT3.5, both at the category level (0.75 vs 0.70 and 0.73 vs 0.68) and more particularly at the subcategory level, where GPT4 reaches an impressive accuracy of 0.72 (vs 0.42). The gap between recall and precision is reduced for GPT4, whereas GPT3.5 displays higher precision than recall here. In order to assess the upper bound of human performance, we calculate inter-annotator agreement for both real and synthetic datasets using Cohen\u2019s Kappa. Values can be found in Table 5. Interestingly, while performance at the category level in real data is lower (GPT3.5) or similar (GPT4) compared to humans, GPT4 displays a substantially higher accuracy at the subcategory level (0.47 vs 0.35). For synthetic data, GPT3.5 still underperforms human agreement on all three metrics, while GPT4 is on par with humans for the Positive Only and subcategory metrics and only underperforms in the Positive and Negative metric. Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.38 0.78 0.51 0.56 0.65 0.60 0.39 PRECARITY 0.26 0.43 0.33 0.45 0.31 0.37 0.22 CONDITION 0.33 0.85 0.48 0.54 0.72 0.62 0.55 SYMPTOMS 0.39 0.62 0.48 0.46 0.58 0.52 0.31 SUICIDALITY 0.44 0.79 0.56 0.80 0.68 0.73 / TREATMENT 0.48 0.72 0.58 0.72 0.58 0.64 / ALL 0.37 0.70 0.49 0.55 0.60 0.57 0.39 Table 1: GPT3.5 (real data). 
Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level 5 Error analysis We examine some of the sentences annotated by the LLMs in order to perform error analysis and extract the following findings (as mentioned previously some words have been paraphrased to preclude full-text search allowing user identification): \u2022 Both GPT3.5 and GPT4 produce infelicitous negations, i.e., negative annotations which would seem irrelevant to humans, e.g., (I have amazing people around me =>negative parental death or The internet is my one only coping mechanism =>trauma unspecified) \u2022 Despite being specifically prompted to only annotate factors related to the writer/speaker, LLMs (including GPT4) do not always comply, e.g., She comes from what is, honestly, a horrific family situation =>emotional abuse) 6 \fCategory Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.44 0.89 0.59 0.57 0.84 0.68 0.57 PRECARITY 0.31 0.52 0.39 0.50 0.46 0.48 0.36 CONDITION 0.46 0.81 0.59 0.61 0.77 0.68 0.57 SYMPTOMS 0.35 0.78 0.49 0.45 0.73 0.56 0.41 SUICIDALITY 0.36 0.93 0.51 0.70 0.87 0.77 / TREATMENT 0.39 0.87 0.54 0.64 0.81 0.71 / ALL 0.39 0.80 0.52 0.55 0.75 0.63 0.48 Table 2: GPT4 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.90 0.49 0.64 0.90 0.49 0.64 0.38 PRECARITY 0.84 0.69 0.76 0.86 0.69 0.76 0.54 CONDITION 0.44 0.67 0.53 0.47 0.67 0.55 0.59 SYMPTOMS 0.85 0.59 0.70 0.84 0.59 0.69 0.36 SUICIDALITY 0.75 1.00 0.85 0.77 0.90 0.83 / TREATMENT 0.68 0.84 0.75 0.76 0.57 0.65 / ALL 0.74 0.65 0.70 0.77 0.61 0.68 0.42 Table 3: GPT3.5 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.84 0.95 0.89 0.86 0.92 0.89 0.82 PRECARITY 0.85 0.84 0.85 0.91 0.82 0.86 0.80 CONDITION 0.61 0.67 0.64 0.60 0.67 0.63 0.67 SYMPTOMS 0.49 0.78 0.60 0.53 0.80 0.64 0.69 SUICIDALITY 0.81 0.94 0.87 0.78 0.82 0.80 / TREATMENT 0.85 0.89 0.87 0.87 0.78 0.82 / ALL 0.69 0.83 0.75 0.69 0.79 0.73 0.72 Table 4: GPT4 (synthetic data). 
Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level \u2022 Even GPT4 makes errors regarding negation (e.g., I\u2019ve read about people with autism getting temper tantrums/meltdowns, however, that has never really been a problem for me=>negative autism or i had in my head that something inside was very wrong, but i never felt completely depressed all the time so i never took bipolar seriously =>negative bipolar disorder) \u2022 Despite being prompted to annotate suicidality in a separate category, LLMs often annotate it in the SYMPTOM rather than SUICIDALITY category \u2022 GPT3.5 especially often outputs irrelevant/spurious/incorrect labels (e.g., \u2018unemployed\u2019 as condition, \u2018ambition\u2019 as symptom, labelling physical conditions instead of mental conditions only, etc.) 7 \fPositive and Negative Positive Only Subcategory Annotator vs. Annotator (real data) 0.60 0.59 0.35 GPT3 vs. Annotator (real data) 0.39 0.52 0.37 GPT4 vs. Annotator (real data) 0.43 0.58 0.47 Annotator vs. Annotator (synthetic data) 0.77 0.71 0.68 GPT3 vs. Annotator (synthetic data) 0.64 0.63 0.40 GPT4 vs. Annotator (synthetic data) 0.70 0.69 0.71 Table 5: Inter-annotator agreement (Cohen\u2019s Kappa) \u2022 Even GPT4 makes errors regarding factuality (e.g., It was around my second year in junior high school when my father tried to take his life =>positive death) However, in many cases the assessment is not entirely fair, as the LLMs (particularly GPT4) often catch annotations which human annotators missed, or the difference in subcategories is subjective and open to debate (e.g., school bullying vs emotional abuse, emotional abuse vs abuse unspecified, etc.). Thus it is possible that LLMs, or most likely GPT4, in fact outperformed experts on this task. 6 Discussion The results obtained from our comparison of LLM annotations with human annotations on both real and synthetic data allow us to make a few conclusions and recommendations. Overall, both LLMs perform well. Inter-annotator agreement and performance indicate that GPT4 performs on par with human annotators. In fact, error analysis and manual examination of annotations suggest the LLMs potentially outperform human annotators in terms of recall (sensitivity), catching annotations which have been missed. However, while recall might be improved in LLMs versus human annotators, precision may suffer in unexpected ways, for example through errors in the use of negation and factuality, even in the case of GPT4. LLMs display a particular tendency to overpredict labels and produce negative annotations in infelicitous contexts, i.e., when humans would deem them irrelevant, creating an amount of noise. However, these negative annotations are not technically incorrect. While accuracy errors could be found in the LLM output, the experts\u2019 outputs were not entirely free of them, and previous work by [37] suggests LLMs may both be more complete AND more accurate than medical experts. There may still be a difference in the type of accuracy errors produced by LLMs, which will have to be investigated in future research. In terms of accuracy at the subcategory level, we were surprised to find GPT4 outperformed human agreement by a large margin in real data (0.47 vs 0.35). 
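The agreement figures in Table 5 can be reproduced in outline as below. This is a simplified sketch that assumes one binary presence/absence decision per category and sentence, with negative annotations counted as absent under the Positive Only convention; it glosses over how labels were aligned when a sentence carried several annotations.

```python
# Simplified sketch of the category-level agreement computation behind Table 5.
from sklearn.metrics import cohen_kappa_score

def to_flags(annotations, positive_only=False):
    """annotations: one list of parsed (polarity, label) pairs per sentence for a given category.
    Returns 1/0 presence flags; under Positive Only, N(...) annotations count as empty."""
    flags = []
    for labels in annotations:
        if positive_only:
            labels = [(p, l) for (p, l) in labels if p == "P"]
        flags.append(1 if labels else 0)
    return flags

def category_kappa(annotator_a, annotator_b, positive_only=False):
    return cohen_kappa_score(
        to_flags(annotator_a, positive_only),
        to_flags(annotator_b, positive_only),
    )
```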
We hypothesise this is due to the fact that human annotators display higher subjectivity in their style of annotation at the subcategory level (given the lack of predetermined subcategories) and diverge more between them. LLMs are likely to be more \u2018standard\u2019 and generic and thus potentially more in agreement with any given human annotator. More specifically, LLMs tend to be consistent from one annotation to the other with higher recall whereas human annotators showed less consistency. Therefore, if a sentence mentions physical, sexual and emotional abuse, annotators might only mention two out of three but when mentioning all three an LLM is more likely to be in agreement than another annotator, i.e., the LLM will catch more of the perfectly recalled annotations than the second annotator. The better performance demonstrated on synthetic data doesn\u2019t seem due to LLMs performing better on data they are generating, but rather to the synthetic data being less complex and diverse and thus easier to annotate for both LLMs and humans, as evidenced by GPT4 reaching similar inter-annotator agreement scores to humans (with agreement both in humans and LLM/human 10% higher for synthetic data). This better performance could still warrant using synthetic data for e.g., training machine learning models (given more reliable labels) but only in cases where the potential loss in diversity is compensated by the increase in label reliability. This will likely depend on the specific application. 8 \f7 Conclusion We presented the results of a study examining human and Large Language Models (GPT3.5 and GPT4) performance in extracting mental health factors from adolescent social media data. We performed analyses both on real and synthetic data and found GPT4 performance to be on par with human inter-annotator agreement for both datasets, with substantially better performance on the synthetic dataset. However, we find GPT4 still performing non-human errors in negation and factuality, and synthetic data to be much less diverse and differently distributed than real data. The potential for future applications in healthcare will have to be determined by weighing these factors against the substantial reductions in time and cost achieved through the use of LLMs. Acknowledgment I.L., D.W.J., and A.K. are partially supported by the National Institute for Health and Care Research (NIHR) AI Award grant (AI_AWARD02183) which explicitly examines the use of AI technology in mental health care provision. A.K. declare a research grant from GlaxoSmithKline (unrelated to this work). This research project is supported by the NIHR Oxford Health Biomedical Research Centre (grant NIHR203316). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health and Social Care.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.14544v1", + "title": "WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction", + "abstract": "Medical errors in clinical text pose significant risks to patient safety. The\nMEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors\nacross three subtasks: identifying the presence of an error, extracting the\nerroneous sentence, and generating a corrected sentence. In this paper, we\npresent our approach that achieved top performance in all three subtasks. 
For\nthe MS dataset, which contains subtle errors, we developed a retrieval-based\nsystem leveraging external medical question-answering datasets. For the UW\ndataset, reflecting more realistic clinical notes, we created a pipeline of\nmodules to detect, localize, and correct errors. Both approaches utilized the\nDSPy framework for optimizing prompts and few-shot examples in large language\nmodel (LLM) based programs. Our results demonstrate the effectiveness of LLM\nbased programs for medical error correction. However, our approach has\nlimitations in addressing the full diversity of potential errors in medical\ndocumentation. We discuss the implications of our work and highlight future\nresearch directions to advance the robustness and applicability of medical\nerror detection and correction systems.", + "authors": "Augustin Toma, Ronald Xie, Steven Palayew, Patrick R. Lawler, Bo Wang", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction", + "main_content": "Introduction Medical errors pose a significant threat to patient safety and can have severe consequences, including increased morbidity, mortality, and healthcare costs. Detecting and correcting these errors in clinical text is crucial for ensuring accurate medical documentation and facilitating effective communication among healthcare professionals. One of the fastest-growing use cases for artificial intelligence (AI) in healthcare is clinical note generation, often from transcriptions of physician-patient dialogues. However, assessing the quality and accuracy of these notes is challenging, and automated detection and correction of errors could have a significant impact on patient care. The reliability of large language models (LLMs) in critical applications, such as healthcare, is a major concern due to the potential for hallucinations (generating false or nonsensical information) and inconsistencies. Robust solutions to the question of error detection and correction are essential for addressing these concerns and enabling the safe and effective use of LLMs in medical contexts. The MEDIQA-CORR 2024 (Ben Abacha et al., 2024a) shared task focuses on identifying and correcting medical errors in clinical notes. Each text is either correct or contains a single error. The task involves three subtasks: (1) detecting the presence of an error, (2) extracting the erroneous sentence, and (3) generating a corrected sentence for flagged texts. In this paper, we present our approach, which achieved the top performance across all three subtasks in the MEDIQA-CORR 2024 competition. We develop a series of LLM-based programs using DSPy, a framework for optimizing prompts and few-shot examples. We provide a detailed description of our methodology and results, followed by a discussion of the implications of our work and future directions in the field of medical error detection and correction. 2 Related Work The use of large language models (LLMs) in medicine has attracted considerable attention in recent years. The release of LLMs such as GPT-4 has led to intensive research in the medical community (Nori et al., 2023), particularly in clinical note generation. 
The MEDIQA-Chat 2023 (Ben Abacha et al., 2023) competition showcased the performance of automated note generation solutions (Giorgi et al., 2023), and further work has demonstrated that LLMs can sometimes outperform humans on clinical text summarization tasks 1 arXiv:2404.14544v1 [cs.CL] 22 Apr 2024 \f(Van Veen et al., 2024). However, there has been limited research focusing on granular audits of these clinical notes with respect to accuracy and error correction. The MEDIQA-CORR 2024 shared task addresses this gap by providing a platform for researchers to develop and evaluate novel approaches to error detection and correction in clinical text, ultimately contributing to the development of more reliable AI systems in healthcare. 3 Task Description The MEDIQA-CORR 2024 shared task provides two distinct datasets: MS and UW (Ben Abacha et al., 2024b). The MS dataset consists of a Training Set containing 2,189 clinical texts and a Validation Set (#1) containing 574 clinical texts. The UW dataset, on the other hand, consists solely of a Validation Set (#2) containing 160 clinical texts. The test set for the shared task includes clinical texts from both the MS and UW collections. The evaluation metrics for the MEDIQA-CORR 2024 shared task vary across the three subtasks: \u2022 Subtask 1 (Error Flag Prediction): Evaluated using Accuracy. \u2022 Subtask 2 (Error Sentence Detection): Evaluated using Accuracy. \u2022 Subtask 3 (Sentence Correction): Evaluated using ROUGE (Lin, 2004), BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020), Aggregate-Score (mean of ROUGE-1F, BERTScore, BLEURT-20), and Composite Scores. The Composite Score for each text in Subtask 3 is calculated as follows: 1. Assign 1 point if both the system correction and the reference correction are \"NA\" 2. Assign 0 points if only one of the system correction or the reference correction is \"NA\" 3. Calculate the score based on metrics (ROUGE, BERTScore, BLEURT and the AggregateScore) within the range of [0, 1] if both the system correction and reference correction are non-\"NA\" sentences. 4 Approach 4.1 Overview Upon reviewing the MS and UW datasets, it became apparent that these two datasets presented distinct challenges. The errors in the MS dataset were often extremely subtle, to the point that many errors did not actually seem like errors, and in fact, clinicians on our team often couldn\u2019t identify the presence of an error within the text. However, when reviewing corrected text from the training set, it became clear that corrections were often \u2019optimal\u2019 completions. For example, consider the following error and its correction: Error sentence: After reviewing imaging, the causal pathogen was determined to be Haemophilus influenzae. (Ben Abacha et al., 2024b) Corrected sentence: After reviewing imaging, the causal pathogen was determined to be Streptococcus pneumoniae. (Ben Abacha et al., 2024b) These types of errors are subtle and seem akin to multiple-choice questions, where often multiple answers could independently be seen as correct completions, but only in the context of one another would you deem one answer wrong. On the other hand, the UW dataset appeared to reflect realistic clinical notes, and the errors were more apparent. For example, consider the following error and its correction: Error sentence: Hypokalemia based on laboratory findings patient has hypervalinemia. (Ben Abacha et al., 2024b) Corrected sentence: Hypokalemia based on laboratory findings patient has hypokalemia. 
(Ben Abacha et al., 2024b) In this case, the error involves a nonsensical term (hypervalinemia, a rare metabolic condition) when the context makes it clear that the patient has hypokalemia (low potassium levels). These are errors that a clinician can identify from the text alone. The distinct characteristics of the MS and UW datasets prompted us to develop a two-pronged approach to the MEDIQA-CORR 2024 shared task. For the MS dataset, we employed a retrieval-based system to identify similar questions from external medical question-answering datasets and leverage the knowledge contained in these datasets to detect 2 \fand correct errors. For the UW dataset, we created a series of modules to detect, localize, and correct errors in clinical text snippets. Both approaches were built on DSPy (Khattab et al., 2023), a novel framework for systematically optimizing prompts and few-shot examples in LLM based programs. 4.2 Approach for MS Dataset Our approach to the MS dataset involves a multistep process that leverages retrieval-based methods and the DSPy framework, as illustrated in Figures 1, 2, and 3. In all of our experiments, we utilized GPT-4-0125-preview as the underlying large language model, using default generation parameters (temperature of 1.0, top_p of 1) with the exception of a max tokens value of 4096. 4.2.1 Retrieval of Similar Questions First, we employ a retrieval-based approach to identify similar questions from the MedQA dataset (Jin et al., 2020). MedQA is a medical questionanswering dataset that contains multiple-choice questions, each with a set of answer options and a correct answer. By leveraging the knowledge contained in this external dataset, we aim to detect and correct errors in the MS dataset. We use TFIDF (Sparck Jones, 1972) to calculate the similarity between the given question in the MS dataset and the questions in MedQA, retrieving the most similar questions along with their answer options and correct answers for further analysis. 4.2.2 Identifying Answer Choices within Query Text To identify the implicit answer choice within the query text, we employ a two-step process using DSPy programs. First, we send both the query text and the identified similar multiple-choice question to a DSPy module that utilizes chain of thought (Wei et al., 2023) and the BootstrapFewShotWithRandomSearch teleprompter (Khattab et al., 2023). This teleprompter generates 20 few-shot examples by sampling from the training set and testing the module\u2019s performance on the validation set. The module aims to extract the answer choice that appears to be present in the query text. The output from this module is then passed to a second DSPy module, which also leverages the BootstrapFewShotWithRandomSearch teleprompter. This module creates multiple fewshot examples that compare the extracted answer against the true answer from the multiple-choice Figure 1: Predicting the presence of an error through a comparison to the retrieved question Figure 2: Identifying the error sentence question, as shown in Figure 1. We simultaneously bootstrap these two steps, optimizing the entire pipeline based on the accuracy of the overall error flag prediction. The result of this bootstrapping process is a compiled program with optimized multi-step chain of thought prompts based on the module\u2019s performance on error detection accuracy. This approach allows us to effectively identify the presence of errors in the query text by leveraging the knowledge from external medical question-answering datasets. 
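A minimal sketch of the retrieval step in Section 4.2.1 is shown below. The MedQA record schema and helper names are assumptions; the paper specifies only that TF-IDF similarity is used to find the most similar MedQA question.

```python
# Hedged sketch of Section 4.2.1: retrieving the most similar MedQA question with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_retriever(medqa_questions):
    """medqa_questions: list of dicts with 'question', 'options' and 'answer' keys (assumed schema)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(q["question"] for q in medqa_questions)

    def retrieve(clinical_text, k=1):
        query_vec = vectorizer.transform([clinical_text])
        scores = cosine_similarity(query_vec, matrix)[0]
        top = scores.argsort()[::-1][:k]
        return [medqa_questions[i] for i in top]

    return retrieve
```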
4.2.3 Localizing Errors within Query Text After detecting an error in the query text, we use a DSPy module to identify the specific line containing the error, as illustrated in Figure 2. This module takes the extracted answer choice and the preprocessed query text as inputs and then an LLM call is done to determine which line most closely matches the erroneous answer choice. Our experiments showed that GPT-4\u2019s performance was high enough that we did not need to compile the program or bootstrap few-shot prompts via a DSPy teleprompter. The module outputs the line number where the error is located, which is crucial for the subsequent error correction step, as it allows for targeted correction of the relevant text. 4.2.4 Error Correction with DSPy After identifying the error location within the query text, we use a final DSPy module to generate a corrected version of the text, as illustrated in Figure 3. This module takes three inputs: the error line, the extracted answer choice, and the correct answer 3 \fFigure 3: Generating the corrected sentence derived from the most similar retrieved multiplechoice question. The error correction module utilizes a chain of thought prompt along with 20 few-shot examples generated by the BootstrapFewShotWithRandomSearch teleprompter. This teleprompter samples examples from the training set and generates intermediate labels, such as rationales for the chain of thought, to provide additional context and guidance for the language model during the error correction process. The teleprompter optimizes the selection of few-shot prompts based on their performance on the validation set, using the ROUGE-L score as the metric. The selected few-shot examples, accompanied by the generated intermediate labels, demonstrate how to modify the error line based on the extracted answer choice and the correct answer, serving as a reference for the model to learn from and adapt to the specific error correction task. The module outputs the corrected version of the query text, with the error line revised based on the correct answer derived from the most similar multiple-choice question. This corrected text represents the final output of our retrieval-based approach for the MS dataset, addressing the subtle errors present in the clinical text. 4.3 Approach for UW Dataset Our approach for the UW dataset involves optimizing a series of DSPy modules to accomplish all three subtasks sequentially, as illustrated in Figure 4. In all of our experiments, we utilized GPT4-0125-preview as the underlying large language model, using default generation parameters (temperature of 1.0, top_p of 1) with the exception of a max tokens value of 4096. 4.3.1 Error Detection with DSPy For the UW dataset, we first employ a DSPy program to identify whether an error exists in the given clinical text snippet. This program is optimized using the Multi-prompt Instruction Proposal Optimizer (MIPRO) teleprompter, which generates Figure 4: Overview of the UW dataset pipeline, consisting of three main stages: error detection, error localization, and error correction. Each stage is implemented using a DSPy module optimized with the MIPRO teleprompter (Khattab et al., 2023) The pipeline also includes a quality control step based on the ROUGE-L score between the original erroneous text and the corrected version. and optimizes both the base prompts and few-shot examples. 
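Before continuing with the remaining UW modules, the sketch below illustrates how a DSPy correction module of the kind described in Section 4.2.4 might be declared and compiled. The signature fields, metric wiring and placeholder train/validation sets are assumptions, and DSPy class and argument names may differ between library versions.

```python
# Hedged sketch of a DSPy chain-of-thought module compiled with BootstrapFewShotWithRandomSearch.
# Field names, datasets and metric wiring are illustrative, not the authors' exact code.
import dspy
from dspy.teleprompt import BootstrapFewShotWithRandomSearch
from rouge_score import rouge_scorer

# dspy.OpenAI was the usual LM wrapper at the time; newer DSPy versions expose a different interface.
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-4-0125-preview", max_tokens=4096))

class CorrectSentence(dspy.Signature):
    """Rewrite the erroneous line using the correct answer from the retrieved question."""
    error_line = dspy.InputField()
    extracted_answer = dspy.InputField()
    correct_answer = dspy.InputField()
    corrected_line = dspy.OutputField()

corrector = dspy.ChainOfThought(CorrectSentence)

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l_metric(example, prediction, trace=None):
    # The paper optimises the correction module against ROUGE-L of the corrected sentence.
    return _scorer.score(example.corrected_line, prediction.corrected_line)["rougeL"].fmeasure

# train_examples / val_examples are placeholders for dspy.Example objects built from the task data.
teleprompter = BootstrapFewShotWithRandomSearch(metric=rouge_l_metric)
compiled_corrector = teleprompter.compile(corrector, trainset=train_examples, valset=val_examples)
```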
MIPRO optimizes the prompts and fewshot examples to maximize performance on the validation set, which we created by dividing the UW training collection (160 examples) into 80 training examples, 40 validation examples, and 40 test examples. The optimizer uses error flag accuracy as the metric to optimize and generates 20 examples. We also incorporate chain of thought reasoning into the DSPy module. 4.3.2 Error Localization If an error is detected in the clinical text snippet, we use another DSPy module to identify the specific line containing the error. This module is also optimized using MIPRO, which generates 20 bootstrap examples that include chain of thought rationales. Using a separate DSPy module for error localization allows us to precisely identify the source of the error and facilitate targeted corrections. The exact match of the error line is used as the metric for optimization, and this module is trained only on a subset of the training samples that contain errors. 4.3.3 Error Correction After identifying the error line, we use a third DSPy module to generate a corrected version of the erroneous text. This module is also optimized using MIPRO, following the same process as the previous modules. The error correction module takes the erroneous text as input and generates a corrected version based on the optimized prompts and weights. MIPRO uses the ROUGE-L score against the known correct sentence as the metric to optimize, and this module is trained only on a subset of the training samples that contain errors. 4 \fRank Team Error Flags Accuracy 1 WangLab 86.5% 2 MediFact 73.7% 3 knowlab_AIMed 69.4% 4 EM_Mixers 68.0% 5 IKIM 67.8% 6 IryoNLP 67.1% 7 Edinburgh Clinical NLP 66.9% 8 hyeonhwang 63.5% 9 PromptMind 62.2% 10 CLD-MEC 56.6% Table 1: Top 10 teams\u2019 performance on Task 1 (Error Flags Accuracy) 4.3.4 Quality Control with ROUGE-L To ensure the quality of the generated corrections, we calculate the ROUGE-L score between the original erroneous text and the corrected version. If the ROUGE-L score is below a threshold of 0.7, which we set as an arbitrary estimate for quality, we reject the correction and use the original erroneous text instead. This fallback mechanism is based on the observation that the ROUGE-L score of the erroneous text tends to be quite high since the error is only a small portion of the sentence. However, this fallback is more of a contest-metric-focused feature rather than something that significantly improves performance. 5 Results and Discussion 5.1 Overall Performance in the MEDIQA-CORR 2024 Shared Task Our approach achieved top performance in the MEDIQA-CORR 2024 shared task across all three subtasks. Tables 1, 2, and 3 present the performance of the top 10 teams in each subtask. 5.2 Performance on Subtask 1 Error Prediction In the official contest results for binary error prediction, our approach achieved an accuracy of 86.5%, ranking first among all participating teams. Table 1 shows the top 10 teams\u2019 performance on Task 1. 5.3 Performance on Subtask 2 Error Sentence Detection For error sentence detection, we obtained an accuracy of 83.6%, ranking first among all teams. Table 2 presents the top 10 teams\u2019 performance. 
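The quality-control fallback described in Section 4.3.4 can be sketched in a few lines. The rouge-score package is an assumption, since the paper does not say which ROUGE implementation was used.

```python
# Sketch of the Section 4.3.4 fallback: reject corrections that drift too far from the original text.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def quality_controlled(original_text, corrected_text, threshold=0.7):
    """Return the corrected text only if it stays close to the original; otherwise fall back."""
    score = scorer.score(original_text, corrected_text)["rougeL"].fmeasure
    return corrected_text if score >= threshold else original_text
```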
These results demonstrate the effectiveness of our few-shot learning and CoT-based approach in Rank Team Error Sentence Detection Accuracy 1 WangLab 83.6% 2 EM_Mixers 64.0% 3 knowlab_AIMed 61.9% 4 hyeonhwang 61.5% 5 Edinburgh Clinical NLP 61.1% 6 IryoNLP 61.0% 7 PromptMind 60.9% 8 MediFact 60.0% 9 IKIM 59.0% 10 HSE NLP 52.0% Table 2: Top 10 teams\u2019 performance on Task 2 (Error Sentence Detection Accuracy) detecting the presence of errors and localizing the specific sentences containing the errors. 5.4 Performance on Subtask 3 Sentence Correction For subtask C (Sentence Correction), the official contest results show that our approach achieved an Aggregate-Score of 0.789, which is the mean of ROUGE-1-F (0.776), BERTScore (0.809), and BLEURT (0.783). This was the highest score among the participating teams for the sentence correction task. Table 3 displays the top 10 teams\u2019 performance on Task 3. The official contest results highlight the competitive performance of our approach across all three subtasks of the MEDIQA-CORR 2024 shared task, demonstrating its effectiveness in detecting, localizing, and correcting medical errors in clinical text for both the MS and UW datasets. 5.5 Implications and Limitations of the Approach Our work contributes to the ongoing efforts in improving the accuracy and reliability of medical information in clinical text. The automated detection and correction of certain types of errors could ensure the quality and consistency of medical documentation, ultimately supporting patient safety and quality of care. The development and integration of more advanced systems could help alleviate the burden of manual error checking for the specific error types addressed, allowing healthcare providers to allocate more time and resources to delivering high-quality patient care. However, it is important to acknowledge the limitations of our approach in the context of the diverse nature of errors in medical documentation. While our system demonstrates strong performance on the MS and UW datasets, it focuses on a specific subset of errors and has not been shown to be effec5 \fRank Team AggregateScore R1F BERTSCORE BLEURT AggregateCR 1 WangLab 0.789 0.776 0.809 0.783 0.775 2 PromptMind 0.787 0.807 0.806 0.747 0.574 3 HSE NLP 0.781 0.779 0.806 0.756 0.512 4 hyeonhwang 0.734 0.729 0.767 0.705 0.571 5 Maven 0.733 0.703 0.744 0.752 0.524 6 Edinburgh Clinical NLP 0.711 0.678 0.744 0.711 0.563 7 knowlab_AIMed 0.658 0.643 0.677 0.654 0.573 8 EM_Mixers 0.587 0.571 0.595 0.596 0.548 9 IryoNLP 0.581 0.561 0.592 0.591 0.528 10 IKIM 0.559 0.523 0.564 0.588 0.550 Table 3: Top 10 teams\u2019 performance on Task 3 (Aggregate Score and its components) tive in addressing the wide diversity of errors that can occur in medical documentation. For instance, our approach does not currently address errors that are propagated through multiple notes when a physician references prior documents containing inaccuracies, such as incorrect medical history. Such errors can be particularly challenging to identify and correct, as they may require a comprehensive understanding of the patient\u2019s medical history, the context of the referenced documents, and the resolution of conflicting statements across documents. Our system has not been designed or evaluated for handling these types of errors. Moreover, our approach does not cover errors that originate from sources beyond the scope of our training data, such as poor transcriptions, entries in the wrong medical record, or errors in decision making. 
These types of errors may necessitate different strategies and techniques for detection and correction, and our current approach has not been developed to handle them. Additionally, the reliance on external datasets for the retrieval-based approach in the MS dataset limits the generalizability of our method to other medical domains or datasets. In fact, we believe that an approach used in the MS dataset might actually create further errors if used on real clinical text, as real clinical practice does not always reflect optimal or most likely completions. The effectiveness of our approach in detecting and correcting errors may vary depending on the specific characteristics and error types present in different medical contexts, and further evaluation would be necessary to assess its performance in diverse settings. 5.5.1 Impact of Different LLMs and Compilation After the competition ended, we performed additional experiments to compare the performance of our approach when using GPT-4 and GPT-3.5 as the underlying language models for the DSPy modules, as well as the impact of using compiled and uncompiled DSPy programs. Table 4 presents the results of the ablation study for error flag accuracy (Task 1), error sentence detection accuracy (Task 2), and various metrics for Task 3. The results show that using GPT-4 as the underlying LLM consistently yields better performance compared to GPT-3.5 across all tasks. For Task 1, the compiled GPT-4 model achieves the highest accuracy of 97.3% (0.1%), while for Task 2, it achieves an accuracy of 97.0% (0.1%). The compiled DSPy programs outperform their uncompiled counterparts for both GPT-3.5 and GPT-4. In Task 3, the compiled GPT-4 model consistently outperforms the other models across all metrics, with the highest AggregateC score of 0.878 (0.002). Moreover, the results demonstrate that using compiled DSPy programs consistently outperforms the uncompiled approach across all tasks and datasets, emphasizing the significance of systematic optimization techniques in enhancing the performance of our error detection and correction system. It is important to note that we did not isolate the impact of retrieval in our post-competition experiments, as it was a fundamental component of all the modules in our approach. Removing the retrieval component would require the development of a new solution. However, the strong performance of our uncompiled GPT-3.5 solution suggests that a significant portion of the performance could be attributed to the retrieval process itself. Future work should 6 \fError Flags Accuracy (Task 1) GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled Error Flags Accuracy 94.0% (0.4%) 81.2% (0.7%) 97.3% (0.1%) 88.9% (0.5%) Error Sentence Detection Accuracy (Task 2) GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled Error Sentence Detection Accuracy 92.8% (0.5%) 78.5% (0.8%) 97.0% (0.1%) 88.0% (0.8%) Task 3 Metrics Metric GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled aggregate_subset_check 0.853 (0.001) 0.809 (0.011) 0.824 (0.003) 0.827 (0.003) R1F_subset_check 0.827 (0.003) 0.778 (0.017) 0.789 (0.003) 0.792 (0.003) BERTSCORE_subset_check 0.874 (0.001) 0.827 (0.013) 0.856 (0.003) 0.857 (0.002) BLEURT_subset_check 0.859 (0.000) 0.824 (0.006) 0.827 (0.002) 0.832 (0.003) AggregateC 0.864 (0.004) 0.736 (0.010) 0.878 (0.002) 0.792 (0.005) Table 4: Ablation studies for error flag accuracy (Task 1), error sentence detection accuracy (Task 2), and Task 3 metrics. 
Numbers in parentheses represent standard deviations. explore the impact of different retrieval strategies on the performance of error detection and correction in clinical text. 5.6 Future Research Directions Although our approach has demonstrated competitive performance in the MEDIQA-CORR 2024 shared task, there are several potential avenues for future research that could further improve the effectiveness and applicability of our system. One area for future investigation is the finetuning of open access models specifically for clinical notes (Toma et al., 2023). While fine-tuning may lead to higher performance, we focused on working with DSPy in the current study and did not have the computational resources to maintain the necessary throughput and latency during initial experimentation. Future studies could examine the trade-offs between fine-tuning and using off-theshelf models with prompt optimization techniques, taking into account factors such as performance, efficiency, and scalability. Another direction for future research is the expansion of the benchmark dataset to include a broader range of errors, such as those spanning multiple documents or involving suboptimal clinical decisions. Broadening the scope of the dataset would enhance the robustness of error detection and correction systems and extend their applicability to more complex clinical scenarios. Integrating domain-specific knowledge, such as medical ontologies or expert-curated rules, into our approach could improve the system\u2019s ability to handle complex medical cases and make more informed decisions. This would be particularly relevant if the errors include suboptimal clinical decisions, as the system could provide more comprehensive support to healthcare professionals. Lastly, developing more comprehensive and robust methods for measuring and correcting errors is an area with significant potential. This could involve creating standardized evaluation metrics and datasets that better capture the intricacies of medical errors and developing more advanced error correction techniques that can handle a wider range of error types and contexts. 6 Conclusion The approach presented in this paper, which combines retrieval-based methods, few-shot learning, and systematic prompt optimization, demonstrates the potential of AI-assisted tools for detecting and correcting medical errors in clinical text. The strong performance achieved across all three subtasks of the MEDIQA-CORR 2024 shared task highlights the effectiveness of our methods in addressing the specific challenges posed by different datasets and error types. However, further research is necessary to extend the applicability of our approach to a wider range of medical contexts, incorporate domain-specific knowledge, and integrate with existing clinical systems. As the field of AI-assisted medical error detection and correction continues to evolve, collaboration between AI researchers and healthcare professionals will be crucial to develop solutions that effectively augment and support clinical decision-making processes, ultimately contributing to improved patient safety and healthcare quality. 7" + }, + { + "url": "http://arxiv.org/abs/2404.15848v2", + "title": "Detecting Conceptual Abstraction in LLMs", + "abstract": "We present a novel approach to detecting noun abstraction within a large\nlanguage model (LLM). 
Starting from a psychologically motivated set of noun\npairs in taxonomic relationships, we instantiate surface patterns indicating\nhypernymy and analyze the attention matrices produced by BERT. We compare the\nresults to two sets of counterfactuals and show that we can detect hypernymy in\nthe abstraction mechanism, which cannot solely be related to the distributional\nsimilarity of noun pairs. Our findings are a first step towards the\nexplainability of conceptual abstraction in LLMs.", + "authors": "Michaela Regneri, Alhassan Abdelhalim, S\u00f6ren Laue", + "published": "2024-04-24", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Detecting Conceptual Abstraction in LLMs", + "main_content": "Introduction Large Language Models (LLMs) have emerged as a powerful tool for a plethora of applications. State-of-the-art LLMs are based on the transformer architecture (Vaswani et al., 2017) that can directly generate text sequences (like chatbots), translate texts, or lend their outcomes to other downstream tasks. Due to their versatile functionality, LLMs are often distributed as pre-trained black-box models, which can then be fine-tuned to specific needs. While LLMs surpass the performance of simpler models, they are far less explainable due to the intransparent nature of their complex architecture. More explainability can be crucial in multiple applications, e.g., if models must adhere to some governance to prevent bias or build more data-efficient models. Especially in the context of trustworthy AI, one central open research question is how these models\u2019 excellent output is achieved and whether the mechanisms internally employed in LLMs reassemble those present in humans. We provide an analysis examining whether simple linguistic abstraction mechanisms are present in a large language model. For humans, relations like hypernymy (ravens are birds) are essential for linguistic understanding and generalization. LLMs also necessarily employ some kind of abstraction and generalization, but most likely not exactly in the same way as humans do. With our experiments, we add one more step toward representing hypernym relationships within large language models and, thus, their capacity to use humanlike abstraction mechanisms for generalization. Specifically, we test BERT (Devlin et al., 2019) for its attention patterns related to taxonomic hypernyms and compare this to unrelated noun pairs with either high or low semantic similarity. We draw our test data from a psychologically motivated data set of human associations, which lends itself to examining hypernym pairs with high cognitive saliency. Our results show that BERT represents this kind of abstraction within its attention module. Our main contribution consists of clear evidence that LLMs infer linguistic abstraction and that this inference goes beyond semantic similarity. For this, we provide both a method and a dataset to show the attention patterns of LLMs for semantic hypernymy and separate them from counterfactuals matched by semantic similarity and abstraction level. 2. Background and Related Work In the past years, the capabilities of LLMs have been enhanced tremendously. With transformer models (Vaswani et al., 2017) as an architectural basis, LLMs are trained on vast amounts of text and optimized to predict the next word in a sequence or the following sentence in a discourse. 
The resulting models have many applications and can model many arXiv:2404.15848v2 [cs.CL] 25 Apr 2024 \f#No Pattern 1 [hypo]s are [hyper]s. 2 That [hypo] is [a(n)] [hyper]. 3 I like [hypo]s and other [hyper]s. 4 The [hypo], which was the largest [hyper] among them, stood out. 5 I like [hypo]s, particularly because they are [hyper]s. Table 1: Hypernymy patterns, with [hypo] and [hyper] as slots for target and the feature concepts respectively. Plurals are indicated with s and [a(n)] is a determiner. linguistic phenomena known to be crucial for human language (Manning et al., 2020). When analyzing the emergence of linguistic phenomena, a particular focus lies on the self-attention mechanism of transformers. Self-attention is a step in the encoder part of these language models. It maps the input sequence to a weighted representation of itself and thus, intuitively speaking, reveals the sequence\u2019s focal points relevant to generating its follow-up. Selfattention consists of multiple so-called heads, which act in parallel on the sequence and are multiplied in several layers (see Vaswani et al. (2017) for details). The grid of attention heads with the individual scores they attach to the sequence is often treated as a proxy for the information encoded in the transformer. For some discussion on how far this is possible, see, e.g., Jain and Wallace (2019) and Wiegreffe and Pinter (2019). There are two types of approaches that recover linguistic structure in LLMs: One performs end-to-end evaluation by disabling or manipulating single attention heads and evaluating the performance change for different tasks (Kovaleva et al., 2019, e.g.). Others look directly into the attention patterns, which we also do. Baroni (2020) shows an overview of abstraction and compositionality in artificial neural networks. Many approaches use artificial languages and small models (Lake and Baroni, 2018; Hupkes et al., 2020, e.g.), others also test pretrained LLMs like BERT (Devlin et al., 2019). See Sajjad et al. (2023) for an overview. Several approaches have found evidence for linguistic knowledge within BERT. For instance, Chen et al. (2023) prompt the model with correct and counterfactual data and then infer BERT\u2019s abstraction capabilities. Only a few approaches show results for deep semantic knowledge directly within the attention mechanism. Dalvi et al. (2019b) try to discover latent concepts in BERT, which are essentially hypernyms and their derivable hyponyms. In our approach, we focus on hypernym-hyponym relations between nouns as one central linguistic abstraction phenomenon. For collecting hypernyms by prompting, Hanna and Mare\u02c7 cek (2021) present an experiment in which BERT outperforms other unsupervised algorithms in the collection of common hypernyms, which suggest that the model at least has the capacity to user hypernymy. This raises the questions on whether and how this is also internally represented in the trained model. To the best of our knowledge, there is no approach that characterizes attention patterns for generic hypernyms, especially no approach that distinguishes taxonomic relationships from pure semantic similarity. We take another step towards understanding conceptual abstraction in LLMs, evaluate the attention patterns related to true and counterfactual hypernyms, and show that the effects must be related to abstraction rather than similarity. 3. 
Data

We create a data set of noun pairs that are in a hypernymy relationship, and two sets of counterfactual pairs (which are not hypernyms). In order to construct example sentences, we manually create patterns that typically express hypernymy in their surface form and instantiate them with the noun pairs.

3.1. Positive Examples

We extract hyponym-hypernym pairs from McRae's feature norms (McRae et al., 2005). The feature norms contain pairs of concepts (originally stimuli) and features (human associations), annotated with semantic relationships. The pre-selection gives us more salient pairs of terms than a full-fledged taxonomy and should also be recognizable as strongly related by a large language model. For our data of valid noun pairs, we select all pairs of concepts and features labeled with a "superordinate" relationship in the feature norms. These concept-feature pairs have the target concept as a hyponym and the feature concept as a hypernym (e.g., raven and bird). The dataset can contain multiple hypernyms for a concept with different levels of abstraction, e.g., raven and bird as well as raven and animal. We include all such pairs in the dataset and balance them later with counterfactuals with a similar degree of abstraction.

3.2. Creating Counterfactuals

We create two counterfactual sets of noun pairs, which are not in a hypernymy relationship and thus will produce invalid sentences within our patterns. Using WordNet (Fellbaum, 1998), we generate the pairs from either sister terms of the feature concept from the positive examples (negative examples), which are matched by the level of abstraction of the feature concepts, or terms which share a hypernym with the target concepts (sister terms), which approximately match the level of similarity of the positive examples. With those two sets, we want to exclude spurious effects from just measuring semantic similarity or differences in concept abstraction level (and thus indirectly also frequency).

Negative Examples: For the first set of counterfactuals, we elicit noun pairs in which the feature concept is on the same level of generality as the hypernym in the first set. For instance, for the positive example raven – animal, we might choose raven – person. If there are multiple hypernyms for the same concept, we select an appropriate counterfactual for each individual hypernym. In detail, we proceed as follows:
1. We map each positive example to the WordNet synsets by extracting those synset pairs that contain the respective lemmas and stand in a (direct or inherited) hypernymy relationship in WordNet.
2. For each feature synset, we select a sister term (a synset sharing a parent node) which is not a hypernym of the target concept (in the example, we pick a sister term of the animal synset). To avoid effects from low-frequency words, we select the most frequent lemma from those sister synsets as a counterfactual (in the example, person).

Sister Terms: Our second counterfactual set controls for the level of semantic similarity within the positive examples. Hyponyms and their hypernyms are often distributionally very similar (Padó and Lapata, 2007), especially salient ones. To measure whether we really find differences related to violations of taxonomic rules or just effects due to high semantic similarity, we pick a sister term in WordNet for each of the original target concepts (e.g., raven – crow, which are both hyponyms of bird). As for the negative examples, we choose the most frequent sister term lemma.
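The WordNet-based selection of both counterfactual types can be sketched roughly as follows, assuming NLTK's WordNet interface; the helper names and the lemma-frequency heuristic are illustrative rather than the authors' exact implementation.

```python
# Rough sketch of counterfactual selection with NLTK's WordNet.
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def most_frequent_lemma(synset):
    # Approximate "most frequent lemma" via WordNet's lemma usage counts.
    return max(synset.lemmas(), key=lambda l: l.count()).name()

def negative_example(hyper_synset, hypo_synset):
    """Sister term of the hypernym that is NOT itself a hypernym of the hyponym."""
    hypo_ancestors = set(hypo_synset.closure(lambda s: s.hypernyms()))
    for parent in hyper_synset.hypernyms():
        for sister in parent.hyponyms():
            if sister != hyper_synset and sister not in hypo_ancestors:
                return most_frequent_lemma(sister)
    return None

def sister_term(hypo_synset):
    """Sister term of the target concept (shares a parent, is a different synset)."""
    for parent in hypo_synset.hypernyms():
        for sister in parent.hyponyms():
            if sister != hypo_synset:
                return most_frequent_lemma(sister)
    return None

raven, bird = wn.synset('raven.n.01'), wn.synset('bird.n.01')
print(negative_example(bird, raven))  # a concept on bird's level that is not a hypernym of raven
print(sister_term(raven))             # another hyponym of raven's parent, e.g. a fellow corvid
```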
Sister terms usually share many contexts, so we expect effects due to semantic similarity to be shared between the positive examples and the sister terms. 3.3. Creating Test Sentences As input for the LLM, we create test sentences that express a taxonomic relationship directly or indirectly. First, we manually create a set of five sentence patterns that exhibit hypernymy relationships, partially inspired by the patterns used by Hearst (1992) to extract hyponymhypernym pairs automatically from large text corpora. We vary the simplicity and saliency of the patterns to control for those effects. Table 1 shows the set of patterns. We instantiate our patterns with the noun pairs from all three sets, resulting in 3425 examples per set. The results are sentences like I like ravens and other animals (positive example), and I like ravens and other people (negative example) and I like ravens and other crows (sister term). We provide all data sets for reference. 4. Hypernymy within BERT We analyze whether or not hypernymy has a correlation with BERT\u2019s attention mechanism. After visualizing the attention for all datasets, we separate them via a linear classifier. For all \f(a) Forward positive (b) Forward negative (c) Forward sister Figure 1: Attention maps for hyponyms and hypernyms averaged across all patterns. experiments, we use BERT-large in the monolingual English version. 4.1. Attention Matrices and Clustering For each sentence, we extract the selfattention values from BERT. We restrict our analysis to the forward-looking attention between our target and feature tokens. Each sentence is represented by a 12x12 attention matrix (with 12 layers of 12 attention heads per layer). BERT\u2019s tokenizer breaks up some of our pluralized tokens, which makes the attention between a split token and a complete token incomparable. For the sake of simplicity, we discard all examples in which one of our tokens in focus is split up. Figure 1 gives a high-level overview of the results for each data set. For this visualization, we average the attention between target and feature concepts (resp. their sequence position) over all examples. Each cell in a heatmap corresponds to one attention head (x-axis) in one layer (y-axis). Dark colors indicate a high average activation of the attention head. Intuitively, we see that the three sets differ, so concept clashes in the negative set and the sister terms do expose different attention patterns than the salient hypernyms. Further, the overall attention seems lower in the positive setting than in the other two control settings. This suggests that higher attention here denotes some form of surprise for unexpected semantic constructions. To validate how well the three sets are separable, we employ logistic regression to recover the three test sets automatically. Each data point is the attention matrix of one example sentence, flattened into a 144-dimensional vector. For classification, we use the standard implementation of logistic regression from scikit-learn (Pedregosa et al., 2011) with all default parameter settings, setting the number of iterations to 1000 and the regularization parameter C to 1. We also perform a pairwise comparison of the different sets to understand how similar levels of abstraction (sister terms vs. negative) or similar levels of semantic similarity (positive vs. sister terms) of the test tokens influence the separability of the examples. 4.2. 
Results

We find a prediction accuracy of 0.75 on the test sets for our overall comparison and similar scores for the pairwise separation (Table 2). Our three sets are equal-sized, so a random baseline would return an accuracy of about 0.33. All scores indicate a substantial difference in attention patterns in the three sets.

| Sets | Acc. |
| All three | 0.75 |
| Pos. vs. Neg. | 0.88 |
| Pos. vs. Sisters | 0.84 |
| Neg. vs. Sisters | 0.85 |
Table 2: Accuracy for predicting the test sets.

The positive and the negative examples are well separated. Here, we see the semantic type clash for non-hypernyms in a hypernymy pattern and the low semantic similarity of the target and feature concept. The sister terms are equally well distinguishable from both positive and negative examples, but the set is less well recoverable than the positive examples. This means that the differences we see between positive and negative examples must be due to something other than semantic similarity, because the sister terms are distributionally very similar to their matched positive examples. Further, the attention seems to represent the subtle difference between the two sets of counterfactuals internally, which points to interesting research questions on the level of abstraction within transformer models.

4.3. Limitations

Our approach takes a first step towards understanding linguistic abstraction in transformer models. Our experiments have several technical limitations and limitations in the interpretability of the results. First, we restrict ourselves to taxonomic hypernymy of nouns, which is only a small part of abstraction. Within this theoretical limitation, our dataset is also limited to the hyponym-hypernym pairs from the feature norms we used as our source. The restriction to a dictionary-based definition of abstraction also affects our dataset. When assembling the counterfactuals fitted to the input data, we found that some of our counterfactuals are, strictly speaking, not hypernyms, but are colloquially still treated as such, e.g., spatula – tool, or barn – shelter. We leave those examples in the dataset, which might have influenced our results. Further limitations of our input data result from our handling of tokenized words. We filter all words that are split up by the BERT tokenizer. There are several approaches that recombine subword tokens into whole words. Unfortunately, no standard approach fits all applications, so in future work, the most suitable way to retrieve whole words from subwords should be tested and applied. Lastly, as for every probing approach, the interpretability of our results is debatable. We have shown that words in a hypernymy relationship give rise to attention matrices that are well distinguishable from counterfactuals, which are semantically wrong assertions. One can argue that the results on our dataset mainly show that we can distinguish salient sentences from absurd ones. We think that the least our results show is that there is something regularly attached to hypernymy that the transformer learned. Otherwise, we would not be able to separate the two sets of counterfactuals, which both consist of unlikely sentences and which are not distinguishable in their degree of oddity (I like ravens and other crows is about as wrong as I like ravens and other people). The only nuance between those counterfactuals is the degree of abstraction in the target words. Moreover, they both are well separable from the correct sentences when matched on the level of abstraction.
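For readers who want to reproduce the setup, the probing pipeline of Sections 4.1-4.2 can be sketched as follows. The sketch assumes the HuggingFace transformers and scikit-learn APIs; a base-sized BERT is used here only so that the attention grid matches the 12-layer-by-12-head (144-dimensional) layout described above, and the toy sentences and token positions are purely illustrative.

```python
# Minimal sketch: extract the attention from the hyponym position to the
# hypernym position in every layer/head, flatten it, and separate the three
# sets with logistic regression (as in Section 4.1).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

def attention_vector(sentence, hypo_pos, hyper_pos):
    """One attention value per layer/head, flattened to a 144-d vector."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        attentions = model(**inputs).attentions  # 12 tensors of shape (1, 12, seq, seq)
    per_layer = [a[0, :, hypo_pos, hyper_pos] for a in attentions]
    return torch.stack(per_layer).flatten().numpy()

# Toy instances of pattern 1; positions count [CLS] as 0 and assume each noun
# stays a single wordpiece (examples with split tokens are discarded in the paper).
examples = [("ravens are birds.", "positive"),
            ("ravens are people.", "negative"),
            ("ravens are crows.", "sister")]
X = [attention_vector(s, 1, 3) for s, _ in examples]
y = [label for _, label in examples]

# The paper fits the same classifier on ~3,425 instantiated sentences per set.
clf = LogisticRegression(max_iter=1000, C=1.0).fit(X, y)
print("training accuracy on the toy set:", clf.score(X, y))
```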
So, while we cannot (and did not) claim that we found the attention pattern that completely explains how taxonomic abstraction works in transformers, we can claim that there is more than semantic similarity and reasonable content that makes the differences we measure. 5. Summary and Future Work Our experiments show an initial indicator for linguistic, conceptual abstraction in the attention mechanism of LLMs. Based on sentence patterns that imply hyponymy relations of noun pairs, we showed that we can separate sentences with salient hyponym-hypernym pairs from counterfactuals in which target and feature concepts do not stand in a taxonomic abstraction relationship. Our setting shows that the level of abstraction in the counterfactual and the semantic similarity of target and feature concepts give rise to different patterns. Our approach can only give a limited first explanation of the presence of linguistic abstraction within transformers. Firstly, we restrict ourselves to noun pairs and hyponymy, while abstraction comprises many more types of words, relations, and complex mechanisms like frames or scenarios. Further, our experiments cannot explain how the differences in the attention mechanism arise and what they imply. To shed more light on these questions, further research is required, which should analyze both the mathematical theory of the abstraction mechanism and the statistical properties of the input word embeddings. This would make the mechanisms of conceptual abstraction within the transformer architecture more transparent. \fBibliographical" + }, + { + "url": "http://arxiv.org/abs/2404.15458v1", + "title": "Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT", + "abstract": "Large language models (LLMs) such as ChatGPT, Gemini, LlaMa, and Claude are\ntrained on massive quantities of text parsed from the internet and have shown a\nremarkable ability to respond to complex prompts in a manner often\nindistinguishable from humans. We present a LLM fine-tuned on up to 40,000 data\nthat can predict electromagnetic spectra over a range of frequencies given a\ntext prompt that only specifies the metasurface geometry. Results are compared\nto conventional machine learning approaches including feed-forward neural\nnetworks, random forest, linear regression, and K-nearest neighbor (KNN).\nRemarkably, the fine-tuned LLM (FT-LLM) achieves a lower error across all\ndataset sizes explored compared to all machine learning approaches including a\ndeep neural network. We also demonstrate the LLM's ability to solve inverse\nproblems by providing the geometry necessary to achieve a desired spectrum.\nLLMs possess some advantages over humans that may give them benefits for\nresearch, including the ability to process enormous amounts of data, find\nhidden patterns in data, and operate in higher-dimensional spaces. We propose\nthat fine-tuning LLMs on large datasets specific to a field allows them to\ngrasp the nuances of that domain, making them valuable tools for research and\nanalysis.", + "authors": "Darui Lu, Yang Deng, Jordan M. Malof, Willie J. Padilla", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "physics.optics", + "cats": [ + "physics.optics", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Can Large Language Models Learn the Physics of Metamaterials? 
An Empirical Study with ChatGPT", + "main_content": "Introduction Deep learning, and particularly deep neural networks (DNNs), have recently emerged as a valuable tool in the field of metamaterials research and have produced many novel results. [1, 2, 3] This data-driven approach to metamaterial design has profound capabilities for both forward [4] and inverse processes. [5, 6] Once trained, DNNs can accelerate the simulation of material systems by orders of magnitude compared to traditional numerical simulations [7, 8, 9, 10], enabling faster prototyping and exploration. Similarly, in inverse design, these models have successfully discovered state-of-the-art solutions that push the boundaries of what is achievable with metamaterials [11, 12, 13, 14]. Despite these advances, the implementation of DNNs still faces several challenges [1, 2, 3]. As a data-driven method, DNNs necessitate large datasets for training to achieve high accuracy and generalizability. [15] This so-called \"data bottleneck\" issue is compounded by the question of interpretability\u2014understanding, and explaining model predictions remains a significant hurdle. [16, 17] This has led to the pursuit of models that are capable of learning effectively from smaller datasets, leading to the development of techniques such as transfer learning, [18, 19, 20, 21] and physicsinformed/driven DNNs [22, 9, 23, 24, 25, 26, 27]. \u2217Citation: Authors. Title. Pages.... DOI:000000/11111. arXiv:2404.15458v1 [physics.optics] 23 Apr 2024 \fFoundational models are sophisticated, large-scale DNNs trained on extensive, diverse datasets. [28] This training enables them to generalize knowledge across various domains without the need for substantial task-specific data. [28, 29] However, the extravagant cost of acquiring large, diverse datasets for training a physics foundational model is prohibitive for academic researchers. [30] In this work we hypothesize that with LLMs, such as ChatGPT, we may be able to leverage their broad domain capabilities to reason about physical systems with far less training data than existing DNN-based models. Rather than building a foundational physics model from scratch, we explore the potential of repurposing existing foundational models to address problems in metamaterial design and show proming results. [29, 31] Large language models (LLMs) like generative pre-trained transformers (GPTs) have recently emerged as a foundational model primarily designed to handle natural language processing tasks. [29] By harnessing vast amounts of text data, these models learn to predict the next word in a sentence, thus acquiring an ability to construct coherent and contextually relevant text. Their design incorporates a deep understanding of language structure and encapsulates broad knowledge across diverse domains, enabling them to perform reasoning tasks. [32] For instance, LLMs can engage in conversations, translate languages, summarize texts, and even generate content that mimics human writing styles. [29, 31] The multifaceted capabilities of LLMs are rooted in their extensive training on diverse datasets. [33] First, they integrate a vast range of information from their training sets, making them repositories of wide-reaching knowledge about much the world. This allows them to recall and leverage facts, concepts, and relationships when generating responses. 
Secondly, LLMs can perform reasoning tasks based on the information they have been trained on, enabling them to handle queries that require logical deductions, problem-solving, or creative generation. This can range from solving mathematical problems to crafting detailed narratives or technical explanations. Lastly, the ability of LLMs to explain their reasoning process adds a layer of interpretability, often allowing users to understand the steps the model took to arrive at a conclusion or response, thus providing insights into the model\u2019s thought process. In the case of metamaterial design, LLMs such as GPT could revolutionize how physical systems are modeled and understood with minimal training data. [34] The diverse, extensive textual training of LLMs encompasses fundamental physics concepts, which are crucial for understanding the dynamics of metamaterials. By internalizing multiple laws of physics and their applications, LLMs can potentially extrapolate and make accurate predictions about new metamaterial scenarios. Such potential could lead to a more efficient process in predicting metamaterial properties and behaviors by leveraging learned physical laws rather than relying solely on extensive empirical data typical of traditional DNN approaches. LLMs could significantly speed up the design and simulation processes in metamaterial engineering. In light of this, our study aims to initiate the field by examining the feasibility of employing currently available LLMs to tackle a metamaterials challenge. Building on recent findings that suggest LLMs\u2019 proficiency in scientific regression and classification tasks [35], we explore their potential in predicting the electromagnetic spectra of metamaterials \u2013 a problem expressible in textual terms. This adaptability of LLMs has already been demonstrated in chemistry [36, 37], optics [38], and mechanics[39], signaling their versatility across various scientific fields. Our investigation looks explicitly at all-dielectric metasurfaces, comparing the capabilities of LLMs with established machine learning models. To our knowledge, there has yet to be a study of the use of LLMs on regression tasks with high-dimensional outputs, such as those often encountered in metamaterial problems. We find that a fine-tuned LLM (FT-LLM) achieves a lower error metric on all dataset sizes than several machine learning based approaches, including a deep neural network. We also probe the capability of the FT-LLM to generate physically meaningful insights on ADMs and compare results to an out-of-the-box LLM. Through this work, we highlight the potential of LLMs as powerful tools in scientific exploration, potentially broadening the horizons for future innovations and discoveries in metamaterials and beyond. 2 Related Work In this section, we will focus mainly on previous research work on deep learning in metamaterials and the application of LLM in the science field. Deep Learning for Metamaterial Simulator Machine learning approaches offer an effective solution to the above challenges and have gained significant attention over the last decade. Due to their strong generalizability, machine learning and deep learning methods can discover a general mapping from geometry to electromagnetic spectra. There are many options for representation of the metamaterial geometry as input to the DNN, which directly influences the deep learning architecture. 
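The simplest such mapping, discussed next, treats the geometry as a flat vector and regresses the spectrum with a fully connected network. A minimal sketch, assuming PyTorch, is given below; the 14-parameter input and 50-point output follow the metasurface dataset described later in this paper, while the layer widths and training details are illustrative.

```python
# Illustrative fully connected forward model: geometry vector in, spectrum out.
import torch
import torch.nn as nn

class ForwardMLP(nn.Module):
    """Maps 14 geometry parameters to a 50-point absorptivity spectrum."""
    def __init__(self, n_geometry=14, n_frequencies=50, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_geometry, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_frequencies), nn.Sigmoid(),  # absorptivity lies in [0, 1]
        )

    def forward(self, geometry):
        return self.net(geometry)

model = ForwardMLP()
geometry = torch.rand(8, 14)                        # a batch of 8 placeholder geometries
predicted = model(geometry)                         # shape (8, 50)
loss = nn.MSELoss()(predicted, torch.rand(8, 50))   # in practice, simulated spectra
```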
Most research efforts in metamaterial employ a one-dimensional (1D) vector to represent metamaterial structures; thus, they apply fully connected deep neural networks (DNNs). [40] This approach has been explored in various. [13, 41, 7] In other work, a transformer was used to perform forward prediction and achieved superior prediction accuracy compared to standard MLP models. [42] On the other hand, some papers conceptualized 2 \fmetamaterials as two-dimensional (2D) arrays, where binary values (0 and 1) represent the different materials, effectively mapping geometry. This mapping aligns well with the convolutional neural network (CNN) architecture, and good results have been shown. [43, 44, 45] For example the developed CNN models could make accurate predictions of the dispersion relation for a given structural configuration. Advancements in Large Language Models for Science With the appearance of GPT3.5[29], LLMs have seen rapid development. Indeed, various models, such as GPT-3.5 [29] and LLaMA [31], have shown great capabilities in language understanding and text generation. [33] These advancements have extended beyond traditional text-based applications, and LLMs are increasingly being used in various scientific endeavors, such as knowledge extraction [46] and automating experimental procedures [34, 47]. Recently, LLMs have been shown to posses good capabilities for regression and classification tasks. For example, the so-called language-interfaced fine-tuning for non-language machine learning tasks (LIFT) was shown to transform tabular data into text sentences for fine-tuning of GPT-J without altering the structure and loss of the model. [35] Subsequent studies have further explored the utility of LLMs in specific scientific domains. A survey on GPT-3 performance for chemistry tasks was undertaken which demonstrated that LLMs have better data efficiency than a base machine learning model for classification tasks. [36] One study used content learning to predict the compressive strength of concrete, [37] while another found that material structures can be encoded as linear text descriptions to fine-tune Llama2 [39] However, the application of LLMs for metamaterials research remains underexplored. In addition, previous works have focused on regression problems with relatively small output dimensions and limited data sets. Our study seeks to bridge this gap by examining the performance of LLMs in addressing the multidimensional regression challenges inherent in the simulation of electromagnetic metamaterials. Our objective is to elucidate the potential of LLMs to advance the field of metamaterials. 3 Methodology In this study, we employed LIFT [35] allowing for the adaptation of LLMs for metamaterial regression tasks without the need for architectural modifications or alterations to the loss function. The workflow of our method is shown in Figure 1. It contains three parts: data transformation, fine-tuning, and inference. 3. Fine Tuning Large Luanuage Model 2. Data Transform 1. Simulation Text Description Text Prediction Abosorptance Metamaterial Figure 1: The figure illustrates our comprehensive workflow, which begins with simulation to acquire geometry-spectrum data sets. Subsequently, these numerical geometric vectors are transformed into textual descriptions. we fine-tune GPT 3.5 by OPENAI\u2019s API. For the inference phase, we employ the fine-tuned model to predict the absorptance. 
Data Transformation Our first step is to transform the numerical data into a suitable format for input to the LLM. The all-dielectric metamaterial we explore is a unit-cell consisting of four elliptical resonators, and therefore has 14 geometrical parameters of height, periodicity, semi-major axis, semi-minor axis, and rotations angles \u2013 see Figure 1. 3 \fThese 14 parameters are denoted as: (h, p, rma1, rmi1, rma2, rmi2, rma3,rmi3,rma4, rmi4, \u03b81, \u03b82, \u03b83, \u03b84). We employ a gene vector to encode the geometry of the metamaterial. [39] This encoding uses a series of numbers to define the geometry, and follows the template: \"The All-dielectric metasurface suspend in free space is: Get the absorptivity\". For the output spectrum, we maintain three-decimal point precision, generating completions such as \u2019[0.001, \u00b7 \u00b7 \u00b7 , 0.678 ]\u2019. Model Fine-Tuning and Inference The generated sentence data is then used to fine-tune the large language model, here GPT-3.5. Since GPT-3.5 is a black-box model, we use OPENAI\u2019s API for fine-tuning. The fine-tuning phase adapts the model to the output structure of our dataset, focusing on generating 50-length vectors to represent absorptivity spectra. During inference, the fine-tuned model outputs a 50-length vector as the absorptivity curve, such as \u2019[0.001, \u00b7 \u00b7 \u00b7 , 0.864, ...]\u2019. We convert this text list back to a number for comparison with ground truth data. It\u2019s important to note that the output of a LLM may not match the expected output length because LLMs are generative models. For a given geometry g, the prediction spectra sp produced by LLM would be a vector of length La, while the length of the ground truth st is Lb. It may happen that a \u0338= b. To solve this, we implemented an alignment strategy that compares only the first min(a, b) elements of the predicted and true spectra vectors. Notably, we find that the most common invalid value of a is 51. Also, we may fail to convert LLM\u2019s textual output to numerical values. For example, an output such as \"0.0.13\" that cannot be directly converted to a valid numeric value is identified as an anomaly. In these cases, the output is flagged for regeneration to ensure that the model predictions are in the expected format. 4 Experimental Design and Resources Dataset Our study harnesses the dataset initially introduced and benchmarked in previous studies[13, 40], chosen for its relevance to the understanding of the capability of LLM in designing metasurfaces. This dataset is open-access, and the structured vector formats of its geometrical inputs and spectral outputs offer an advantageous footing for prompt engineering in LLMs. Such characteristics are helpful for physically meaningful data manipulation and model training, enabling a higher degree of interpretability of feature importance. Below, we delineate the metasurface geometry and spectra in the dataset. The all-dielectric metasurface is fashioned from silicon carbide and operates within the 150-500 THz frequency range. This complex structure is defined by a supercell comprising four elliptical resonators, each positioned at the center of a subdivided quadrant within a square supercell. This metasurface\u2019s geometric configuration is given as a 14-dimensional vector:[h, p, rma1, rmi1, rma2, rmi2, rma3, rmi3, rma4, rmi4, \u03b81, \u03b82, \u03b83, \u03b84]. 
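To make the serialization described under Data Transformation concrete, the sketch below shows how such a 14-dimensional geometry vector and its 50-point spectrum could be rendered as a prompt/completion pair. The quoted template is the one given above, but where the numbers are spliced into it, and the record field names, are our assumptions rather than the authors' exact format.

```python
# Rough sketch of the LIFT-style text serialization of one training example.
def to_prompt(geometry):
    values = " ".join(f"{v:.3f}" for v in geometry)
    return ("The All-dielectric metasurface suspend in free space is: "
            f"{values}. Get the absorptivity")

def to_completion(spectrum):
    # Completions keep three-decimal precision, e.g. "[0.001, ..., 0.678]"
    return "[" + ", ".join(f"{a:.3f}" for a in spectrum) + "]"

def to_example(geometry, spectrum):
    return {"prompt": to_prompt(geometry), "completion": to_completion(spectrum)}

# One placeholder record (numbers are not real simulation data).
record = to_example([0.5] * 14, [0.001] * 50)
```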
The periodicity p parameter specifies the side length of the supercell, setting the foundational operating range for the resonator array, with the height h parameter establishing the uniform height of all resonators. The dimensions of each elliptical resonator along the x and y axes are proportionally scaled to the supercell's periodicity through the radius x-axis ratio rma,i and radius y-axis ratio rmi,i for each resonator, respectively. Additionally, the orientation of each elliptical resonator is adjusted through a rotational angle θi, measured in radians, about the x-axis. All parameters are integral for defining the electromagnetic response of the metasurface, with units in µm. Given the challenges associated with processing high-dimensional data by LLMs, the spectrum output was manipulated by first downsampling from 2000 frequency points to 100 frequency points. Then, we only select 50 points from the 150-350 THz frequency range, aiming to refine the LLM's predictive accuracy and computational efficiency within the expansive operational bandwidth.

Data Handling

To ensure experimental integrity, we divided the dataset into three distinct, independently sampled sets: training, validation, and test. The training set was randomly selected for each dataset size's training session, whereas the validation and test sets, comprising 1,000 samples each, were specified prior to the experiments. Optimal model selection was based on validation set performance, with the test set used for final evaluation. This approach guarantees the reliability and reproducibility of our comparison between baseline and large language models for metamaterials research.

Scoring Metrics

This section outlines the metrics employed for evaluating the performance of both baseline and LLM models. It is pertinent to acknowledge that the selected baseline models all utilize Mean Squared Error (MSE) as a training criterion, which could inherently bias the evaluation in their favor as opposed to LLMs, which are predominantly trained using a variant of cross-entropy loss. Despite this discrepancy, MSE was selected for the regressive metamaterials problem as it is best suited for training regression models due to its robustness in quantifying the average squared difference between predicted and actual values. In predicting electromagnetic spectra for metamaterials, the MSE is modified to be:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{f}\sum_{j=1}^{f}\left(S_{i,j} - \hat{S}_{i,j}\right)^{2}, \quad (1)$$

where n is the number of samples, f is the number of frequency points in the spectrum, $S_{i,j}$ represents the absorptivity value for the i-th sample at the j-th frequency point, and $\hat{S}_{i,j}$ denotes the model-predicted value. For a thorough evaluation of all regression models in the study, we benchmark their performance using two additional regression metrics: Mean Absolute Error (MAE) and Mean Absolute Relative Error (MARE). Specifically for the metasurface problem, these metrics have been adapted as follows:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{f}\sum_{j=1}^{f}\left|S_{i,j} - \hat{S}_{i,j}\right|, \quad (2)$$

$$\mathrm{MARE} = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{f}\sum_{j=1}^{f}\frac{\left|S_{i,j} - \hat{S}_{i,j}\right|}{\left|S_{i,j}\right|}, \quad (3)$$

Moreover, the relevance of cross-entropy loss, particularly for LLMs, cannot be overlooked. This loss function is defined as:

$$\text{Cross-Entropy Loss} = -\sum_{c=1}^{M} y_{o,c}\log(p_{o,c}), \quad (4)$$

where M represents the number of classes, $y_{o,c}$ is a binary indicator of whether class label c is the correct classification for observation o, and $p_{o,c}$ is the predicted probability of observation o being in class c.
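Because Equations (1)-(3) all reduce to simple averages over an (n, f) array of spectra, they can be computed directly; a short NumPy sketch with synthetic placeholder data follows.

```python
# Equations (1)-(3) as straightforward NumPy reductions over an
# (n_samples, n_frequencies) array of absorptivity spectra.
import numpy as np

def mse(S, S_hat):
    return np.mean((S - S_hat) ** 2)

def mae(S, S_hat):
    return np.mean(np.abs(S - S_hat))

def mare(S, S_hat):
    return np.mean(np.abs(S - S_hat) / np.abs(S))

S_true = np.random.rand(1000, 50)                                 # placeholder ground truth
S_pred = np.clip(S_true + 0.01 * np.random.randn(1000, 50), 0, 1)  # placeholder predictions
print(mse(S_true, S_pred), mae(S_true, S_pred), mare(S_true, S_pred))
```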
To ensure a fair and uniform assessment across all models, performance metrics are exclusively reported in terms of MSE, MARE and MAE on the test set. This standardized evaluation criterion facilitates a straightforward comparison across different models applied to the all-dielectric metasurface problem.

Baseline Models

To benchmark the performance of LLMs, we incorporate four other machine learning algorithms: Feed-forward Neural Networks (NN), Random Forests (RF), K Nearest Neighbors (KNN), and Linear Regression (LR). The selection of the NN was motivated by its published efficacy in various practical applications, particularly within the domain of metamaterials research, where its ability to model complex, nonlinear relationships is highly valued [1, 2, 3]. Conversely, RF, KNN, and LR represent classical machine learning algorithms designed to address regression challenges. Given that the design of metamaterials predominantly poses regression problems, these algorithms were deemed especially suitable for benchmarking against the metasurface design challenges alongside LLMs. Our choice of algorithms aims to encompass both the cutting-edge capabilities of neural networks and the robust, well-established methodologies of classical machine learning, ensuring a comprehensive and fair evaluation of LLM applicability and performance in the all-dielectric metasurface design. Prior to model training, our dataset undergoes preprocessing steps to ensure optimal model performance. The geometry inputs are normalized to a range of [-1,1], promoting convergence in the NN training process. The absorptivity spectra, already within the range of [0,1], require no additional preprocessing. We allow 30 iterations of Bayesian optimization to fine-tune each model's hyperparameters. The selection of optimal hyperparameters is determined based on performance metrics evaluated on the validation set, with the final model performance reported against the test set.

Experimental Resources and Accessibility

For the training and inference of NNs, we utilize NVIDIA GTX 3090 GPUs, employing the PyTorch library to facilitate our computations. The execution of RF, KNN, and LR models is conducted on an Intel® Xeon® Gold 6226 CPU, leveraging their computational efficiency for these specific algorithms.

5 Experimental Results and Discussion

In this section, our primary focus is investigating the performance of large language models and comparing them with other baseline models (Sect. 5.1). We also examine the impact of temperature (Sect. 5.2) and the influence of the prompt template (Sect. 5.3). Additionally, we explore model performance on inverse design (Sect. 5.4) and interpretability (Sect. 5.5).

5.1 Data Size Influence

We trained our Large Language Model (LLM) and other baseline models using varying sizes of training data and evaluated their performance on a consistent test set comprising 1,000 samples. Figure 2 illustrates the performance of the models in different data size scenarios. We set the temperature to 0.5 when testing the GPT model. A more detailed discussion of the impact of temperature on performance can be found in Section 5.2.

Figure 2: Evaluations of model performance with varying dataset sizes. (a) MARE and (b) MSE trends for baseline models and the fine-tuned GPT model as dataset size increases. All the results presented are averages from three models.
However, the GPT model results at the 10,000, 20,000, and 40,000 data points are exceptions, as computational constraints limited these to single trials. Error bars indicate the standard deviation of the three models.

Regarding MARE, Fig. 2(a) shows the superior performance of the GPT model in all data scenarios. Particularly in relatively low-data (1,000-10,000) environments, GPT outperforms the neural network models by a large margin. With more than 10,000 training samples, the performance of the NN is close to that of GPT. However, the analysis based on MSE offers a different perspective. Based on Fig. 2(b), we observed that fine-tuned GPT 3.5 performs the poorest among all models in the low-data scenario (≤1,000 samples). However, as the size of the data set increases, the fine-tuned GPT shows a remarkable improvement in performance, outpacing some baseline models. In particular, for 40,000 training samples, the fine-tuned GPT 3.5 emerges as one of the top-performing models, with its MSE only slightly inferior to that of the neural network. This improvement underscores the LLM's capacity to identify and apply complex patterns from extensive datasets. Even with 40,000 training samples, the performance of GPT 3.5 has not yet reached convergence. Furthermore, the slope of the GPT 3.5 curve is much steeper than that of the other models. We anticipate that with more data, the performance of our GPT model will narrow the gap with neural networks and may even surpass them. The observed differences in model performance between the MSE and MARE evaluations can be attributed to these metrics' sensitivity to different error types. MSE penalizes large deviations in absolute values between predicted and actual values, whereas MARE provides a normalized error measure that reflects the accuracy of the model relative to the magnitude of the actual values; it focuses on the proportionality of the error. Given that our baseline models are optimized towards minimizing MSE, they inherently focus more on reducing large absolute errors, whereas the GPT model is fine-tuned on a cross-entropy loss, which maintains consistent performance across all ranges of values.

5.2 Temperature Influence

Temperature plays a crucial role in LLM output by influencing the level of randomness in the generated results. In a low-temperature setting, the model tends to produce the most probable results based on its training data. Conversely, a high-temperature setting increases the randomness and diversity of the model's output, making it more likely to generate low-probability values. Our tests explored the effects of varying temperature settings, specifically [0, 0.25, 0.5, 0.75, 1], on model performance, as shown in Fig. 3(a). Our findings indicate that the impact of temperature is related to the size of the data set. Specifically, for small data sets (≤10,000), a temperature setting of 0 leads to poor predictions, indicative of an overreliance on training data that may not capture the input-output relationship effectively with limited data. In contrast, as the dataset expands, a low temperature near zero is conducive to minimizing the Mean Squared Error (MSE), as the LLM's predictions are increasingly informed by the enriched data. However, the extremely high temperature setting (1) decreases performance across all data sizes, as over-randomized output is undesirable for regression tasks. Interestingly, as the volume of training data increases, the optimal temperature setting tends to be lower.
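For reference, inference against the fine-tuned model at a chosen temperature, combined with the output-parsing safeguards described in the Methodology section, can be sketched roughly as follows. The model identifier is a placeholder and the chat-completions call assumes the current OpenAI Python client, which may differ from the authors' exact setup.

```python
# Rough sketch of temperature-controlled inference plus output parsing.
from openai import OpenAI

client = OpenAI()
MODEL = "ft:gpt-3.5-turbo:your-org::placeholder"  # hypothetical fine-tuned model id

def predict_spectrum(prompt, temperature=0.5, max_retries=3):
    for _ in range(max_retries):
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        text = resp.choices[0].message.content
        try:
            values = [float(x) for x in text.strip(" []").split(",")]
            # The caller compares only the first min(len(values), 50) points
            # against the ground truth, as described above.
            return values
        except ValueError:
            continue  # e.g. "0.0.13": flag as an anomaly and regenerate
    return None
```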
The result is shown in Fig. 3(b). Although moderate randomness can improve the quality of the output in data-constrained scenarios, it decreases the accuracy of the output in data-rich environments.

Figure 3: Evaluations of model performance with varying temperature settings. (a) MSE trends for the GPT model fine-tuned on different numbers of training samples as the temperature increases. (b) The temperature that gives the best MSE for different data sizes. The results are averaged from three trials.

5.3 Prompt Influence

Large language models, trained on extensive datasets, are adept at processing information about a wide array of topics, including electromagnetism. We tested whether providing a more detailed geometric description prompt improves model performance by leveraging this knowledge. To this end, we evaluated the impact of prompt designs on the performance of fine-tuned GPT. In addition to the vector representation template, we propose a detailed description prompt template, aimed at enhancing the model's understanding of our task. This approach hypothesizes that a detailed contextual introduction might help the model grasp the physical implications of the parameters. Examples of these templates are provided in Table 1. We fine-tuned GPT 3.5 using both prompt designs across datasets of varying sizes. The MSE comparison is shown in Figure 4. Our findings reveal an intriguing observation: both prompt designs yield nearly indistinguishable performance across all sizes of training samples. This consistency suggests that whether the input data is presented in a concise vector form or in a detailed description does not significantly influence the predictive accuracy of the model. The minimal impact of the feature names on model performance is consistent with the insights from previous work [35].

5.4 Inverse Design

An important goal of deep learning in metamaterials is to use a model to generate a geometry with a desired spectrum, which is called inverse design. Compared to regression, it is a difficult task, as it is a one-to-many problem [40]. In this approach, we follow the concept of the neural-adjoint method [13], in which the inverse design is achieved by directly querying the well-trained forward LLM. Table 2 illustrates the example prompt and outputs. Unfortunately, this strategy did not produce successful results. On the one hand, models trained on datasets exceeding 10,000 samples were prone to producing invalid output. In the majority of instances, despite the imposition of strict constraints on the output format, our model disregards the instruction and merely provides a list of numbers. In some cases, our model provides responses that are not only erroneous but also nonsensical. This may be attributed to the lack of diversity in our training dataset: as all the completions are lists of numbers, this dominance in the training data appears to have skewed the model's learning process. On the other hand, models fine-tuned with small datasets could generate geometries, yet the majority of these are erroneous. Since our model is not trained to design metamaterials, it lacks this ability.

Figure 4: Evaluations of model performance with two different prompt templates. These templates are provided in Table 1. All the results presented are averages from three models.
However, the results at the 10,000 data points are exceptions, as computational constraints limited these to single trials. 5.5 Interpretability The previous section discussed the fine-tuned GPT as a standard regression model. In this section, we evaluate the fine-tuned GPT\u2019s comprehension of electromagnetic metamaterials by asking questions about the impact of altering the geometry features. Table 3 presents the questions and answers. Contrary to expectations, the fine-tuned GPT did not demonstrate a significantly better understanding than the original model. This observation suggests that training on geometry-spectra pairs might not give the model a holistic grasp of the physical concepts of our task. Additionally, we observed a stylistic difference: The fine-tuned GPT tends to produce one-paragraph answers, in contrast to the original GPT\u2019s piecewise interpretation. The difference in output style could be due to the format of our training data. The completions are a single paragraph, which can significantly affect the output style of the model. 6 Conclusion In this paper, we fine-tuned LLMs to predict electromagnetic spectra and address design challenges in metamaterials. The experimental results indicate that LLMs, particularly when fine-tuned with extensive training data, can achieve competitive performance in high-dimensional regression tasks. In terms of MARE, the LLM exhibited superior performance. This indicates that LLMs can effectively capture complex relationships between geometry and spectra. However, we also face some limitations. We tested LLMs\u2019 performance in inverse design. The unexpected results show that the performance of LLMs in one-to-many tasks remains inadequate. Additionally, the reliance on extensive training data sets and the high cost of fine-tuning in LLMs limits the practical application, especially in limited data or budget scenarios. Future work could focus on improving the geometry representation and fine-tuning adjustments. Inspired by the success of SMILES[48] in cheminformatics, which is a technique that uses ASCII to present the structure of the module, developing encoding schemes to represent the geometry of metamaterials in linear text could improve LLM performance. Additionally, the cross-entropy used for LLM fine-tuning may not align well with our regression tasks. Therefore, exploring fine-tuning algorithms specifically designed for regression, or reformulating regression challenges as classification tasks, could enhance the performance of LLMs. Acknowledgments D.L. and Y.D. acknowledge the assistance of ChatGPT, developed by OpenAI, for editing and language improvement. 8" + }, + { + "url": "http://arxiv.org/abs/2404.13765v1", + "title": "SciDaSynth: Interactive Structured Knowledge Extraction and Synthesis from Scientific Literature with Large Language Model", + "abstract": "Extraction and synthesis of structured knowledge from extensive scientific\nliterature are crucial for advancing and disseminating scientific progress.\nAlthough many existing systems facilitate literature review and digest, they\nstruggle to process multimodal, varied, and inconsistent information within and\nacross the literature into structured data. We introduce SciDaSynth, a novel\ninteractive system powered by large language models (LLMs) that enables\nresearchers to efficiently build structured knowledge bases from scientific\nliterature at scale. 
The system automatically creates data tables to organize\nand summarize users' interested knowledge in literature via question-answering.\nFurthermore, it provides multi-level and multi-faceted exploration of the\ngenerated data tables, facilitating iterative validation, correction, and\nrefinement. Our within-subjects study with researchers demonstrates the\neffectiveness and efficiency of SciDaSynth in constructing quality scientific\nknowledge bases. We further discuss the design implications for human-AI\ninteraction tools for data extraction and structuring.", + "authors": "Xingbo Wang, Samantha L. Huey, Rui Sheng, Saurabh Mehta, Fei Wang", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "SciDaSynth: Interactive Structured Knowledge Extraction and Synthesis from Scientific Literature with Large Language Model", + "main_content": "INTRODUCTION Nowadays, the rapid advancement of scientific research has witnessed an unprecedented growth of research literature from different disciplines. As a result, the extraction and synthesis of structured knowledge and findings from this vast amount of information has become increasingly paramount. This process is crucial for researchers to keep pace with the latest research developments, identify emerging trends, and drive innovative ideas and hypotheses based on prior research. Moreover, the structured organization of the extracted knowledge as databases facilitates a systematic and cohesive understanding of the research landscape, promotes seamless \u2217Both are corresponding authors. integration of new discoveries, and fosters collaboration and communication within the scientific community. Building structured knowledge bases from the massive research literature is a cognitively demanding and time-consuming process with a sequence of inter-connected tasks. Prior systems have been built to aid researchers with the preliminary stages of structured knowledge extraction, including literature discovery and collection [8, 17, 37, 41], comprehension, and digestion [19, 25, 26]. However, a critical gap remains in the ability of these systems to process the unstructured knowledge within the literature as structured data in a standardized format. To address this gap, several challenges arise. 1) Multimodal information in literature. Scientific papers often contain diverse modalities of information, such as text, tables, and figures. The multimodality adds complexity to identifying the relevant information within each modality scattered throughout a paper and integrating it into a structured and coherent format. 2) Variation and inconsistencies across literature. The style, structure, and presentation of the papers can significantly vary from one to another. The variation and inconsistencies make it difficult to standardize the information to be included in a structured knowledge base. For example, the same concepts may be described using different terminologies or measurement units. 3) Flexibility and domain adaptation. Users may have varying research questions for a collection of papers, and these papers can span across different domains. Therefore, the system must be flexible enough to adapt to the diverse data needs of different users and domains. 
To tackle these challenges, we leverage large language models (LLMs) as the backbone to interpret complex scientific literature, extract relevant information from diverse modalities, and produce structured output via QA-based interactions with users. Our choice is motivated by the following considerations: 1) recent LLMs (e.g., GPT-4 [35] and Llama 2 [46]) have exhibited promising understanding, reasoning and generation capabilities to solve various natural language and multimodal tasks across different domains [42, 52]; 2) LLM-based systems (e.g., ChatGPT and Gemini) with QA-based interactions have become increasingly popular for people to flexibly specify their analytical needs and conduct information-seeking and sensemaking. Despite their potential, LLMs can struggle with complex reasoning tasks in specialized domains (e.g., inconsistent arXiv:2404.13765v1 [cs.HC] 21 Apr 2024 \fWang et al. information disambiguation and numerical reasoning). Additionally, LLMs may suffer from hallucination problems, leading to the generation of misinformation. All these drawbacks are particularly problematic for the precision requirements of scientific knowledge bases and necessitate human expertise to oversee and rectify the structured knowledge generated by LLMs. We aim to synergize LLMs\u2019 strengths with researchers\u2019 expertise to efficiently build accurate and reliable knowledge bases from the scientific literature. To this end, we present SciDaSynth, a novel interactive system that helps researchers build structured knowledge bases from scientific literature in a systematic and scalable manner powered by LLMs. It enables users to distill their interested knowledge into structured data tables via QA-based interactions and provides a multi-faceted visual summary of different dimensions and subsets of the data tables to guide iterative validation, correction, and refinement. Particularly, the system supports dimension-guided flexible grouping of data records to assist a global understanding of data variations and inconsistencies across the literature. To further help users identify and fix data errors, SciDaSynth establishes, highlights, and maintains connections between generated data and relevant information in the literature and supports data editing by batch. We conducted a within-subjects study with 12 researchers to qualitatively and quantitatively study the effectiveness and usability of SciDaSynth for data extraction from literature. The results show that by using SciDaSynth, participants could produce quality data comparable to the human baseline in a much shorter time. Moreover, participants perceive various benefits brought by SciDaSynth, such as streamlining their data extraction workflow and facilitating data locating, validation, and refinement. However, several limitations of automated LLMs for data extraction were revealed. Participants remained cautious of LLM-generated results and expressed their preferences about using and trusting these generated results. Besides, participants also identify promising use cases of SciDaSynth, such as paper screening, data monitoring, results interpretation, and sharing. Finally, we discuss design implications for future human-AI interaction systems for information extraction tasks. 
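As a rough illustration of the QA-based extraction idea, an LLM can be asked to answer a user question over a paper passage and return a fixed-schema record; the prompt wording, schema, and model below are hypothetical and are not SciDaSynth's actual implementation.

```python
# Illustrative sketch: turn a question over a passage into one structured row.
import json
from openai import OpenAI

client = OpenAI()

def extract_record(question, passage, columns):
    prompt = (
        f"Question: {question}\n"
        f"Passage: {passage}\n"
        f"Answer with a JSON object containing exactly these keys: {columns}. "
        "Use null for information not stated in the passage."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # A real system would validate the output and route failures to human review.
    return json.loads(resp.choices[0].message.content)

row = extract_record(
    "What intervention and outcome were studied?",
    "We tested daily vitamin D supplementation and measured serum 25(OH)D.",
    ["intervention", "outcome"],
)
```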
In summary, our major contributions are: \u2022 SciDaSynth, an interactive system that offers a computational pipeline for data extraction and structuring from massive scientific literature and facilitates human-data interactions in data exploration, extraction, validation, and refinement via interactions and visualizations. \u2022 The quantitative and qualitative results of our user study that reveal the effectiveness, user experience, and promising use cases of SciDaSynth for data extraction from the scientific literature. \u2022 Implications for future system designs of human-AI interaction for data extraction and structuring. 2 RELATED WORK 2.1 Structured Information Extraction from Scientific Literature The exponential growth of scientific papers has generated largescale data resources for LLMs\u2019 building and applications for information extraction tasks, such as named entity recognition and relation extraction in scientific domains. Some representative LLMs (e.g., SciBert and Galactica) [4, 31, 43] adopt supervised fine-tuning on scientific publications and achieve good generalizability to perform information extraction from various domains. Building upon these models, Zhao et al. [54] proposed text-based and table-based BERTbased models for the optical-materials domain. Dagdelen et al. [11] leveraged LLMs to extract entities and their relations from material science text and organized them in JSON format. By integrating reinforcement learning with human feedback into the LLM training process, current LLMs (e.g., GPT-4 [35] and Llama [46]) enable zero-shot prompting to follow human instructions and demonstrate superior performance in complex analytical and reasoning tasks in diverse domains without fine-tuning. In our work, we prompt GPT-4 to identify relevant information in papers according to users\u2019 requests. Besides, data in scientific literature is another particular focus for extraction. The data is usually stored in tables and figures in PDFs of research papers, and many toolkits are available to parse PDF documents, such as PaperMage [33], GROBID [18], Adobe Extract API [22], GERMINE [45], GeoDeepShovel [53], PDFFigures 2.0 [10]. Here, we leverage the off-the-shelf tool to parse PDF text, tables, and figures. Besides the tools in the research community, Elicit [13] is a commercial software that facilitates systematic review. It enables users to describe what data to be extracted and create a data column to organize the results. However, it does not provide an overview of the extracted knowledge to help users handle variation and inconsistencies across different research literature. Here, we also formulate the knowledge as structured data tables. Moreover, we provide multi-faceted visual and text summaries of the data tables to help users understand the research landscape, inspect nuances between different papers, and verify and refine the data tables interactively. 2.2 Tools that Augment Literature Reading and Comprehension Research literature reading and comprehension is cognitively demanding, and many systems have been developed to facilitate this process [3, 9, 14, 15, 20, 25, 26, 28, 30, 36]. One line of research studies aims to improve the comprehension and readability of individual research papers. To reduce barriers to domain knowledge, ScholarPhi [20] provided in-situ support for definitions of technical terms and symbols within scientific papers. 
PaperPlain [3] helped healthcare consumers to understand medical research papers by AI-generated questions and answers and in-situ text summaries of every section. Some work [8, 38] designed interactive visualizations to summarize and group different papers and guide the exploration. Some systems support fast skimming of paper content. For example, Spotlight [30] extracted visual salient objects in a paper and overlayed it on the top of the viewer when scrolling. Scim [15] enabled faceted highlighting of salient paper content. To support scholarly synthesis, Threddy [25] and Synergi [26] facilitated a personalized organization of research papers in threads. Synergi further synthesized research threads with hierarchical LLM-generated summaries to support sensemaking. To address personalized information needs for a paper, Qlarify [14] provided paper summaries by recursively expanding the abstract. Kim et al. [28] linked text \fSciDaSynth with corresponding tables to promote a unified understanding of arguments in papers. Although these systems help users digest research papers and distill knowledge with guidance, we take a step further by converting unstructured knowledge and research findings scattered within research papers into a structured data table with a standardized format. 2.3 Document Question Answering Systems for Information Seeking People often express their information needs and interests in the documents using natural language questions [44]. Many researchers have been working on building question-answering models and benchmarks [12, 24, 29, 40, 47] for scientific documents. With recent breakthroughs in LLMs, some LLM-fused chatbots, such as ChatDoc [6], ChatPDF [7], ChatGPT [1], Claude [2], are becoming increasingly popular for people to turn to when they have analytic needs for very long documents. However, LLMs can produce unreliable answers, resulting in hallucinations [23, 27]. It is important to attribute the generated results with the source (or context) of the knowledge [49]. Then, automated algorithms or human raters can examine whether the reference source really supports the generated answers using different criteria [5, 16, 34, 39, 51]. In our work, we utilize retrieval-augmented generation techniques [32] to improve the reliability of LLM output by grounding it on the relevant supporting evidence in the source documents. Then, we use quantitative metrics, such as context relevance, to evaluate the answer quality and prioritize users\u2019 attention on checking and fixing low-quality answers. 3 FORMATIVE STUDY We aim to develop an interactive system that helps researchers distill, synthesize, and organize structured knowledge from scientific literature in a systematic, efficient, and scalable way1. To better understand the current practice and challenges they face during the process, we conducted a formative interview study. 3.1 Participants and Procedures 3.1.1 Participants. 12 researchers (P1-P12, five females, seven males, age: three from 18-24, nine from 25-34) were recruited from different disciplines, including medical and health sciences, computer science, social science, natural sciences, and mathematics. Nine obtained PhD degrees and three were PhD researchers. All of them had extracted data (e.g., interventions and outcomes) from literature, ten of which had further statistically analyzed data or narratively synthesized data. 
Seven rated themselves as very experienced, where they had led or been involved with the extraction and synthesis of both quantitative and qualitative data across multiple types of reviews. Five had expert levels of understanding and usage of computer technology for research purposes, and seven rated themselves at moderate levels. 3.1.2 Procedures. Before the interviews, we asked the participants to finish a pre-task survey, where we collected their demographics, 1Here, we focus on the stage where researchers have the final pool of studies ready for extraction, excluding literature search and screening. experience with literature data extraction and synthesis, and understanding and usage of computer technology. Then, we conducted 50-minute interviews with individuals over Zoom. During the interviews, we inquired the participants about (1) their general workflow for data extraction from literature, desired organization format of data; (2) what tools were used for data extraction and synthesis, and what are their limitations; (3) expectations and concerns about computer and AI support. 3.2 Findings and Discussions 3.2.1 Workflow and tools. After getting the final pool of included papers, participants first created a data extraction form (e.g., fields) to capture relevant information related to their research questions, such as data, methods, interventions, and outcomes. Then, they went through individual papers, starting with a high-level review of the title and abstract. Afterward, participants manually distilled and synthesized the relevant information required on the form. The data synthesis process often involved iterative refinement, where participants might go back and forth between different papers to update the extraction form or refine previous extraction results. Common tools used by participants included Excel (9/12) and Covidence or Revman (4/12) for organizing forms and results of data extraction. Some participants also used additional tools like Typora, Notion, Python or MATLAB for more specialized tasks or to enhance data organization. The final output of this process was structured data tables in CSV and XLSX format that provided a comprehensive representation of the knowledge extracted from the literature. 3.2.2 Challenges. Time-consuming to manually retrieve and summarize relevant data within the literature. Participants found it timeconsuming to extract different types of data, including both qualitative and quantitative data, located at different parts of the papers, such as text snippets, figures, and tables. P1 commented, \u201cSometimes, numbers and their units are separated out at different places.\u201d The time cost further increases when facing \u201cmany papers\u201d (7/12) to be viewed, \u201clong papers\u201d (5/12), or papers targeting very specialized domains they are not so familiar with (5/12). P3 added, \u201cWhen information is not explicit, such as limitations, I need to do reasoning myself.\u201d And P5 mentioned, \u201cIt takes much time for me to understand, summarize, and categorize qualitative results and findings.\u201d Tedious and repetitive manual data entry from literature to data tables. After locating the facts and relevant information, participants need to manually input them into the data tables, which is quite low-efficiency and tedious. P3 pointed out, \u201c... 
the data is in a table (of a paper), I need to memorize the numbers, then switch to Excel and manually log it, which is not efficient and can cause errors.\u201d P4 echoed, \u201cSwitching between literature and tools to log data is tedious, especially when dealing with a large number of papers, which is exhausting.\u201d Significant workload to resolve data inconsistencies and variations across the literature. Almost all participants mentioned the great challenges of handling inconsistencies and variations in data, such as terminologies, abbreviations, measurement units, and experiment conditions, across multiple papers. It was hard for them to \fWang et al. standardize the language expressions and quantitative measurements. P7 stated, \u201cPapers may not use the same terms, but they essentially describe the same things. And it takes me lots of time to figure out the groupings of papers.\u201d P9 said, \u201cI always struggle with choosing what words to categorize papers or how to consolidate the extracted information.\u201d Inconvenient to maintain connections between extracted data and the origins in literature. The process of data extraction and synthesis often required iterative review and refinement, such as resolving uncertainties and addressing missing information by revisiting original sources. However, when dealing with numerous papers and various types of information, the links between the data and their sources can easily be lost. Participants commonly relied on memory to navigate specific parts of papers containing the data, which is inefficient, unscalable, and error-prone. P8 admitted, \u201cI can easily forget where I extract the data from. Then, I need to do all over again.\u201d 3.2.3 Expectations and concerns about AI and computer support. Participants anticipated that AI systems could automatically extract relevant data from literature based on their requests (7/12), and organize it into tables (9/12). They desired quick data summaries and standardization to (6/12) facilitate synthesis. Additionally, they wanted support for the categorization of papers based on userdefined criteria (4/12) and enabling efficient review and editing in batches (4/12). Besides, participants expected that the computer support should be easy to learn and flexibly adapt to their data needs. Many participants stated that the existing tools like Covidence and Revman were somewhat complex, especially for new users who may struggle to understand their functionalities and interface interactions. Due to the intricate nature of scientific research studies, participants shared concerns about the accuracy and reliability of AI-generated results. They worried that AI lacks sufficient domain knowledge, and may generate results based on the wrong tables/text/figures. P12 demanded that AI systems should highlight uncertain and missing information. Many participants requested validation of AI results. 3.3 Design Goals Given the current practice and challenges of data extraction and synthesis from literature and the expectations and concerns about AI support, we distilled the following system design goals (DGs). DG1. Automated data extraction and structuring adapted to users\u2019 needs. DG2. Data summarization and standardization. DG3. Scalable and efficient review, editing, and refinement of data. 3.1 Flexible grouping and separation of papers based on userdefined criteria. 3.2 Awareness and validation of AI accuracy. 3.3 Maintain connections between extracted data and its origins in literature. 
3.4 Efficient batch editing and refinement. DG4. Familiar and straightforward designs and interactions. 4 SYSTEM In this section, we introduce the design and implementation of SciDaSynth. First, we provide an overview of the system workflow. Then, we describe the technical implementation for data extraction and structuring. Afterward, we elaborate on the designs and interactions of the interface. Finally, we introduce a usage scenario to walk through the system. 4.1 System Workflow After uploading the PDF files of research literature, users can interact with SciDaSynth through natural language questions (e.g., "What are the task and accuracy of different LMs?") in the chat interface. The system then processes the question and presents the user with a text summary and a structured data table (DG2). This natural language interaction directly addresses the data needs without requiring tedious interface interactions (e.g., drag and drop) (DG1, DG4). The data table provided by SciDaSynth includes specific dimensions related to the user's question, such as "Model", "Task", and "Accuracy", along with corresponding values extracted from the literature. If there are missing values or information with low relevance scores, these are highlighted in the table. This feature directs the user's attention to areas that may need further exploration or validation by referring back to the original data sources in the literature (DG3.2). In cases where the system is found to use incorrect source information to generate the results, the user can easily access the original paper, review the extracted paper tables and figures, and make necessary corrections directly in the data table (DG3.3). To assist the user in gaining an overview of inconsistencies and variations in data, the system supports flexible grouping of papers in scatter plots based on the user's specified dimensions (DG3.1). The user can then select groups of interest and perform batch editing of dimension values (DG3.4), for instance, unifying different expressions of the same entity. Once satisfied with the accuracy and completeness of the data table, the user can add it to the database, where it is automatically integrated with existing data. The user can then pose additional questions to the system and repeat the process for further data extraction and synthesis. Finally, when the user has completed their research, they can export the entire database in CSV format for further analysis or reporting. 4.2 Data Extraction and Structuring We leverage LLMs to extract and structure data from scientific literature based on user questions ('gpt-4-turbo' is used for data table generation given the task complexity, 'gpt-3.5-turbo' for data structure generation and summarization, and OpenAI's 'text-embedding-3-small' embedding for vectorization). To mitigate hallucination issues and facilitate user validation of LLM-generated answers, we adopt the retrieval-augmented generation (RAG) framework by grounding LLMs in the relevant information in the papers (as shown in Figure 1). The framework includes building the vector database for the PDF collection, generating data dimensions from the question, and producing the answer (i.e., the data table and data summary) based on these dimensions and relevant document snippets retrieved from the vector database.
The process begins with parsing the paper PDF collection into tables, text snippets, and images using a state-of-the-art toolkit for processing scientific papers [33]. Afterward, they are transformed into vectors (for figures, we use GPT4-V to produce text descriptions and convert the text into vectors). For each table, text snippet, and image, we create a text summary using LLMs and encode both the summary and the original data as vectors for computation and retrieval. The text summary helps consolidate the verbose original content, improving scalability and reducing noise for retrieval. Given a user's question, we encode it as a vector and use vector-based similarity search to retrieve relevant information from the paper collection. This search finds the original tables, text, or images, indexed by their text summaries, that are related to the question. Meanwhile, we prompt LLMs to infer data dimensions by generating dimension names and types based on the question. Finally, the retrieved original document snippets and generated data dimensions are fused in a prompt to guide LLMs in generating the data table and data summary. This approach ensures that the extracted data is structured and relevant to the user's query, while also allowing for easy validation and refinement. Figure 1: System workflow of SciDaSynth: (1) Data extraction and structuring through an LLM-based retrieval-augmented generation (RAG) framework. (2) Data validation and refinement through iterative checking and correction of results, with visual highlights and easy access to their original sources, and through resolving data inconsistencies by flexible grouping based on data dimensions of interest. (3) Database update by integrating the current data table.
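The following Python sketch illustrates one way this retrieval-augmented table-generation loop could be wired together. The model names follow the choices stated above; the prompts, helper functions, and the top-k retrieval setting are illustrative assumptions rather than the authors' implementation.

```python
import json
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Embed a list of strings with OpenAI's text-embedding-3-small."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def summarize(chunk):
    """Condense one parsed table/figure/text chunk before indexing it."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Summarize this paper snippet in two sentences:\n{chunk}"}])
    return resp.choices[0].message.content

def infer_dimensions(question):
    """Ask the LLM which table columns (names and types) the question implies."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_format={"type": "json_object"},
        messages=[{"role": "user",
                   "content": 'Return JSON {"dimensions": [{"name": ..., "type": ...}]} '
                              f"listing the columns needed to answer: {question}"}])
    return json.loads(resp.choices[0].message.content)["dimensions"]

def generate_table(question, chunks, top_k=8):
    """Retrieve the chunks most similar to the question and fuse them with the
    inferred dimensions into a single table-generation prompt."""
    summaries = [summarize(c) for c in chunks]
    index = embed(summaries)                       # one vector per chunk summary
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    retrieved = [chunks[i] for i in np.argsort(sims)[::-1][:top_k]]
    dims = infer_dimensions(question)
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        response_format={"type": "json_object"},
        messages=[{"role": "user",
                   "content": f"Question: {question}\nColumns: {dims}\nContext:\n"
                              + "\n---\n".join(retrieved)
                              + '\nReturn a JSON object {"rows": [...]}; use "Empty" for '
                                "values that cannot be determined from the context."}])
    return json.loads(resp.choices[0].message.content)
```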
4.3 User Interface Building upon the technical framework for extracting and structuring data from scientific literature, the user interface enables the automatic generation of data tables via question-answering (Figure 2, DG1, DG2). Based on the data tables, users can perform iterative data validation and refinement by pinpointing and correcting error-prone data records and by resolving data inconsistencies via flexible data grouping regarding specific dimensions. Finally, users can add the quality-ready data tables into the database. Figure 2: User interface of SciDaSynth. Users can use the user panel (A) to upload PDF files of scientific literature (A1) and pose a question about the PDFs in the "Query" tab (A2). The system then provides a text answer in the chat and presents a structured data table in the "Current Result" tab (C). Within the data table, missing values and records with low relevance scores are highlighted. Users can explore the information used by the LLM from the PDF collection to generate each record in the data table. In addition, users can access the original PDF, system-parsed tables, figures, and meta-information. Additionally, users can examine data variations in scatter plots by selecting dimensions of interest at the top (B). Here, we will introduce the system designs and interactions in detail. 4.3.1 Paper exploration and question answering. After uploading the PDF collection of scientific literature (Figure 2A1), the paper meta-information is organized as a data table in the database (Figure 2C). Then, users can get familiar with the content of the paper collection in the scatter plot, where each paper is encoded as a vector based on its title and abstract (all vectorization in this section uses OpenAI's 'text-embedding-3-small' embedding) and projected onto the 2D plane using t-SNE. Papers that share similar content form a visual cluster. Users can lasso a paper cluster and right-click in the scatter plot to trigger a context menu with the option of requesting a summary of the abstracts. In addition, users can click individual dots and examine the corresponding paper PDFs in the tab of the right panel, as well as the parsed tables, figures, or meta-information, by clicking the icons in the sidebar. Users can click the "Query" tab (Figure 2A2) to start asking questions about the papers in the chat interface. The system responds to users' questions with a text summary and presents a structured data table in the "Current Result" tab (Figure 2A3). 4.3.2 Multi-level and multi-faceted data summary. Dimension-guided data exploration. Users can gain an overview of data variations in the scatter plot by selecting dimensions of interest at the header (Figure 2B). The system then performs a flexible grouping of papers based on dimension values. Specifically, each record (row) of the selected dimensions in the table is transformed into a text description ("dimension_name: value"), encoded as a vector, and projected on the 2D plane as a dot (numerical values are converted into categories from "low" to "high"). The variations and similarities of dimension values for different rows are reflected in the distribution of clusters of dots obtained with KMeans clustering. To concretize the data variations, each cluster is associated with a text label generated by LLMs' summarization of the text descriptions for the dimensions. Thereafter, users can lasso-select a cluster of interest to see the detailed values in the data table. They can resolve inconsistencies among these values by assigning them the same label in the table. For example, after asking questions about crops and nutrients in the chat interface, users may select "crops" as the target dimension to explore the distribution of different types of crops in the papers (Figure 2B). By examining the colored clusters and their labels, users understand that there are different clusters of sweet potatoes and that the orange cluster contains mixed crops. Besides, users can select multiple dimensions at once, such as "nutrient_name" and "nutrient_value" (Figure 3), to explore different pairings of nutrients and their values (contained in crops). Afterward, users can select one cluster to look at the detailed values in the data table. In the table, the user may observe varied phrasings of measurement units and decide to unify them as "μg/g".
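A minimal sketch of the dimension-guided grouping just described is given below: rows of the selected dimensions are verbalized as "dimension_name: value" strings, embedded, clustered with KMeans, projected to 2D, and given short LLM-written labels. The use of t-SNE follows the paper-level scatter plot in 4.3.1; the function names, prompt wording, and cluster count are assumptions, and the conversion of numerical values into "low" to "high" categories is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def group_records(records, dimensions, n_clusters=5):
    """records: list of dicts (one per table row); dimensions: selected column names."""
    # Verbalize each row over the selected dimensions, e.g. "crops: sweet potato; ..."
    descriptions = ["; ".join(f"{d}: {r.get(d, 'Empty')}" for d in dimensions)
                    for r in records]
    vectors = embed(descriptions)
    n_clusters = min(n_clusters, len(records))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    coords = TSNE(n_components=2, perplexity=min(30, len(records) - 1)).fit_transform(vectors)
    # Ask an LLM for a short label that concretizes each cluster's content.
    names = {}
    for k in set(labels):
        members = [descriptions[i] for i in range(len(records)) if labels[i] == k]
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": "Give a label of at most six words summarizing these rows:\n"
                                  + "\n".join(members[:20])}])
        names[k] = resp.choices[0].message.content.strip()
    return coords, labels, names   # 2D positions, cluster ids, cluster text labels
```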
Highlight error-prone data. To make users aware of, and able to validate, potentially problematic results (DG3.2), the system highlights table cells with missing information and table rows (records) that are not very relevant to the current question. Missing information is detected by prompting LLMs to output "Empty" when they cannot determine a value from the retrieved paper content. The relevance of the results is measured by the semantic similarity (cosine similarity) of the vectors between data dimensions and their corresponding records. Users can sort the table at the column header to check the records with low relevance scores (flagged with a red icon). Figure 3: Users can select multiple dimensions to explore their distribution in the scatter plot. Then, they can lasso a cluster (i.e., "Very low carotenoid content") to inspect the detailed values in the data table. In the data table, users can unify the expressions of the same measurement unit (μg/g). Trace back data origins in literature. To check the quality of the generated data, users can right-click individual rows to open the "context" pop-up that shows the original sources used by LLMs for the generation. Those original sources are the relevant context information retrieved from the vector database of tables, texts, and images in the paper collection. Moreover, the context information that matches the generated data is highlighted to help users quickly locate the important evidence that supports the generation. If the system is found to rely on incorrect evidence to generate data, users can right-click the corresponding rows to open the paper, table, or figure in a new tab for further inspection. For example, a user may want to inspect and fix a highlighted "Empty" nutrient value for "Total carotenoids". The user can check the system-parsed tables in the "Table" tab, where Table 3 is found to be relevant to the target nutrient value but appears to have been wrongly parsed by the system. Thus, the user follows the mentions of this table (on Page 4 of the paper) to trace back to the original table in the paper PDF. Afterward, the user finds the correct value ("250.3" for "Total carotenoids") and fixes the missing value in the resulting data table. Figure 4: Identify missing information in the data table by examining the system-parsed tables and then referring to the original table in the paper PDF. 5 EVALUATION DESIGN Given the paper collection, we aimed to evaluate how SciDaSynth impacted data extraction quality and efficiency and what the perceived benefits and limitations of working with SciDaSynth were. We conducted a user study with 12 researchers, who were tasked with building data tables given a set of predefined dimensions using a pool of scientific publications in PDF format. We adopted a within-subjects design, wherein participants used both SciDaSynth and the baseline system to extract data from a pool of paper PDFs. The paper collections were selected from a recent systematic review published in Nature Food [21], focusing on micronutrient retention in biofortified crops through various processing methods.
The supplementary data table from this review served as the ground truth for the extracted data tables. We measured task completion time and accuracy and collected participants\u2019 feedback on their experience with both systems. This approach allowed us to assess the usability and effectiveness of SciDaSynth in supporting the data extraction process from the scientific literature. Our research questions are: \u2022 Effectiveness of data extraction: * Data quality: How does SciDaSynth impact the quality of the final synthesized data table from scientific literature collection? * Efficiency: How does SciDaSynth impact the efficiency of data extraction? \u2022 User perceptions: What are the perceived benefits and limitations of system designs and workflows? \u2022 Promising use cases: How do researchers envision using SciDaSynth for data extraction in their studies? 5.1 Experiment Settings 5.1.1 Dataset & Processing. The datasets for this study were derived from the included studies in the systematic review published in Nature Food. We downloaded the corresponding research papers in PDF format. These papers examined the retention of micronutrients (e.g., provitamin A, iron, and zinc) in biofortified crops (e.g., maize, orange sweet potato, cassava, pearl millet, rice, beans, and wheat) after post-harvest processing (e.g., storage and fermentation). The supplementary data table published along with the systematic review includes all the extracted data from individual studies in CSV format. This data table served as the ground truth of our data extraction and synthesis study. We pre-processed the papers by extracting tables, figures, and text snippets from the PDFs and converting them into a vector database for data extraction and structuring, as described in subsection 4.2. For the user study, we \fWang et al. created two datasets, Dataset I and Dataset II, each containing 10 papers sampled from the studies included in the systematic review. 5.1.2 Participants. We recruited 12 researchers (P1-P12; eight females, four males; ages: four aged 18-24, seven aged 25-34, one aged 35-44) for the study. Their backgrounds were in nutritional sciences, including food science and technology, human nutrition, medical and health sciences, and life sciences. All participants (five postdoctoral fellows and seven PhD students) were actively engaged in research and familiar with the data dimensions from the systematic review, either through previous papers (10/12) or their own research (2/12). Most had extensive experience in extracting and analyzing both qualitative and quantitative data from literature and had led or been involved in at least one type of review (e.g., intervention, diagnostic test accuracy, and narrative). All participants had the need for data extraction and synthesis for their research studies. Their expertise and usage of computer technology varied, with five participants identifying as expert users who regularly coded and programmed and seven as intermediate users who coded as needed. 5.2 Baseline Implementation Participant-facing baseline without data extraction and structuring. This baseline, Baseline A, was a simplified version of SciDaSynth designed to replicate current practices in data extraction and synthesis. It provided users with a PDF viewer that supported highlighting, annotation, and searching, allowing them to explore individual PDF content. Additionally, it automatically parsed paper metadata, tables, and figures for user reference. 
Unlike SciDaSynth, Baseline A did not offer question-answering (QA)-based interactions for generating data tables or support dimension-guided data exploration with scatter plots. This baseline aimed to emulate the manual process of reviewing individual paper PDFs to distill and organize information into table format. It also offered an integrated workspace and computational parsing for data extraction and content review while maintaining connections between data and source PDFs with a side-by-side view. Automated GPT baseline. We developed Baseline B, a fully automated system based on GPT-3.5/4, to generate data tables according to specified data dimensions. This baseline was intended to evaluate the accuracy of our technical framework for automatic data table generation. The implementation followed the data extraction and structuring approach of SciDaSynth (described in subsection 4.2). We used web-based ChatGPT to generate two data questions based on the dimensions specified for the data extraction tasks. These questions were then input into Baseline B to generate two data tables for each dataset, resulting in a total of four data points for comparison with other systems. 5.3 Tasks Participants were instructed to use SciDaSynth and Baseline A to extract data from two paper collections, Dataset I and Dataset II, each containing 10 papers. These collections were sampled from a systematic review on micronutrients in crops (introduced in subsubsection 5.1.1). Due to the complexity of the data extraction tasks, participants were requested to extract four data dimensions from papers, including \u201ccrops (types)\u201d, \u201cmicronutrients (being retained)\u201d, \u201cabsolute nutrient raw value\u201d, and \u201craw value measurement units\u201d. These dimensions covered both qualitative and quantitative measurements. They needed to organize the data into tables and download them from the systems. The data extraction scenario was presented as \u201cworking with your colleagues to conduct a systematic review.\u201d The order of the systems and the datasets was counterbalanced, resulting in 4 (=2 x 2) conditions. 5.4 Procedure We conducted the experiment remotely via Zoom, with both the Baseline A and SciDaSynth deployed on a cloud server for participants\u2019 access. The produce of the study: pre-study setup; interface tutorial for the first system; main task for the first system followed by a survey; alternate and repeat for the second system; think-aloud exploration using SciDaSynth; and interview. First, we collected the participants\u2019 consent forms and background information, including demographics and prior research experience regarding data extraction and the nutrition domain. Then, participants were briefed about the study information. The pre-study survey and the introduction took about 10 minutes. Then, depending on the condition assigned to participants for each task, the interviewer demonstrated the PDF uploading and main features and interactions of SciDaSynth or Baseline A using a new collection of papers from the systematic review step-by-step via screen sharing. The tutorial took about 10 minutes for each system. Following that, participants used the assigned system to conduct the main task based on the assigned Dataset A or B and then answered a post-study survey about the system usage experience. After finishing both tasks, they were asked to freely explore SciDaSynth with interested data questions using both Datasets A and B for about 15 minutes. 
During the exploration, participants shared their screens and thought aloud. Finally, participants were interviewed to gather feedback on the system designs, workflow, and potential system use cases. Each participant spent about two hours in total for the study and was compensated with $30 USD. 5.5 Measurements Effectiveness of data extraction was assessed by evaluating data quality and task completion time. For data quality, we compared the data tables generated by participants using SciDaSynth, Baseline A, and the automated GPT baseline (Baseline B) against the original data tables from the systematic review. The lead author of the review (also a co-author of this paper) scored the data tables based on accuracy and completeness on a 3-point scale. ◦ 0 (Not Correct): Errors were present in the corresponding records for specific dimensions. ◦ 1 (Partially Correct): Records were generally correct but incomplete, missing some information for certain dimensions. ◦ 2 (Correct): Records in the data table were fully aligned with the original records in the review's data table. For SciDaSynth and Baseline A, we calculated 12 scores ranging from 0 to 20, corresponding to the number of papers in each dataset. For the automated Baseline B, we had 4 (=2 x 2) scores in total for both datasets. Then, the paired Student's t-test was performed to compare the average scores of SciDaSynth and Baseline A, and the Mann-Whitney U test was performed for comparisons involving Baseline B [26]. Figure 5: User study questionnaire results for both Baseline A and SciDaSynth. The first row of items compared the ratings regarding the effectiveness in streamlining the data extraction workflow, gaining an overall understanding of the paper collection, awareness of data inconsistencies, question understanding, perceived generated data quality, data locating, organization, validation, refinement, and confidence in the final data table. The second row compared the questionnaire items adapted from the NASA Task Load Index and the technology acceptance model. All ratings were on a 7-point scale. For the ratings "Mental", "Physical", "Temporal", "Frustration", and "Easy to learn", lower ratings are better; for all other ratings, higher is better. **: p < 0.01, *: p < 0.05. For task efficiency, we measured task completion time from the moment the PDFs were uploaded to the system to the moment the final data table was downloaded. The task completion times for SciDaSynth and Baseline A were compared using paired Student's t-tests. Users' perceptions. We measured participants' perceptions of the systems for data extraction via post-task questionnaires. For the perceived workload of using the systems, we adopted the validated 6-item NASA Task Load Index on a 7-point scale. For the systems' compatibility and adaptability with participants' existing data extraction workflow, we adapted the technology acceptance model (5 items) on a 7-point scale [26, 50]. Furthermore, perceived utility around paper overview, workflow simplification, data locating, organization, validation, awareness of data inconsistencies, editing and refinement, and confidence was measured via the questionnaire for each system on a 7-point scale. All questionnaire data was analyzed using the non-parametric Wilcoxon signed-rank test. We also collected and summarized the participants' feedback during the post-study interviews on system designs and workflows and on promising use cases of SciDaSynth for data extraction in their research work.
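As a concrete illustration of the quantitative comparisons described in this subsection, the sketch below shows how the three tests could be run with SciPy; the arrays are random placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-participant data-quality scores (0-20) and questionnaire ratings (1-7);
# the real study used 12 participants per condition and 4 runs of the automated baseline.
scida_scores = rng.integers(12, 21, size=12)
base_a_scores = rng.integers(12, 21, size=12)
base_b_scores = rng.integers(10, 17, size=4)
scida_ratings = rng.integers(3, 8, size=12)
base_a_ratings = rng.integers(2, 7, size=12)

# Paired Student's t-test: SciDaSynth vs. the human baseline (Baseline A)
t, p_quality = stats.ttest_rel(scida_scores, base_a_scores)

# Two-sided Mann-Whitney U tests for comparisons involving the automated Baseline B
u1, p_b_vs_a = stats.mannwhitneyu(base_b_scores, base_a_scores, alternative="two-sided")
u2, p_b_vs_s = stats.mannwhitneyu(base_b_scores, scida_scores, alternative="two-sided")

# Wilcoxon signed-rank test for the paired 7-point questionnaire items
w, p_rating = stats.wilcoxon(scida_ratings, base_a_ratings)
print(p_quality, p_b_vs_a, p_b_vs_s, p_rating)
```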
6 RESULTS AND ANALYSES 6.1 Effectiveness of Data Extraction Figure 6: The data quality of using SciDaSynth, Baseline A (human baseline), and Baseline B (automated method). There was no significant difference in data quality between SciDaSynth and Baseline A. However, there were significant differences between Baseline B and the other two systems. *: p < 0.05. 6.1.1 Data quality. Figure 6 shows the data quality results. Using SciDaSynth, participants were able to generate good-quality data tables (M=16.73, SD=2.83), i.e., 83.65% (=16.73/20) accuracy, comparable to Baseline A (M=16.18, SD=1.60), which mostly relied on manual data extraction from papers. There was no significant difference in accuracy scores between the two systems, as rated by the expert (i.e., the lead author of the systematic review from which the ground truths were derived): p=0.56 using a paired Student's t-test. The automated GPT baseline (Baseline B) achieved lower scores (M=13.00, SD=2.16), with 65.00% (=13.00/20) accuracy, which was less than both human-involved systems, and we observed significant differences between Baseline B and the two other systems (vs. Baseline A: U=39.5, p=0.026; vs. SciDaSynth: U=38.5, p=0.040) with two-sided Mann-Whitney tests. Note that the rater was blind to the conditions under which each data record was generated. Figure 7: The task completion time of using SciDaSynth and Baseline A. The pairwise comparison between Baseline A and SciDaSynth was significant. ***: p < 0.001. 6.1.2 Efficiency. On average, participants using SciDaSynth spent 31.49 (SD=12.91) minutes finishing the task, while participants using Baseline A spent 43.60 (SD=15.36) minutes, which was nearly 40% longer. The difference in task completion time between SciDaSynth and Baseline A was significant (p<0.001 with a paired Student's t-test). Given the comparable and good data quality scores of both systems, SciDaSynth demonstrated its efficiency in helping users produce quality data in significantly less time. 6.1.3 Case analyses of the GPT baseline built upon our technical framework. The overall accuracy of the automated Baseline B was 65.00% (=13.00/20). We further investigated the failure cases of the GPT baseline, where the accuracy score for a paper was 0 or 1, across the two datasets with two repetitions each (i.e., a total score of 80 = 2 repetitions x 2 datasets x 20 total score per dataset) and identified three major reasons for these failures. First, incomplete understanding of the query in the specific paper context (13/80). When asked about raw nutrient values in crops, Baseline B failed to contextualize the meaning of "raw" in individual paper contexts.
For example, some papers might use words like \u201cunprocessed\u201d and \u201cunwashed\u201d or imply it in the tables with the processing start time equal to zero, which the system failed to recognize. Also, there were cases where one paper could have multiple crop types, but Baseline B extracted only one. Second, incorrect table and figure parsing (9/80) Many failure cases stemmed from the retrieved tables and figures. Some tables, which had very complex designs and structures (e.g., hierarchical data dimensions), were parsed wrongly. And some information in the figures was overlooked. The quality of the retrieved information impacted the LLMs\u2019 reasoning, resulting in outputting \u201cempty\u201d cell values for specific dimensions. Third, missing associations between different parts of papers (6/80). In some instances, data in tables were incomplete and required interpretation with information from other sections. For example, when asking for what crops are in a paper, the system retrieved and reported all crop variety numbers from one table instead of crop names. However, the corresponding crop names were recorded in method sections, demonstrating the mappings between crop names and their variety numbers. 6.2 User Perceptions towards SciDaSynth Quantitative analysis of post-task survey results and qualitative analysis of interviews revealed various types of benefits gained from using SciDaSynth, such as streamlining data extraction workflow, summarizing data characteristics embedded in paper collections, and facilitating data locating, validation, and editing. In addition, we identified several system limitations. Participants shared some opinions about AI support for data extraction, provided some suggestions, and pointed out promising use cases in their research. 6.2.1 Streamline the data extraction workflow. Overall, participants felt that SciDaSynth simplified the data extraction flow in several ways. They agreed that SciDaSynth greatly saved their time and effort in scanning and extracting relevant information by presenting them with a compact structured data table to start with. P12 said, \u201cThe system completes a data table with multiple records and dimensions through just one query. This is so labor-saving and efficient.\u201d P8 commented, \u201cThe query helps me to find all of the key information, and I only need to verify them. That improves my efficiency a lot. \u201d This sentiment was also reflected in the significant difference in the questionnaire item: \u201ceffectiveness of simplifying data extraction workflow\u201d between SciDaSynth (M=5.83, SD=0.58) and Baseline A (M=4.33, SD=1.50): p=0.0127. Participants agreed that the system interactions were well-designed and that different system components were seamlessly glued together for data extraction. They appreciated the ability to filter data tables using both the summary plot (i.e., scatter plot) and table functions and the easy access to original paper content for specific data. Moreover, participants favored the question-answering interaction of the systems, which was deemed as \u201cnatural\u201d (P9), \u201cuser-friendly\u201d(P4), and \u201ccomfortable\u201d (P12) way for extracting data. As shown in Figure 5, participants felt that SciDaSynth could understand their data questions (M=6.00, SD=0.74) They generally did not see any issues with the system\u2019s ability to identify a set of interested data columns from their questions. 
Participants also agreed that SciDaSynth provided an acceptable quality of data tables (M=5.50, SD=0.80, a score over 5). 6.2.2 A global understanding of paper collections. Participants reported that SciDaSynth significantly enhanced their overall understanding of the paper collection compared to Baseline A (M=5.67, SD=0.49 vs. M=3.75, SD=1.06, p=0.005; all user questionnaire items were analyzed using Wilcoxon's signed-rank test). Specifically, the scatter plot feature was highlighted as particularly useful for developing a sense of and comparing different paper topics. P8 said, "In a real work scenario, I need to have a basic overview of each paper, or about this set of papers, before I go to seek pieces of information and extract data into the Excel file. This overview (scatter plot) just fits my purpose." P4 appreciated, "It helped to classify the literature so that I can dig deeper into details with less effort." P6 liked the flexibility to select a group of papers of interest and get a short summary of them. Moreover, SciDaSynth was found to facilitate the discovery of data inconsistencies and variations across the literature (M=5.67, SD=0.83) compared with Baseline A (M=4.25, SD=1.36): p=0.019. Many participants noted that the dimension-guided exploration in the scatter plot was effective for capturing similarities and differences between papers, revealing data characteristics from different aspects, and conducting semantic filtering of data, especially in large paper collections (P4). For example, P3 stated, "Those colored clusters with labels were really meaningful, and they helped me understand and reason the data semantics." P7 praised, "I like how I can choose different dimensions to group different papers. I can really see the trends and significance of the topics from those groups." P1 shared, "Sometimes, I may be interested in multiple dimensions, like what crops contain beta carotene, what are their values for different processing methods. Previously, I may not easily get the answers to these questions. The scatter plot just nicely helps me label the information for me." 6.2.3 Data locating, validation, and refinement. As shown in Figure 5, SciDaSynth was rated helpful for locating (M=5.50, SD=0.67), organizing (M=5.92, SD=0.79), validating (M=5.17, SD=0.83), and editing (M=5.75, SD=0.45) data from literature, with all scores over five. There were significant differences in these dimensions compared to Baseline A (locating: M=4.08, SD=1.44, p=0.031; organization: M=4.50, SD=1.38, p=0.007; validation: M=4.08, SD=1.44, p=0.046; editing: M=5.08, SD=0.90, p=0.021). In particular, participants found that SciDaSynth allowed them to quickly navigate to the relevant parts of the papers by establishing and maintaining connections between the data and the paper PDFs. This was also helpful for validating the auto-generated data. P1 shared, "I could easily access pdf, tables, and figures (using SciDaSynth). The option to look at context is also helpful to verify the data.
For example, I easily cross-checked the data by clicking on open context, which saved time from skimming the whole paper." P7 added, "It helped me to direct my focus to where the data is available in the paper." P3 said, "The listed keywords (highlighted in the relevant contexts by clicking the rows) can help me locate contextual information to determine whether the information is correct." Besides, many participants praised the batch editing feature. P5 mentioned, "I find several clusters pointing at the same crops. ... after locating them in the table, it was super convenient for me to edit multiple rows of data in the table at once." 6.2.4 Reduced workload for data extraction. Participants generally felt that SciDaSynth provided support and reduced their workload for data extraction from the literature. Specifically, the results from the NASA-TLX questionnaire items (shown in Figure 5) demonstrated that SciDaSynth lowered participants' mental workload (M=3.17, SD=1.03 vs. M=4.75, SD=0.97) and physical workload (M=2.92, SD=1.00 vs. M=4.25, SD=1.22) compared to Baseline A (mental: p=0.015, physical: p=0.034). However, there were no significant differences between SciDaSynth and Baseline A in perceived temporal demand (M=3.08, SD=1.00 vs. M=2.83, SD=1.03, p=0.39), effort (M=3.33, SD=0.89 vs. M=3.58, SD=1.08, p=0.43), and frustration (M=2.33, SD=1.07 vs. M=2.42, SD=1.08, p=0.81). 6.2.5 Compatibility, learnability, and adaptability. Participants thought SciDaSynth was well aligned with their data extraction workflow. They perceived it as compatible with their existing workflow and fitting their expected ways of data extraction, with significant differences between SciDaSynth and Baseline A in terms of compatibility (p=0.027) and fit (p=0.005). Although SciDaSynth had additional visualizations and interactions compared to the baseline, participants found it fairly easy to learn the system. P12 said, "I think the system is easy to learn and use, such as the query part, interface, and queried results." P6 added, "It is easy to add a query and start a run," and several participants needed minimal time to understand all the components (P1, P11, P12). SciDaSynth received a slightly higher score on the ease-of-learning scale (1: easy to learn, 7: hard to learn) compared to Baseline A, but the difference was not significant (p=0.21). Participants mentioned that some interactions took time to become familiar with, such as "the operation on cluster results (enlarge, move, clear filtering)" (P10). P8 mentioned, "I didn't feel which component is difficult to learn. Every part is easy to learn, but some components may not align with my habits, so I probably make error clicks." Participants showed a stronger interest in using SciDaSynth (M=5.75, SD=0.97) than Baseline A (M=3.92, SD=1.31) in their future work (p=0.002). 6.2.6 Participants remained cautious of AI-generated results. Participants were generally confident in their data tables built with SciDaSynth (M=5.75, SD=0.45) and with Baseline A (M=5.08, SD=0.90). There was no significant difference in confidence between the two systems (p=0.33). In the interviews, they mentioned that they were reserved about AI-generated results and had their own preferences about using and trusting them.
Generally, for the usage, they regarded the generated results as a starting point (\u201cgeneral guideline\u201d (P1)) to gain an overview of data distributions and get some sense of what data might look like. They felt more comfortable letting the system automate qualitative data analyses than quantitative ones, especially for \u201cstraightforward\u201d (P1, P3) and \u201cexact\u201d data (P8). However, when it came to a deep understanding of the paper content that requires specialized domain knowledge and rigor, participants were skeptical about generated results, regardless of their performance. They preferred to drill down to specific details in papers on their own. P12 said, \u201cWhen I need to find the similarity or summary across multiple papers, I prefer to use this system. But for a very limited number of papers, I need to get detailed and precise information; I don\u2019t depend on LLM.\u201d P8 added, \u201cI would say that I prefer not to rely on the system when collecting data from the results. These can be misinterpreted sometimes.\u201d She also noted that \u201cIn the scenario that if the information I want to extract needs more understanding of the whole paper or knowing of the area, I would like read by myself. Another scenario is that if I am not familiar with the area, I will read by myself first. After I get familiar with the paper type and research paradigms, I will use this system.\u201d Participants also expressed a fear of missing relevant information, which prompted them to cross-check the system-generated results. \fWang et al. P6 mentioned, \u201cAt some places, not all relevant data were extracted. For example, in one paper, there were multiple crop genotypes with biofortification, but the data was extracted for one. If that\u2019s the case for one paper, then I will always go back to the paper to cross-check if something was missed.\u201d 6.2.7 System suggestions. Participants also provided some valuable suggestions for system improvement. P8 advised that besides scientific papers written in English, the system could support more languages. P5 suggested, \u201cI tend to make notes and comments throughout the extraction, and it may be helpful to have a field dedicated to it.\u201d P10 said, \u201cI don\u2019t like downloading papers one by one, may let system loads papers from websites.\u201d P3 wanted a customizable interface where positions and sizes of views can be flexibly adjusted. Other suggestions mainly involve enriching table operations (e.g., change column orders (P6), ), tracking data provenance and reversing back the changes (P1), 6.3 Promising Use Cases During post-study interviews, participants mentioned several situations in their research work that SciDaSynth would be helpful for their research studies. 6.3.1 To screen papers and preliminary categorization of papers. Many participants thought SciDaSynth would be helpful in selecting and grouping papers more efficiently and systematically. P7 said, \u201cWhen I search papers, I need to go to different websites like PubMed and Google Scholar and customize the search functions using some regular expressions, it would be nice to use this system to simply specify my requirements in natural language questions. Paper screening is usually tedious and time-consuming. I can imagine this tool (SciDaSynth) can be very useful to screen papers really fast and find relevant ones according to my interests. 
The scatter plot can help me assess different papers and topics, like what topics are most studied and which clusters (of papers) are more relevant or irrelevant to my study. It is also nice to see I can get a quick summary of those papers.\u201d P5 commented \u201cI would love to use it for preliminary grouping and labeling of papers. This would help me get a sense of papers from my unfamiliar domains quickly and help me develop ideas about paper taxonomy for review.\u201d 6.3.2 To validate and monitor the database construction process. Participants also mentioned that SciDaSynth could help analyze the quality of included studies. P1 said, \u201cWhen I extract data from my paper collections, I usually delve into individual papers and do not have a big picture of what my data looks like. Sometimes, after extraction, I find that I may be inconsistently labeling specific topics. I think the data grouping supported in the scatter plot could keep me aware of my extracted data distribution throughout the process and alert me to potential biases or errors. \u201d P2 also liked about the idea of using SciDaSynth to track the data construction process on demand. P8 emphasized, \u201cThe system could identify my own inconsistent performance in data extraction and help me refine my extraction criteria.\u201d 6.3.3 To interpret and summarize results. Participants also shared an interest in using SciDaSynth to interpret and summarize their results after data extraction. P9 said, \u201cI am willing to use it to qualitatively summarize and explain relationships between different clusters of papers, especially for cases where narrative syntheses are needed.\u201d P10 added that sometimes data from different studies are too heterogeneous in terms of methods or outcomes to be combined together statistically. SciDaSynth could help categorize studies qualitatively and summarize the trends and findings with each category, highlighting any consistent and notable patterns. 6.3.4 To communicate and share findings with the community. Some participants felt excited about using SciDaSynth as an interactive data portal to publicize and share their findings with other researchers. P4 and P7 thought that the natural language interactions and interactive data visualizations were intuitive and helpful for people to easily access, explore, and engage with others\u2019 research work. P4 said, \u201cResearch findings were usually buried in different parts of papers; reading and digesting papers to extract them is exhausting and tedious. The data table (generated by the system) is a very nice way to organize and present them for people understand it. And the visuals and interactions (of the system) just make the data exploration so much fun and engaging.\u201d 7 DISCUSSION 7.1 Summary In this work, we built a computational pipeline based on LLMs to automatically generate structured data tables according to users\u2019 data questions for a paper collection. Building upon this, we designed and implemented an interactive system that supports data extraction and structuring from literature in a systematic and efficient manner. The user study with 12 researchers showed that SciDaSynth could help participants produce data tables with decent quality in a much shorter time compared to the human baseline and outperformed the fully automated baseline with higher data quality. 
Moreover, participants generally perceived that SciDaSynth effectively streamlined their data extraction process via natural question-answering interactions and provided a better overview of data characteristics and variations across the literature through flexible grouping in the scatter plot. In addition, with the auto-generated data tables serving as preliminary results, SciDaSynth facilitated data validation and refinement via easy access to the relevant information in the literature. Overall, the system designs and interactions helped reduce their workload, were compatible with their existing workflow, were easy to learn, and were desired for use in future research. Participants also came up with some use cases of SciDaSynth, such as paper screening, data extraction monitoring, results summary, and results sharing. We also identified several limitations and challenges regarding technical implementations and user experience of using SciDaSynth. The automated technical framework was still far from perfect regarding the generated data quality. The failure cases included incorrect table and figure parsing, missing associations between different parts of papers, and misunderstanding of domain contexts. Meanwhile, participants were cautious of auto-generated results and felt hesitant to use them for situations that require a deep understanding of domain knowledge and rigor. They generally regarded them as preliminary evidence that would need cross-checking with the source literature. In addition, participants reported some challenges regarding navigating between different data contexts, missing highlighting of relevant information, and other usability issues. 7.2 Design Implications 7.2.1 Structured data organization and presentation beyond tables. In this work, we built a technical framework for automatically generating data tables from massive literature according to users\u2019 questions of interest. The structured data table helped externalize and standardize the large scale of unstructured knowledge embedded in the paper collections. According to the user study, the structured data table provided a good basis for a global understanding of paper collections, and interactive visualizations of the data improved awareness of data variations in different dimensions. In the future, systems can consider other data representations beyond the table format for structuring and presenting knowledge. For example, the mind map is a useful diagram that can visually summarize the hierarchy within data, showing relationships between pieces of the whole. It can help users build a conceptual framework and taxonomy for paper collections, identify future research directions, and present research findings by branching out to sub-findings, implications, and recommendations. In addition, knowledge graphs could be useful for presenting and explaining the integration of data from multiple sources. They can also enrich data with semantic information by linking entities to concepts in an ontology, adding layers of meaning and context, and revealing hidden connections between entities. 7.2.2 Reduce context switches and provide in-situ highlighting of information. To assist users in locating, validating, and refining data, SciDaSynth establishes, highlights, and maintains the connections between data and relevant information in the literature. In the user study, participants favored the keyword highlighting in the pop-ups of relevant data contexts for corresponding rows.
And they could easily access the original source PDFs for each data record. Both of these designs helped them validate the data quality. However, some participants pointed out that they needed to switch different tabs to validate data tables with the source PDF content. They also desired the text highlighting in the original paper PDFs. All of these benefits and challenges in data validation emphasize the importance of designs for reducing context switches and in-situ highlighting of information in knowledge extraction tasks. 7.2.3 Provide analytical guidance during information extraction. During the system exploration in the user study, some participants mentioned that they were hesitant about what questions to ask and how they should be formatted when facing paper collections that they might not be very familiar with. The future system should provide adaptive support and guidance for users to navigate the complex information space by suggesting information questions or user interactions for initial start, follow-ups, and clarifications [3, 48]. Those user questions and interaction suggestions could also be learned from users\u2019 feedback and dynamic interactions as the question-answering process progresses. 7.2.4 Promote collaborative effort for knowledge extraction. In this work, we designed and built an interactive system, SciDaSynth, that facilitates users in extracting structured data from scientific literature based on LLM-generated results. The user study showed that SciDaSynth improved the efficiency of data extraction while presenting a comparable accuracy to the human baseline. However, the accuracies of both systems used by individual researchers were only slightly over 80%. There was still significant room for improvement regarding the quality of the data extracted by individuals. This showed that data extraction from literature is a demanding and challenging task. The system designs and workflow can further consider how to promote collaborative effort among individuals to extract and synthesize higher quality and more reliable data. 7.3 Limitations and Future Work We discuss the limitations and future work based on our design and evaluation of SciDaSynth. The technical limitations for future work include: \u2022 Improving domain context understanding. Currently, we use vanilla GPT3.5/4 to build a technical pipeline for data extraction from domain-specific literature. As reflected in the user study, the LLMs may still lack a deep understanding of the specialized domains and may impact users\u2019 usage and trust of the results. Therefore, future work can consider enhancing the domain knowledge and reasoning of LLMs via various approaches, such as model finetuning on domain-related articles and iterative human-in-the-loop feedback. \u2022 Incorporate more quantitative metrics to measure the quality of auto-generated results. We only considered the data relevance and missingness metrics to guide users\u2019 attention for cross-checking potentially low-quality data. However, errors could occur that are not captured by our metrics and may negatively impact the final data quality. In the future, we can develop and integrate more quantitative metrics to provide users with a more comprehensive understanding of LLM performance. The user study evaluation has the following limitations: \u2022 Lack of evaluation with diverse and larger user groups. In this study, we only evaluated our system with 12 researchers who came from nutritional science related backgrounds. 
Inviting more researchers from different disciplines would further enhance the evaluation of SciDaSynth. \u2022 Lack of longitudinal study in real research scenarios. The user study was conducted based on a set of predefined data extraction tasks and paper collections. However, in real research settings, participants may have interests in different data dimensions and paper topics. A longitudinal study of how researchers would use SciDaSynth can further help validate and comprehensively identify the benefits and limitations of SciDaSynth. 8 CONCLUSION In this paper, we designed and developed SciDaSynth, an interactive system for researchers to extract and synthesize data from massive scientific literature in an efficient and systematic way. Particularly, we built an LLM-based retrieval-augmented generation framework to automatically build structured data tables according \fWang et al. to users\u2019 data questions via question-answering interactions. Then, the system provided a suite of visualizations and interactions that guide the multi-faceted exploration of the generated data tables. During the exploration, users can gain a high-level understanding of data variations in different dimensions and quickly locate, validate, and refine data with relevant information in the source papers. Through a within-subjects study with 12 researchers, we demonstrated that SciDaSynth participants could use SciDaSynth to produce high-quality data tables in a shorter time compared to a baseline that mostly relies on manual data extraction from individual papers. And the system designs and workflow were perceived as useful by participants. They also pointed out some promising use cases of SciDaSynth in their research work. We further discussed some design implications and limitations based on the designs and evaluation of SciDaSynth." + }, + { + "url": "http://arxiv.org/abs/2404.15993v1", + "title": "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach", + "abstract": "Large language models (LLMs) are highly capable of many tasks but they can\nsometimes generate unreliable or inaccurate outputs. To tackle this issue, this\npaper studies the problem of uncertainty estimation and calibration for LLMs.\nWe begin by formulating the uncertainty estimation problem for LLMs and then\npropose a supervised approach that takes advantage of the labeled datasets and\nestimates the uncertainty of the LLMs' responses. Based on the formulation, we\nillustrate the difference between the uncertainty estimation for LLMs and that\nfor standard ML models and explain why the hidden activations of the LLMs\ncontain uncertainty information. Our designed approach effectively demonstrates\nthe benefits of utilizing hidden activations for enhanced uncertainty\nestimation across various tasks and shows robust transferability in\nout-of-distribution settings. Moreover, we distinguish the uncertainty\nestimation task from the uncertainty calibration task and show that a better\nuncertainty estimation mode leads to a better calibration performance. 
In\npractice, our method is easy to implement and is adaptable to different levels\nof model transparency including black box, grey box, and white box, each\ndemonstrating strong performance based on the accessibility of the LLM's\ninternal mechanisms.", + "authors": "Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "68T07, 68T50" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach", + "main_content": "Introduction Large language models (LLMs) have marked a significant milestone in the advancement of natural language processing (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022; Bubeck et al., 2023), showcasing remarkable capabilities in understanding and generating human-like text. However, one pressing issue for the LLMs is their propensity to hallucinate (Rawte et al., 2023) and generate misleading or entirely fabricated information that can significantly undermine their trustworthiness and reliability. The task of uncertainty estimation has then emerged to be an important problem, where an uncertainty estimation model can be used to determine the confidence levels of LLMs\u2019 outputs. While the problem of uncertainty estimation and calibration has seen considerable development within the general machine learning and deep learning domains (Abdar et al., 2021; Gawlikowski et al., 2023), we see less development in the domain of LLMs. One of the major challenges is the difference in the format of the output: while machine learning and deep learning typically involve fixed-dimensional outputs, natural language generation (NLG) tasks central to LLM applications require handling variable outputs that carry semantic meanings, and it is unclear whether the uncertainty estimation for NLG should target for (fixed-dimensional) token level or (variable-dimensional) sentence/semantic-level uncertainty. Existing uncertainty estimation approaches for LLMs usually involve designing uncertainty metrics for their outputs. For black-box LLMs, these metrics are computed by examining aspects like the generated outputs\u2019 consistency, similarity, entropy, and other relevant characteristics (Lin et al., 2023; *Equal contribution. \u00b6Corresponding to: xiaocheng.li@imperial.ac.uk, guanting@unc.edu. 1 arXiv:2404.15993v1 [cs.LG] 24 Apr 2024 \fManakul et al., 2023; Kuhn et al., 2023). Given the complexity of LLMs\u2019 underlying architectures, semantic information may be diluted when processing through self-attention mechanisms and during token encoding/decoding. To address this issue, a growing stream of literature argues that hidden layers\u2019 activation values within the LLMs offer insights into the LLMs\u2019 knowledge and confidence (Slobodkin et al., 2023; Ahdritz et al., 2024; Duan et al., 2024). Based on this argument, white-box LLMs, which allow access to more of LLMs\u2019 inner values, such as logits and hidden layers, are believed to have the capacity to offer a more nuanced understanding and improved uncertainty estimation results (Verma et al., 2023; Chen et al., 2024; Plaut et al., 2024). Most state-of-the-art uncertainty estimation methods in NLG-related tasks are developed in an unsupervised manner (Lin et al., 2022; Kuhn et al., 2023; Chen et al., 2024). 
However, in the realm of LLMs, there is increasing evidence suggesting the benefits of pursuing supervised approaches. For instance, LLMs\u2019 outputs and their internal states can offer conflicting information about truthfulness (Liu et al., 2023), and determining whether outputs or internal states are more reliable sources of information often varies from one scenario to another. This phenomenon underscores the potential advantages of a supervised learning approach, which can adaptively leverage both types of information for better uncertainty estimation. Meanwhile, employing a supervised learning approach to utilize hidden information within LLMs for uncertainty estimation remains relatively uncharted territory. Prior to the adventure of LLM, a line of literature has studied incorporating supervised learning approaches for uncertainty estimation and calibration (Desai and Durrett, 2020) for white-box (Zhang et al., 2021) and black-box (Ye and Durrett, 2021) natural language processing (NLP) models. However, the LLMs distinguish themselves from earlier NLP models through their architectures, training methodologies, and the datasets used. Consequently, there is a need to study and investigate supervised uncertainty estimation approaches specifically tailored for LLMs. There is also a recently expanding body of literature that employs supervised approaches that leverage hidden layers\u2019 information for hallucination detection within LLMs (CH-Wang et al., 2023; Azaria and Mitchell, 2023; Ahdritz et al., 2024). However, there is not so much of a consensus on the definition of hallucination while the uncertainty estimation problem is more well defined. Consequently, questions regarding the practicality, effectiveness, and potential of these supervised approaches to improve uncertainty estimation practices in LLM remain unanswered. Our study embarks on a systematic evaluation of the integration of LLMs\u2019 hidden states within the uncertainty estimation framework in a supervised manner. We aim to assess the tangible benefits this approach may offer in improving the confidence, reliability, transferability, and practicality of LLMs\u2019 uncertainty estimation. By examining the impact of incorporating the knowledge of these hidden states into the uncertainty estimation methods, our results seek to clarify whether and how the internal states of LLMs can contribute to uncertainty estimation, paving the way for more trustworthy and dependable LLMs. Our contributions are three-fold: \u2022 We formulate the problem of uncertainty estimation for LLMs. Building on this formulation, we explore the nuanced distinctions between uncertainty estimation and calibration for LLMs, and theoretically demonstrate how uncertainty estimation for LLMs differs from that in traditional ML models. Moreover, our theoretical analysis suggests that the existing architecture and training procedures of LLMs require additional features from their input and output for improved uncertainty estimation. \u2022 Motivated by the above findings, we propose a supervised method for the uncertainty estimation problem. Specifically, the method aims to train an uncertainty estimation function that maps the hidden activations of the LLMs, and the probability-related information of the LLMs (when generating the response) to an uncertainty score that captures the LLM\u2019s confidence about its 2 \fresponse. 
This supervised approach is systematically designed for straightforward implementation and broad applicability, suitable for black-box, grey-box, and white-box LLMs. \u2022 We conduct numerical experiments on NLP tasks including question answering, machine translation, and multiple choice to evaluate the performance of our proposed approach against existing benchmarks. When designing these experiments, we measure the performance of our approach across LLMs with different accessibility mode (black-box, grey-box, and white-box), and evaluate them in both in-distribution and out-of-distribution test datasets. The results demonstrate that by leveraging hidden activations of LLMs, our method consistently extracts additional knowledge from these models to enhance uncertainty estimation across various NLP tasks. These findings provide insights into the working mechanism of the uncertainty estimation method and its robustness and transferability. 1.1 Related literature Uncertainty estimation for natural language generalization. The uncertainty estimation and calibration for traditional machine learning is relatively well-studied (Abdar et al., 2021; Gawlikowski et al., 2023). However, with the rapid development of LLMs, there is a pressing need to better understand the uncertainty for LLMs\u2019 responses, and measuring the uncertainty from sentences instead of a fixeddimension output is more challenging. One stream of work has been focusing on unsupervised methods that leverage entropy (Malinin and Gales, 2021), similarity (Fomicheva et al., 2020; Lin et al., 2022), semantic (Kuhn et al., 2023; Duan et al., 2023), logit or hidden states\u2019 information (Kadavath et al., 2022; Chen et al., 2024; Su et al., 2024; Plaut et al., 2024) to craft a uncertainty metric that helps to quantify uncertainty. For black-box models, some of the metrics can be computed based on multiple sampled output of the LLMs (Malinin and Gales, 2021; Lin et al., 2023; Manakul et al., 2023); while for white-box models, more information such as the output\u2019s distribution, the value of the logit and hidden layers make computing the uncertainty metric easier. We also refer to Desai and Durrett (2020); Zhang et al. (2021); Ye and Durrett (2021); Si et al. (2022); Quach et al. (2023); Kumar et al. (2023); Mohri and Hashimoto (2024) for other related uncertainty estimation methods such as calibration and conformal prediction. Hallucination detection. Recently, there is a trend of adopting uncertainty estimation approaches for hallucination detection. The rationale is that the information of the value of logits and the hidden states contain some of the LLMs\u2019 beliefs about the trustworthiness of its generated output. By taking the activations of hidden layers as input, Azaria and Mitchell (2023) train a classifier to predict hallucinations, and Verma et al. (2023) develop epistemic neural networks aimed at reducing hallucinations. Slobodkin et al. (2023) demonstrate that the information from hidden layers of LLMs\u2019 output can indicate the answerability of an input query, providing indirect insights into hallucination occurrences. Chen et al. (2024) develop an unsupervised metric that leverages the internal states of LLMs to perform hallucination detection. More related works on hallucination detection can be found in CH-Wang et al. (2023); Duan et al. (2024); Xu et al. (2024). 
While there is a lack of a rigorous definition of hallucination, and its definition varies in the above-mentioned literature, the uncertainty estimation problem can be well defined, and our results on uncertainty estimation can also help the task of hallucination detection. Leveraging LLMs\u2019 hidden activation. The exploration of hidden states within LLMs has been studied to better understand LLMs\u2019 behavior. Mielke et al. (2022) leverage the language model\u2019s hidden states to train a calibrator that predicts the likelihood of outputs\u2019 correctness. With an unsupervised approach, Burns et al. (2022) utilizes hidden activations in language models to represent knowledge 3 \fabout the trustfulness of their outputs. Liu et al. (2023) show that LLMs\u2019 outputs and their internal states can offer conflicting information about truthfulness, and determining whether outputs or internal states are more reliable sources of information often varies from one scenario to another. By taking the activations of hidden layers as input, Ahdritz et al. (2024) employ a linear probe to show that hidden layers\u2019 information from LLMs can be used to differentiate between epistemic and aleatoric uncertainty. Duan et al. (2024) experimentally reveal the variations in hidden layers\u2019 activations when LLMs generate true versus false responses. Lastly, Li et al. (2024) enhance the truthfulness of LLMs during inference time by adjusting the hidden activations\u2019 values in specific directions. 2 Problem Setup Consider the following environment where one interacts with LLMs through prompts and responses: An LLM is given with an input prompt x = (x1, x2, ..., xk) \u2208X with xi \u2208V representing the i-th token of the prompt. Here V denotes the vocabulary for all the tokens. Then the LLM generates its response y = (y1, y2, ..., ym) \u2208Y (randomly) following the probability distribution yj \u223cp\u03b8(\u00b7|x, y1, y2, ..., yj\u22121). Here the probability distribution p\u03b8 denotes the distribution (over vocabulary V) as the LLM\u2019s output, and \u03b8 encapsulates all the parameters of the LLM. The conditional part includes the prompt x and all the tokens y1, y2, ..., yj\u22121 generated preceding the current position. We allow both the prompt and the response to have a variable length of k and m. We consider using the LLM for some downstream NLP tasks such as question answering, multiple choice, and machine translation. Such a task usually comes with an evaluation/scoring function that evaluates the quality of the generated response s(\u00b7, \u00b7) : Y \u00d7 Y \u2192[0, 1]. For each pair of (x, y), the evaluation function rates the response y with the score z := s(ytrue, y) where ytrue is the true response for the prompt x. The true response ytrue is usually decided by factual truth, humans, or domain experts. It does not hurt to assume a larger score represents a better answer; z = 1 indicates a perfect answer, while z = 0 says the response y is off the target. We define the task of uncertainty estimation for LLMs as the learning of a function g that predicts the score g(x, y) \u2248E [s(y, ytrue)|x, y] (1) where the expectation on the right-hand side is taken with respect to the (possible) randomness of the true response ytrue. We emphasize two points on this task definition: \u2022 The uncertainty function g takes the prompt x and y as its inputs, which has the following implications. 
First, the true and predicted uncertainty score can and should depend on the specific realization of the response y. Second, the uncertainty function g does not require the true response ytrue as the input. Since true response data are often limited and typically only available from labeled datasets or human experts, this design broadens the applicability of the function across various settings, enhancing its practical usage. \u2022 The uncertainty score function g, in the language of uncertainty calibration of ML models (Guo et al., 2017; Abdar et al., 2021), is defined in an individual (conditional) sense. That is, the predicted score g(x, y) is hopefully precise and matches the true score on an individual level (for 4 \feach prompt-response pair) but not on the population level (for a distribution of prompt-response pairs). With such a definition of the uncertainty quantification task, the score can be useful in that a well-calibrated score guides the extent to which the users should trust the response. In the following sections, we will first describe our method as a generic framework of uncertainty estimation that formulates the problem as a supervised task and utilizes the available labeled dataset. Then we discuss how this approach can be effectively integrated in the state-of-the-art LLMs, and present the empirical experiments and findings. Finally, we discuss the relationship between our method and the existing literature on fine-tuning LLM and hallucination detection. 3 Uncertainty Estimation via Supervised Calibration In Section 3.1, we present our method of supervised calibration as a post hoc procedure to estimate the uncertainty of the LLM\u2019s responses. 3.1 Supervised calibration We consider a supervised approach of learning the uncertainty function g : X \u00d7 Y \u2192[0, 1], which is similar to the standard setting of uncertainty quantification for ML/deep learning models. First, we start with a dataset of n samples Draw = {(xi, yi, yi,true, s(yi, yi,true))}n i=1 . Draw can be generated based on a labeled dataset for the tasks we consider. Here xi = (xi,1, ..., xi,ki) and yi = (yi,1, ..., yi,mi) denote the prompt and the corresponding LLM\u2019s response, respectively. yi,true denotes the true response (that comes from the labeled dataset) of xi, and s(yi, yi,true) assigns a score for the response yi based on the true answer yi,true. We remark that when generating the dataset Draw with the LLM, there can be multiple samples with the same prompt for tasks such as question answering and machine translation. That is, we can generate multiple samples with xi \u2261x, but these samples may have different responses yi\u2019s due to the probabilistic nature of the LLM. Meanwhile, because of the same xi, these samples have the same true response yi,true. Each different realization of yi constitutes one meaningful sample for the uncertainty estimation task. The next step is to formulate a supervised learning task based on the dataset Draw. Specifically, we construct the following dataset Dun = {(vi, zi)}n i=1 where zi := s(yi, yi,true) \u2208[0, 1] denotes the target score to be predicted. The vector vi summarizes useful features for the i-th sample based on (xi, yi) for the prediction task. In this light, a supervised learning task on the dataset Dun corresponds exactly to the definition of the uncertainty estimation task in (1). Now we discuss how we construct the feature vector vi based on (xi, yi). We mainly consider features from two sources. 
\u2022 White-box features: LLM\u2019s hidden-layer activations. We feed (xi, yi) as input into a LLM, and extract the corresponding hidden layers\u2019 activations of the LLM. Given the architecture of the decoder-based transformer, the consensus is that the activations of the last token (of an input sequence) should contain the most information (Azaria and Mitchell, 2023; Chen et al., 2024). Thus we use the hidden layers\u2019 activation of the last token of (xi, yi) as part of vi. We defer more discussions on the choice of the layer to Section 4.3. 5 \f\u2022 Grey-box features: Entropyor probability-related outputs. The entropy of a discrete distribution p over the vocabulary V is defined by H(p) := \u2212 X v\u2208V p(v) log (p(v)) . For a prompt-response pair (x, y) = (x1, ..., xk, y1, ..., ym), we consider as the features the entropy at each token such as H(p\u03b8(\u00b7|x1, ..., xj\u22121)) and H(p\u03b8(\u00b7|x, y1, ..., yj\u22121)) where p\u03b8 denotes the LLM. We defer more discussions on feature construction to Appendix A.1. Also, we note that another direct feature for predicting zi is to ask the LLM \u201chow certain it is about the response\u201d and incorporate its response to this question as a feature for predicting zi (Tian et al., 2023). This is naturally a viable option, and there are also other sources of features that can be incorporated into vi such as the hidden activations for the second last token. We do not try to exhaust all the possible features, and the aim of our paper is more about formulating the uncertainty estimation for the LLMs as a supervised task and understanding how the internal states of the LLM encode uncertainty. To the best of our knowledge, our paper is the first one to do so. Specifically, the above formulation aims for the following two outcomes: \u2022 We train an uncertainty model \u02c6 g(vi) that predicts zi. Based on the dataset Dun = {(vi, zi)}n i=1, one can learn a model \u02c6 g that predicts the uncertainty zi with summarized features vi. Intuitively, this approach will give better performance (because of its supervised nature and the reason presented in Section 3.3) than the ad-hoc/black-box methods such as the entropy-based method (Malinin and Gales, 2021; Kuhn et al., 2023), similarity-based method (Lin et al., 2022), etc., which is indeed verified by the numerical experiments in the next section. Thus this result underscores the usefulness of the supervised dataset for predicting uncertainty which is more aligned with the canonical uncertainty quantification methods. \u2022 We aim to answer the question of whether the hidden layers carry the uncertainty information. We note that using hidden layer information is not a mainstream approach for the existing uncertainty estimation literature on ML models. This highlights the difference between the uncertainty estimation for classic deep learning models and that for LLMs. We provide some theoretical insights in the following sections and show by theoretical insights and empirical experiments why the supervised approach can extract additional knowledge of LLMs to enhance uncertainty estimation. 3.2 Uncertainty estimation v.s. uncertainty calibration So far in this paper, we focus on the uncertainty estimation task which aims to predict whether the LLM makes mistakes in its response or not. There is a different but related task known as the uncertainty calibration problem. 
In comparison, the uncertainty calibration aims to ensure that the output from the uncertainty estimation model such as \u02c6 g in above conveys a probabilistic meaning. In this sense, our paper is more focused on addressing the predictability of the LLM\u2019s uncertainty, and we find that our supervised formulation and the LLM\u2019s hidden activations are helpful in such a prediction task. In terms of the uncertainty calibration aspect, our developed uncertainty estimation model is compatible with all the recalibration methods for ML models in the literature of uncertainty calibration. And intuitively, a better uncertainty estimation/prediction will lead to a better-calibrated uncertainty model, which is also verified in our numerical experiments in Section 4.4. 6 \f3.3 Why hidden layers as features? In this subsection, we provide a simple theoretical explanation for why the hidden activations of the LLM can be useful in uncertainty estimation. Consider a binary classification task where the features X \u2208Rd and the label Y \u2208{0, 1} are drawn from a distribution P. We aim to learn a model f : Rd \u2192[0, 1] that predicts the label Y from the feature vector X, and the learning of the model employs a loss function l(\u00b7, \u00b7) : [0, 1] \u00d7 [0, 1] \u2192R. Proposition 3.1. Let F be the class of measurable function that maps from Rd to [0, 1]. Under the cross-entropy loss l(y, \u02c6 y) = y log(\u02c6 y) + (1 \u2212y) log(1 \u2212\u02c6 y), the function f \u2217that minimizes the loss f \u2217= arg max f\u2208F E [l(Y, f(X))] is the Bayes optimal classifier f \u2217(x) = P(Y = 1|X = x) where the expectation and the probability are taken with respect to (X, Y ) \u223cP. Moreover, the following conditional independence holds Y \u22a5X | f \u2217(X). The proposition is not technical and it can be easily proved by using the structure of f \u2217(X). It states a nice (probably well-known) property of the cross-entropy loss. Specifically, the function learned under the cross-entropy loss coincides with the Bayes optimal classifier. Note that this is contingent on two requirements. First, the function class F is the measurable function class. Essentially, it suffices to require the function class F to be rich enough to cover the Bayesian optimal classifier, also known as the realizability condition. With the capacity of the large vision and NLP models, we can think this requirement to be (at least approximately) met. Second, it requires the function f \u2217learned through the population loss rather than the empirical loss/risk consisting of the training samples. With a large size of training data, we can think this requirement also to be (approximately) met. The proposition also states one step further on conditional independence Y \u22a5X | f \u2217(X). This means all the information related to the label Y that is contained in X is summarized in the prediction function f \u2217. While the numeric situation cannot be captured by such a simple theory (due to these two requirements), it provides insights into why the existing uncertainty quantification/calibration/recalibration methods do not utilize much of the original feature X or the hidden-layer activations. 
Specifically, when a prediction model \u02c6 f : Rd \u2192[0, 1] is well-trained, the predicted score \u02c6 f(X) should capture all the information about the true label Y contained in the features X, and there is no need to get the features X re-involved in the recalibration procedure when one adjusts the prediction model \u02c6 f to meet some calibration objectives. This indeed explains why the classic uncertainty quantification and calibration methods only work with the predicted score \u02c6 f(X) for re-calibration, including Platt scaling (Platt et al., 1999), isotonic regression (Zadrozny and Elkan, 2002), temperature scaling (Guo et al., 2017), etc. When it comes to LLMs, we will no longer have conditional independence, and that requires additional procedures to retrieve more information on Y . The following corollary states that when the underlying loss function \u02dc l does not possess this nice property (the Bayes classifier minimizes the loss point-wise) of the cross-entropy loss, the conditional independence will collapse. Corollary 3.2. Suppose the loss function \u02dc l satisfies P f \u2217(x) \u0338= arg min \u02dc y\u2208[0,1] E h \u02dc l(Y, \u02dc y)|X = x i! > 0, 7 \fwhere f \u2217is defined as Proposition 3.1, then for the function \u02dc f = arg max f\u2208F E h \u02dc l(Y, f(X)) i , where the expectation is with respect to (X, Y ) \u223cP, there exists a distribution P such that the conditional independence no longer holds Y \u0338\u22a5X | \u02dc f(X). Proposition 3.1 and Corollary 3.2 together illustrate the difference between uncertainty estimation for a traditional ML model and that for LLMs. For the traditional ML models, the cross-entropy loss which is commonly used for training the model is aligned toward the uncertainty calibration objective. When it comes to uncertainty estimation for LLMs, the label Y can be viewed as the binary variable for whether the LLM\u2019s response is correct, and the features X represent some features extracted from the prompt-response pair. Meanwhile, the LLMs are often pre-trained with some other loss functions (for example, the negative log-likelihood loss for next-token prediction), and this causes a misalignment between the model pre-training and the uncertainty quantification task. The consequence is the collapse of conditional independence in Corollary 3.2. The collapse is of a deeper extent than the violation of the two requirements as discussed above. Consequently, the original features, the prompt-response pair, may and should (in theory) contain information about the uncertainty score Y that cannot be fully captured by \u02dc f(X). This justifies why we formulate the uncertainty estimation task as the previous subsection and take the hidden-layer activations as features to predict the uncertainty score; it also explains why we do not see much similar treatment in the mainstream uncertainty quantification literature. 3.4 Three regimes of supervised uncertainty estimation In Section 3.1, we present the method of supervised uncertainty estimation under the assumption that we know the parameters of the LLMs including both the hidden activation and the output probabilities. Now we categorize the application of our method into three regimes to solve the case where the LLM is black-box and we do not have access to the parameter information. 
A natural implementation of our supervised learning approach involves using an LLM to generate the response y for input x, and extracting insights on confidence from the same LLM\u2019s hidden layers\u2019 activations. This method functions effectively with white-box LLMs where hidden activations are accessible, but with black-box LLMs, which restrict access to hidden activations, alternative black-box uncertainty estimation methods become necessary. We observe that obtaining hidden layers\u2019 activations merely requires an LLM and the prompt-response pair (x, y). Therefore, it is not mandatory for (x, y) to be generated by the LLM that provides the hidden layers\u2019 activations. Based on this observation, we note that the extra knowledge of uncertainty can come from the hidden layers of any white-box LLM that takes as input the (x, y) pair, not necessarily from the LLM that generates (x, y). It is natural to start with the belief that the LLM which generates (x, y) should have more information about the uncertainty of y. However, any white-box LLM can output the hidden activations corresponding to (x, y). To clarify further, once (x, y) is generated by the LLM p\u03b8, all of p\u03b8\u2019s knowledge about the uncertainty of y is encoded within its internal representations. However, another LLM q\u03b8 can also evaluate (x, y), with its understanding of uncertainty reflected in q\u03b8\u2019s internal representations as well. Next, we formally present our supervised uncertainty calibration method for white-box, grey-box, and black-box LLMs. 8 \fWhite-box supervised uncertainty estimation (Wb-S): This Wb-S approach implements the method discussed in Section 3.1. It first constructs the features from both two sources and trains a supervised model to predict the uncertainty of the LLM\u2019s responses. Grey-box supervised uncertainty estimation (Gb-S): This Gb-S regime constructs the features only from the grey-box source, that is, those features relying on the probability and the entropy (such as those in Table 5 in Appendix A.1) but it ignores the hidden-layer activations. Both the above two regimes consider one single LLM throughout the whole procedure. Specifically, the dataset Draw is generated based on the LLM p\u03b8, and then features are extracted from Draw to construct Dun. In particular, the feature extraction is based on the same LLM p\u03b8 as well and this assumes the full knowledge of p\u03b8 as the following diagram Draw p\u03b8 \u2212 \u2192Dun. Then a supervised uncertainty estimation model is trained upon Dun. Black-box supervised uncertainty estimation (Bb-S): The Bb-S regime does not assume the knowledge of the parameters of p\u03b8 but still aims to estimate its uncertainty. To achieve this, it considers another open-source LLM denoted by q\u03b8. The original data Draw is generated by p\u03b8 but then the uncertainty estimation data Dun is constructed based on q\u03b8 from Draw as illustrated in the following diagram Draw q\u03b8 \u2212 \u2192Dun. For example, for a prompt x, a black-box LLM p\u03b8 generates the response y. We utilize the open-source LLM q\u03b8 to treat (x, y) jointly as a sequence of (prompt) tokens and extract the features of hidden activations and entropy as in Section 3.1. In this way, we use q\u03b8 together with the learned uncertainty model from Dun to estimate the uncertainty of responses generated from p\u03b8 which we do not have any knowledge about. 
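To make this pipeline concrete, the following is a minimal sketch, assuming the Hugging Face transformers and scikit-learn libraries, of how the feature extraction and model fitting in Algorithm 1 below could be implemented. The model name, the choice of the middle layer, and the particular entropy/probability summaries are illustrative assumptions rather than the authors' released code (which is linked in Section 4).

```python
# Sketch of Algorithm 1 (illustrative; not the authors' released implementation).
# A white-box "tool" LLM q_theta featurizes prompt-response pairs that may come from a
# different, possibly black-box, "target" LLM p_theta.
import torch
import numpy as np
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.ensemble import RandomForestClassifier

MODEL_NAME = "meta-llama/Llama-2-7b-hf"   # assumed tool LLM q_theta
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
q_theta = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
q_theta.eval()

def extract_features(prompt: str, response: str) -> np.ndarray:
    """Build v_i: middle-layer last-token activation plus entropy/probability summaries."""
    ids = tok(prompt + " " + response, return_tensors="pt")
    with torch.no_grad():
        out = q_theta(**ids, output_hidden_states=True)
    mid = len(out.hidden_states) // 2
    hidden = out.hidden_states[mid][0, -1, :]               # middle layer, last token
    probs = torch.softmax(out.logits[0], dim=-1)            # per-position next-token distributions
    ent = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)   # per-position entropy
    grey = torch.stack([ent.max(), ent.mean(), probs.max(dim=-1).values.mean()])
    return torch.cat([hidden, grey]).numpy()

def fit_uncertainty_model(pairs, labels):
    """pairs: list of (prompt, response); labels z_i in {0, 1} from the task scoring rule."""
    V = np.stack([extract_features(x, y) for x, y in pairs])
    return RandomForestClassifier(n_estimators=300, random_state=0).fit(V, np.asarray(labels))

# At test time, g_hat.predict_proba(extract_features(x, y)[None, :])[0, 1] is the
# estimated confidence that the target LLM's response y is correct.
```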
Algorithm 1 Supervised uncertainty estimation
Input: Target LLM p\u03b8 (the uncertainty of which is to be estimated), tool LLM q\u03b8 (used for uncertainty estimation), a labeled training dataset D, a test sample with prompt x
1: %% Training phase:
2: Use p\u03b8 to generate responses for the samples in D and construct the dataset Draw
3: For each sample (xi, yi) \u2208 Draw, extract features (hidden-layer activations, entropy- and probability-related features) using the LLM q\u03b8, and then construct the dataset Dun
4: Train a supervised learning model \u02c6 g that predicts zi with vi based on the dataset Dun
5: %% Test phase:
6: Generate the response y for the test prompt x
7: Extract features v using q\u03b8
Output: Associate the response y with the uncertainty score \u02c6 g(v)
Algorithm 1 summarizes our discussions so far on the supervised approach for uncertainty estimation. When the target LLM p\u03b8 = q\u03b8, it corresponds to the first two regimes (white-box and grey-box). When the target LLM p\u03b8 \u0338= q\u03b8, it corresponds to the third regime (black-box). 4 Numerical Experiments and Findings In this section, we provide a systematic evaluation of the proposed supervised approach for estimating the uncertainty of the LLMs. All code used in our experiments is available at https://github.com/LoveCatc/supervisedllm-uncertainty-estimation. 4.1 LLMs, tasks, benchmarks, and performance metrics Here we outline the general setup of the numerical experiments. Certain tasks may deviate from the general setup, and we will detail the specific adjustments as needed. LLMs. For our numerical experiments, we mainly consider two open-source LLMs, LLaMA2-7B (Touvron et al., 2023) and Gemma-7B (Gemma Team et al., 2024) as p\u03b8 defined in Section 2. For certain experiments, we also employ the models of LLaMA2-13B and Gemma-2B. We also use their respective tokenizers as provided by Hugging Face. We do not change the parameters/weights \u03b8 of these LLMs. Tasks and Datasets. We mainly consider three tasks for uncertainty estimation: question answering, multiple choice, and machine translation. All the labeled datasets for these tasks are in the form of {(xi, yi,true)}n i=1 where xi can be viewed as the prompt for the i-th sample and yi,true the true response. We adopt few-shot prompting when generating the LLM\u2019s response yi, and we use 5 examples in the prompt of the multiple-choice task and 3 examples for the remaining natural language generation tasks. This enables the LLM\u2019s in-context learning ability (Radford et al., 2019; Zhang et al., 2023) and ensures the LLM\u2019s responses are in a desirable format. We defer more details of the few-shot prompting to Appendix A.2. The three tasks are: \u2022 Question answering. We follow Kuhn et al. (2023) and use the CoQA and TriviaQA (Joshi et al., 2017) datasets. The CoQA task requires the LLM to answer questions by understanding the provided text, and TriviaQA requires the LLM to answer questions based on its pre-training knowledge. We adopt the scoring function s(\u00b7, \u00b7) as Rouge-1 (Lin and Och, 2004a) and label a response yi as correct if s(yi, yi,true) \u2265 0.3 and incorrect otherwise. \u2022 Multiple choice. We consider the Massive Multitask Language Understanding (MMLU) dataset (Hendrycks et al., 2020), a collection of 15,858 questions covering 57 subjects across STEM. Due to the special structure of the dataset, the generated output yi and the correct answer ytrue,i \u2208 {A, B, C, D}.
Therefore, this task can also be regarded as a classification problem for the LLM by answering the question with one of the four candidate choices. \u2022 Machine translation. We consider the WMT 2014 dataset (Bojar et al., 2014) for estimating LLM\u2019s uncertainty on the machine translation task. The scoring function s(\u00b7, \u00b7) is chosen to be the BLEU score (Papineni et al., 2002; Lin and Och, 2004b) and the generated answer yi is labeled as correct if s(yi, yi,true) > 0.3 and incorrect otherwise. Benchmarks. We compare our approach with a number of the state-of-the-art benchmarks for the problem. Manakul et al. (2023) give a comprehensive survey of the existing methods and compare four distinct measures for predicting sentence generation uncertainty. The measures are based on either the maximum or average values of entropy or probability across the sentence, including Max Likelihood, Avg Likelihood, Max Ent, and Avg Ent defined in Table 5. We note that each of these measures can be applied as a single uncertainty estimator, and they are all applied in an unsupervised manner that does not require additional supervised training. In particular, in applying these measures for the MMLU dataset, since the answer only contains one token from {A, B, C, D}, we use the probabilities and the entropy (over these four tokens) as the benchmarks which represent the probability of the most likely choice and the entropy of all choices, respectively. Kuhn et al. (2023) generate multiple answers, compute their entropy in a semantic sense, and define the quantity as semantic entropy. This semantic-entropy uncertainty (SU) thus can be used as an uncertainty estimator for the LLM\u2019s responses. Tian et al. (2023) propose the approach of asking the LLM for its confidence (denoted as A4U) which directly obtains the uncertainty score from the LLM itself. Our methods. We follow the discussions in Section 3.4 and implement three versions of our proposed supervised approach: black-box supervised (Bb-S), grey-box supervised (Gb-S), and white-box supervised 10 \f(Wb-S). These models have the same pipeline of training the uncertainty estimation model and the difference is only on the availability of the LLM. For the Bb-S method, we use the Gemma-7B as the model q\u03b8 to evaluate the uncertainty of LLaMA-7B p\u03b8 (treated as a black-box), and reversely, use LLaMA-7B to evaluate Gemma-7B. The supervised uncertainty model \u02c6 g is trained based on the random forest model (Breiman, 2001). Details on the feature construction and the training of the random forest model are deferred to Appendix A.3. Performance metrics. For the model evaluation, we follow Filos et al. (2019); Kuhn et al. (2023) and compare the performance of our methods against the benchmark using the generated uncertainty score to predict whether the answer is correct. The area under the receiver operator characteristic curve (AUROC) metric is employed to measure the performance of the uncertainty estimation. As discussed in Section 3.2, AUROC works as a good metric for the uncertainty estimation task whereas for the uncertainty calibration task, we follow the more standard calibration metrics and present the results in Section 4.4. 4.2 Performance of uncertainty estimation Now we present the performance on the uncertainty estimation task. 4.2.1 Question answering and machine translation The question answering and machine translation tasks can all be viewed as natural language generation tasks so we present their results together. 
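To make the protocol concrete, here is a hedged sketch of how the correctness labels and the AUROC number could be computed; the Rouge-1 threshold of 0.3 follows Section 4.1, while the use of the rouge-score package and of the Rouge-1 F-measure is an assumption on our part.

```python
# Illustrative evaluation sketch (assumed packages: rouge-score, scikit-learn).
from rouge_score import rouge_scorer
from sklearn.metrics import roc_auc_score

_scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def is_correct(response: str, reference: str, threshold: float = 0.3) -> int:
    # Rouge-1 >= 0.3 labels the response as correct, as in the QA setup above.
    return int(_scorer.score(reference, response)["rouge1"].fmeasure >= threshold)

def uncertainty_auroc(confidences, responses, references) -> float:
    # Higher AUROC means the confidence score better separates correct from incorrect responses.
    labels = [is_correct(y, y_true) for y, y_true in zip(responses, references)]
    return roc_auc_score(labels, confidences)
```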
Table 1 summarizes the three versions of our proposed supervised method against the existing benchmarks in terms of AUROC.
Dataset | LLM | Max Pro | Avg Pro | Max Ent | Avg Ent | SU | A4C | Bb-S | Gb-S | Wb-S
TriviaQA | G-7B | 0.797 | 0.769 | 0.783 | 0.759 | 0.712 | 0.512 | 0.964 | 0.893 | 0.963
TriviaQA | L-7B | 0.835 | 0.826 | 0.833 | 0.808 | 0.817 | 0.528 | 0.856 | 0.852 | 0.879
CoQA | G-7B | 0.720 | 0.701 | 0.716 | 0.671 | 0.657 | 0.509 | 0.747 | 0.729 | 0.769
CoQA | L-7B | 0.699 | 0.672 | 0.692 | 0.644 | 0.647 | 0.508 | 0.706 | 0.713 | 0.751
WMT-14 | G-7B | 0.735 | 0.813 | 0.507 | 0.814 | 0.536 | 0.590 | 0.730 | 0.822 | 0.828
WMT-14 | L-7B | 0.623 | 0.700 | 0.588 | 0.689 | 0.509 | 0.517 | 0.654 | 0.708 | 0.752
Table 1: Out-of-sample AUROC performance for benchmarks and our methods on natural language generation tasks. G-7B and L-7B represent Gemma-7B and LLaMA2-7B, respectively. The columns Max Pro, Avg Pro, Max Ent, and Avg Ent all come from Manakul et al. (2023). The column SU implements the semantic uncertainty estimation by Kuhn et al. (2023), and the column A4C implements the ask-for-confidence method by Tian et al. (2023). The columns Bb-S, Gb-S, and Wb-S represent respectively the three regimes (black-box supervised, grey-box supervised, and white-box supervised) of our supervised method with details in Section 3.4.
We make several remarks on the numerical results. First, our methods generally perform better than the existing benchmarks. Note that the existing benchmarks are mainly unsupervised and based on one single score, and also that our method proceeds with the most standard pipeline for supervised training of an uncertainty estimation model. The advantage of our method should be attributed to the supervised nature and the labeled dataset. While these unsupervised benchmark methods can work in a larger scope than these NLP tasks (though they have not been extensively tested on open questions yet), our methods rely on the labeled dataset. Beyond these better numbers, the experiment results show the potential of labeled datasets for understanding the uncertainty in LLM\u2019s responses. In particular, our method Gb-S uses exactly the same features as the benchmark methods, and it shows that some minor supervised training can improve substantially upon the ad-hoc uncertainty estimation based on one single score such as Max Pro or Max Ent.
Therefore, it is not surprising to see that in the TriviaQA task, for answers generated by Gemma-7B, Bb-S features better uncertainty estimation than Wb-S, possibly because LLaMA2-7B, the LLM that is used as the \u201ctool LLM\u201d in Algorithm 1, encodes better knowledge about the uncertainty of the answers than Gemma-7B. We also note that the performance of Bb-S is not always as good as Wb-S, and we hypothesize that it is because LLMs\u2019 output distributions differ, which could result in evaluating the uncertainty of different answers. Despite these inconsistencies, the performance of Bb-S is still strong, and these results point to a potential future avenue for estimating the uncertainty of closed-source LLMs. 4.2.2 Multiple choice (MMLU) Table 2 presents the performance of our methods against the benchmark methods on the MMLU dataset. For this multiple-choice task, the output is from {A, B, C, D}, which bears no semantic meaning, and therefore we do not include Semantic Uncertainty (SU) as in Table 1. The results show the advantage of our proposed supervised approach, consistent with the previous findings in Table 1.
Model | Probability | Entropy | A4C | Bb-S | Gb-S | Wb-S
Gemma-7B | 0.712 | 0.742 | 0.582 | 0.765 | 0.776 | 0.833
LLaMA2-7B | 0.698 | 0.693 | 0.514 | 0.732 | 0.698 | 0.719
Table 2: Out-of-sample AUROC performance for benchmarks and our methods on the MMLU dataset. The columns Probability and Entropy come from Manakul et al. (2023), and the column A4C implements the ask-for-confidence method by Tian et al. (2023). The columns Bb-S, Gb-S, and Wb-S represent respectively the three regimes (black-box supervised, grey-box supervised, and white-box supervised) of our supervised method with details in Section 3.4.
4.3 Interpreting the results Now we use some visualizations to provide insights into the working mechanism of the uncertainty estimation procedure for LLMs and to better understand the experiment results in the previous subsection. 4.3.1 Layer comparison For general LLMs, each token is associated with a relatively large number of hidden layers (32 layers for LLaMA2-7B, for example), each of which is represented by a high-dimensional vector (4096 dimensions for LLaMA2-7B). Thus it is generally not a good practice to incorporate all hidden layers as features for the uncertainty estimation due to this dimensionality. Previous works find that the middle layer and the last layer activations of the LLM\u2019s last token contain the most useful features for supervised learning (Burns et al., 2022; Chen et al., 2024; Ahdritz et al., 2024; Azaria and Mitchell, 2023). To investigate the layerwise effect for uncertainty estimation, we implement our Wb-S method with features that differ in two aspects: (i) different layers within the LLM architecture, specifically focusing on the middle and last layers (e.g., LLaMA2-7B: 16th and 32nd layers out of 32 layers with 4096 dimensions; Gemma-7B: 14th and 28th layers out of 28 layers with 3072 dimensions); and (ii) position of token activations, including averaging hidden activations over all the prompt/answer tokens or utilizing the hidden activation of the last token. The second aspect makes sense when the output contains more than one token, so we conduct this experiment on the natural language generation tasks only. Figure 1 gives a visualization of the comparison result.
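The four feature variants compared in Figure 1 can be sketched as follows; `hidden_states` is assumed to be the tuple returned by a Hugging Face causal LM called with output_hidden_states=True, and `answer_slice` is an assumed bookkeeping of which positions belong to the answer.

```python
# Sketch of the four activation-feature variants (mid/last layer x avg/last token).
import torch

def activation_features(hidden_states, answer_slice, layer="mid", pooling="last"):
    idx = len(hidden_states) // 2 if layer == "mid" else -1
    acts = hidden_states[idx][0]               # (seq_len, hidden_dim) for one sequence
    if pooling == "avg":
        return acts[answer_slice].mean(dim=0)  # average over the answer tokens
    return acts[-1]                            # hidden activation of the last token
```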
While the performances of these different feature extraction ways are quite similar across different tasks and LLMs, activation features from the middle layer generally perform better than the last layer. This may come from the fact that the last layer focuses more on the generation of the next token instead of summarizing information of the whole sentence, as has been discussed by Azaria and Mitchell (2023).
Figure 1: Performance comparison of using hidden activations from different tokens and layers as features in the Wb-S method, showing AUROC on TriviaQA, CoQA, and WMT-14 for the four variants (avg/last token, mid/last layer); panel (a) uses features from Gemma-7B and panel (b) uses features from LLaMA2-7B.
4.3.2 Scaling effect In Figure 2, we investigate whether larger LLMs\u2019 hidden activations enhance our uncertainty estimation method. For a fair comparison, we fix the target LLM that generates the output in Algorithm 1 and vary the tool LLM used for analysis. For example, in the left plot of Figure 2, we use Gemma-7B to generate the outputs, and LLaMA2-7B, LLaMA2-13B, and Gemma-7B to perform uncertainty estimation. We find that there is no significant performance difference between LLaMA2-7B and LLaMA2-13B, nor between Gemma-2B and Gemma-7B (Gemma does not offer a 13B model). Our results suggest that larger LLMs do not necessarily encode better knowledge about uncertainty. Furthermore, based on the performance of Wb-S in Figure 2, we suggest using the same LLM to generate the output and evaluate the uncertainty.
Figure 2: (Left) Using the hidden activations of LLaMA2-7B and LLaMA2-13B to estimate the uncertainty of the answers provided by Gemma-7B. (Right) Using the hidden activations of Gemma-2B and Gemma-7B to estimate the uncertainty of the answers provided by LLaMA2-7B.
4.3.3 Histogram of correlations Figure 3 plots the histograms of the pairwise correlations between the neuron activations and the labels (whether the LLM\u2019s response is correct). We make two observations here: First, for both LLMs, some neurons have a significantly positive (or negative) correlation with the label. We can interpret these neurons as the uncertainty neurons for the corresponding task. When these neurons are activated, the LLMs are uncertain about their responses. Second, Gemma-7B has more significant neurons than LLaMA2-7B, and this is consistent with the better performance of Gemma-7B in Table 1 and Table 2. Also, this reinforces that the hidden activations of the LLMs contain uncertainty information about the LLM\u2019s output.
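A minimal sketch of the correlation analysis behind Figure 3 (variable names are illustrative): each column of the activation matrix is one hidden unit, and its Pearson correlation with the binary correctness label is computed across samples.

```python
import numpy as np

def neuron_label_correlations(activations: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """activations: (n_samples, hidden_dim); labels: (n_samples,) in {0, 1}."""
    a = (activations - activations.mean(axis=0)) / (activations.std(axis=0) + 1e-12)
    z = (labels - labels.mean()) / (labels.std() + 1e-12)
    return a.T @ z / len(labels)   # Pearson correlation of each neuron with correctness

# np.argsort(np.abs(neuron_label_correlations(A, Z)))[-10:] picks the most
# label-aligned "uncertainty neurons" for plots like Figure 4.
```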
Figure 3: Histograms of the pairwise correlations on the TriviaQA task between the neuron activations and the labels (whether the LLM's response is correct), where the neuron values are the last-token hidden activations of the answers from the middle layer (left) and the last layer (right) of the two models (LLaMA2-7B and Gemma-7B).

Figure 4 plots the activations of some example neurons, selected as those with the largest and smallest correlations in Figure 3 (more such neurons can be found in Figure 6 in the Appendix). Taken individually, these neurons exhibit different distributional patterns when the response is correct compared to when it is incorrect, and thus reflect the uncertainty of the LLM's responses.

Figure 4: Distribution of activation values of particular neurons on the TriviaQA dataset, shown separately for true and false answers. (a) Neurons from the middle layer of Gemma-7B (the 1496th and 1607th neurons). (b) Neurons from the middle layer of LLaMA2-7B (the 1415th and 2401st neurons).

4.4 Calibration performance

In Section 3.2, we distinguish the two tasks of uncertainty estimation and uncertainty calibration. Throughout the paper, we have focused on improving performance on the task of uncertainty estimation: predicting when the LLM is uncertain about its response. Generally, a better uncertainty estimation model leads to one with better calibration performance. The calibration (or recalibration) of the uncertainty estimation model can indeed be reduced to the classic ML setting, which does not involve the LLM. Table 3 gives the calibration performance, and we see an advantage of our supervised methods over the benchmark methods, consistent with the AUROC performance in Table 1. We adopt the histogram binning method here because we find that temperature scaling and Platt scaling give predicted scores concentrated within a small range such as [0.2, 0.6]. We do not exclude the possibility that other calibration methods can give even better performance. The point here is that uncertainty estimation and uncertainty calibration are two closely related tasks. Note that (i) a better uncertainty estimation model leads to better calibration performance and (ii) the LLMs are pre-trained and not designed for these NLP tasks in the first place (see Section 3.3), so there is no readily available uncertainty score (unlike the predicted probabilities of image classifiers); we therefore emphasize the importance of an extra uncertainty estimation procedure, such as our supervised one, to extract the uncertainty information from inside the LLMs.
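The following is a minimal sketch of the recalibration and evaluation pipeline described above, assuming uncertainty scores already scaled into [0, 1] and 20 equal-length bins as in Table 3. The helper names are illustrative, and other calibration methods could be substituted.

import numpy as np

def histogram_binning(scores_cal, labels_cal, n_bins=20):
    """Fit histogram binning: each bin's calibrated probability is the empirical
    accuracy of the calibration samples that fall into that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.clip(np.digitize(scores_cal, edges[1:-1]), 0, n_bins - 1)
    bin_prob = np.array([
        labels_cal[ids == b].mean() if np.any(ids == b) else 0.5
        for b in range(n_bins)
    ])
    def recalibrate(scores):
        return bin_prob[np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)]
    return recalibrate

def expected_calibration_error(probs, labels, n_bins=20):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.clip(np.digitize(probs, edges[1:-1]), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

# recal = histogram_binning(scores_train, y_train)
# p = recal(scores_test)
# print("ECE:", expected_calibration_error(p, y_test))
# print("Brier:", np.mean((p - y_test) ** 2))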
Metric | Dataset  | Model | Max Pro | Avg Pro | Max Ent | Avg Ent | SU    | A4C   | Bb-S  | Gb-S  | Wb-S
NLL    | TriviaQA | G-7B  | 0.540   | 0.586   | 0.530   | 0.494   | 0.536 | 0.623 | 0.292 | 0.366 | 0.429
NLL    | TriviaQA | L-7B  | 0.486   | 0.459   | 0.508   | 0.569   | 0.530 | 0.618 | 0.482 | 0.455 | 0.414
NLL    | CoQA     | G-7B  | 0.653   | 0.553   | 0.525   | 0.602   | 0.548 | 0.574 | 0.464 | 0.657 | 0.488
NLL    | CoQA     | L-7B  | 0.594   | 0.613   | 0.705   | 0.725   | 0.547 | 0.565 | 0.598 | 0.778 | 0.519
NLL    | WMT-14   | G-7B  | 0.605   | 0.481   | 0.692   | 0.564   | 0.627 | 0.566 | 0.509 | 0.527 | 0.461
NLL    | WMT-14   | L-7B  | 0.664   | 0.603   | 0.703   | 0.605   | 0.710 | 0.665 | 0.675 | 0.600 | 0.560
ECE    | TriviaQA | G-7B  | 0.031   | 0.050   | 0.039   | 0.045   | 0.031 | 0.036 | 0.033 | 0.037 | 0.044
ECE    | TriviaQA | L-7B  | 0.055   | 0.026   | 0.044   | 0.045   | 0.060 | 0.021 | 0.042 | 0.059 | 0.052
ECE    | CoQA     | G-7B  | 0.056   | 0.046   | 0.059   | 0.068   | 0.056 | 0.027 | 0.049 | 0.065 | 0.032
ECE    | CoQA     | L-7B  | 0.038   | 0.036   | 0.045   | 0.046   | 0.035 | 0.025 | 0.045 | 0.049 | 0.022
ECE    | WMT-14   | G-7B  | 0.046   | 0.042   | 0.075   | 0.030   | 0.043 | 0.042 | 0.040 | 0.042 | 0.037
ECE    | WMT-14   | L-7B  | 0.037   | 0.026   | 0.033   | 0.021   | 0.033 | 0.027 | 0.051 | 0.041 | 0.039
Brier  | TriviaQA | G-7B  | 0.162   | 0.168   | 0.163   | 0.149   | 0.177 | 0.215 | 0.071 | 0.120 | 0.076
Brier  | TriviaQA | L-7B  | 0.148   | 0.150   | 0.152   | 0.161   | 0.160 | 0.207 | 0.144 | 0.145 | 0.131
Brier  | CoQA     | G-7B  | 0.153   | 0.155   | 0.161   | 0.161   | 0.160 | 0.167 | 0.147 | 0.158 | 0.143
Brier  | CoQA     | L-7B  | 0.174   | 0.178   | 0.178   | 0.183   | 0.182 | 0.189 | 0.173 | 0.181 | 0.161
Brier  | WMT-14   | G-7B  | 0.184   | 0.157   | 0.213   | 0.156   | 0.204 | 0.195 | 0.177 | 0.159 | 0.149
Brier  | WMT-14   | L-7B  | 0.229   | 0.210   | 0.234   | 0.212   | 0.236 | 0.236 | 0.227 | 0.208 | 0.192

Table 3: Calibration performance on the natural language generation tasks after histogram binning. The base models (G-7B: Gemma-7B, L-7B: LLaMA2-7B) are from Table 1; the benchmark columns are Max Pro, Avg Pro, Max Ent, Avg Ent, SU and A4C, and our columns are Bb-S, Gb-S and Wb-S. The original uncertainty scores from the base models are first scaled into [0, 1] and then histogram binning is performed with 20 bins of equal length.

4.5 Transferability

In this subsection, we evaluate the robustness of our methods under the out-of-distribution (OOD) setting for both question-answering and multiple-choice tasks.

Setup for the OOD multiple-choice task. We split the MMLU dataset into two groups based on the subjects: Group 1 contains questions from the first 40 subjects, while Group 2 contains the remaining 17 subjects, such that the test dataset size of each group is similar (around 600 questions). Note that these 57 subjects span a diverse range of topics, which means the training and test sets can be very different. To test the OOD robustness, we train the proposed methods on one group and evaluate the performance on the other group.

Setup for the OOD question-answering task. For the QA task, since we have two datasets (CoQA and TriviaQA), we train the supervised model on either the TriviaQA or the CoQA dataset and then evaluate its performance on the other dataset. While both datasets are for question answering, they diverge notably in two key aspects: (i) CoQA prioritizes assessing the LLM's comprehension through the discernment of correct responses within extensive contextual passages, while TriviaQA focuses on evaluating the model's recall of factual knowledge; and (ii) TriviaQA typically contains answers comprising single words or short phrases, while CoQA includes responses of varying lengths, ranging from shorter to more extensive answers.
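A minimal sketch of this transfer evaluation is given below: the uncertainty estimator is fit on features and correctness labels from the source dataset (or MMLU group) and its AUROC is measured on the target one. The logistic-regression probe and the variable names are illustrative assumptions.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cross_dataset_auroc(X_src, y_src, X_tgt, y_tgt):
    """Train the uncertainty estimator on source-dataset features and
    evaluate AUROC on the target dataset (the OOD transfer setting)."""
    clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
    return roc_auc_score(y_tgt, clf.predict_proba(X_tgt)[:, 1])

# e.g. train on TriviaQA features, test on CoQA, and vice versa:
# print("TriviaQA -> CoQA:", cross_dataset_auroc(X_trivia, y_trivia, X_coqa, y_coqa))
# print("CoQA -> TriviaQA:", cross_dataset_auroc(X_coqa, y_coqa, X_trivia, y_trivia))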
LLM  | Test data | Bb-S          | Gb-S          | Wb-S          | Best GB | Best BB
Transferability in MMLU
G-7B | Group 1   | 0.756 (0.768) | 0.793 (0.799) | 0.846 (0.854) | 0.765   | 0.538
G-7B | Group 2   | 0.738 (0.760) | 0.755 (0.754) | 0.804 (0.807) | 0.721   | 0.616
L-7B | Group 1   | 0.733 (0.749) | 0.715 (0.713) | 0.726 (0.751) | 0.719   | 0.504
L-7B | Group 2   | 0.700 (0.714) | 0.676 (0.677) | 0.685 (0.692) | 0.679   | 0.529
Transferability in question-answering datasets
G-7B | TriviaQA  | 0.956 (0.964) | 0.837 (0.893) | 0.899 (0.963) | 0.797   | 0.712
G-7B | CoQA      | 0.707 (0.747) | 0.706 (0.729) | 0.722 (0.769) | 0.720   | 0.657
L-7B | TriviaQA  | 0.777 (0.856) | 0.848 (0.852) | 0.860 (0.879) | 0.835   | 0.817
L-7B | CoQA      | 0.708 (0.706) | 0.702 (0.713) | 0.717 (0.751) | 0.699   | 0.647

Table 4: Transferability of the trained uncertainty estimation model across different groups of subjects in MMLU and across question-answering datasets. For our proposed Bb-S, Gb-S, and Wb-S methods, values within the parentheses represent the AUROCs where the model is trained and tested on the same group of subjects or dataset, while values outside the parentheses are AUROCs where the model is trained on the other group of subjects or dataset. The Best GB and Best BB columns refer to the best AUROC achieved by the unsupervised grey-box and black-box baselines (fully listed in Table 1 and Table 2), respectively.

Table 4 summarizes the performance of these OOD experiments. As expected, for all the methods there is a slight drop in performance compared to the in-distribution setting (reported by the numbers in the parentheses in the table). We make the following observations based on the experiment results. First, judging from the gap between in-distribution and OOD evaluation, incorporating white-box features such as hidden activations makes the model more susceptible to performance decreases on OOD tasks, but these features also enhance the uncertainty estimation model's overall capacity, and the benefits outweigh the drawbacks. Second, even in these OOD scenarios, our Wb-S and Bb-S methods almost consistently outperform the baseline approaches. Overall, the robustness of our methods suggests that, to some extent, the hidden-layer activations within the LLM encode uncertainty information in similar patterns across distributions. The performance drop (from in-distribution to OOD) observed on the MMLU dataset is notably smaller than that on the question-answering datasets, which may stem from the larger disparity between the CoQA and TriviaQA datasets compared to that between two distinct groups of subjects within the same MMLU dataset. This suggests that in cases of significant distributional shift, re-training or re-calibrating the uncertainty estimation model on data from the test distribution may be helpful.

5 Conclusions

In this paper, we study the problem of uncertainty estimation and calibration for LLMs. We follow a simple and standard supervised approach and use labeled NLP datasets to train an uncertainty estimation model for LLMs. Our findings are that, first, the proposed supervised methods perform better than the existing unsupervised methods. Second, the hidden activations of the LLMs contain uncertainty information about the LLMs' responses. Third, the black-box regime of our approach (Bb-S) provides a new way to estimate the uncertainty of closed-source LLMs. Lastly, we distinguish the task of uncertainty estimation from uncertainty calibration and show that a better uncertainty estimation model leads to better calibration performance.
We also remark on the following two aspects:

\u2022 Fine-tuning: For all the numerical experiments in this paper, we do not perform any fine-tuning of the underlying LLMs. While fine-tuning generally boosts an LLM's performance on a downstream task, our methods can still be applied to a fine-tuned LLM, which we leave as future work.

\u2022 Hallucination: The hallucination problem has been widely studied in the LLM literature. Yet, as mentioned earlier, there seems to be no consensus on a rigorous definition of what hallucination refers to in the context of LLMs. For example, when an image classifier wrongly classifies a cat image as a dog, we do not say the classifier hallucinates; so why, or when, should we say that an LLM hallucinates when it makes a mistake? Comparatively, the uncertainty estimation problem is better defined, and we provide a mathematical formulation of the uncertainty estimation task for LLMs. We believe our results on uncertainty estimation can also help with a better understanding of the hallucination phenomenon and with tasks such as hallucination detection.

One limitation of our proposed supervised method is that it critically relies on labeled data. For the scope of our paper, we restrict the discussion to the NLP tasks and datasets. One future direction is to utilize human-annotated data on LLMs' responses to train a supervised uncertainty estimation model for open-question prompts. We believe our findings, namely that the supervised method gives better performance and that the hidden activations contain uncertainty information, will persist." + } + ] +} \ No newline at end of file