{
"url": "http://arxiv.org/abs/2404.16461v2",
"title": "Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums",
"abstract": "Mental health in children and adolescents has been steadily deteriorating\nover the past few years. The recent advent of Large Language Models (LLMs)\noffers much hope for cost and time efficient scaling of monitoring and\nintervention, yet despite specifically prevalent issues such as school bullying\nand eating disorders, previous studies on have not investigated performance in\nthis domain or for open information extraction where the set of answers is not\npredetermined. We create a new dataset of Reddit posts from adolescents aged\n12-19 annotated by expert psychiatrists for the following categories: TRAUMA,\nPRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and TREATMENT and compare expert\nlabels to annotations from two top performing LLMs (GPT3.5 and GPT4). In\naddition, we create two synthetic datasets to assess whether LLMs perform\nbetter when annotating data as they generate it. We find GPT4 to be on par with\nhuman inter-annotator agreement and performance on synthetic data to be\nsubstantially higher, however we find the model still occasionally errs on\nissues of negation and factuality and higher performance on synthetic data is\ndriven by greater complexity of real data rather than inherent advantage.",
"authors": "Isabelle Lorge, Dan W. Joyce, Andrey Kormilitzin",
"published": "2024-04-25",
"updated": "2024-04-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "The recent development of powerful Large Language Models such as GPT3.5 [2] and GPT4 [3] able to perform tasks in a zero-shot manner (i.e., without having been specifically trained or fine- tuned to do so) by being simply prompted with natural language instructions shows much promise for healthcare applications and the domain of mental health. Indeed, these models display more impressive general natural language processing abilities than their predecessors and excel at tasks such as Question Answering and Named Entity Recognition [4, 5, 6, 7]. Models with the ability to process social media content for indicators of mental health issues have the potential to become invaluable cost-effective tools for applications such as public health monitoring [8] and online moderation or intervention systems [9]. In addition, synthetic data produced by LLMs can be a cost effective and privacy-preserving tool for training task specific models [10]. There have been several studies aimed at assessing the abilities of LLMs to perform a range of tasks related to mental health on datasets derived from social media. Yang et al. [11] conducted a comprehensive assessment of ChatGPT (gpt-3.5-turbo), InstructGPT3 and LlaMA7B and 13B [12] arXiv:2404.16461v2 [cs.CL] 26 Apr 2024 on 11 different datasets and 5 tasks (mental health condition binary/multiclass detection, cause/factor detection, emotion detection and causal emotion entailment, i.e. determining the cause of a described emotion). They find that while the LLMs perform well (0.46-0.86 F1 depending on task), with ChatGPT substantially outperforming both LLaMA 7B and 13B, they still underperform smaller models specifically fine-tuned for each task (e.g., RoBERTa). Xu et al. [13] find similar results for Alpaca [14], FLAN-T5 [15] and LLaMA2 [16], with only fine-tuned LLMs able to perform on par with smaller, task-specific models such as RoBERTa [17, 18]. However, we find that previous studies suffer from the following shortcomings: 1. They focus on adult mental health 2. They focus on tasks with a closed (or finite) set of answers, where the model is asked to perform each task in turn 3. They do not investigate how LLMs perform on synthetic data, i.e., text they are asked to simultaneously generate and label There is growing consensus that we are facing a child mental health crisis [1]. Before the COVID-19 pandemic there was already increasing incidence of mental health conditions in children and young people (CYP), such as depression, anxiety and eating disorders [19] as well as rising rates of self-harm and suicidal ideation [20] and cyberbullying strongly linked to adverse mental health outcomes [21]. The advent of the pandemic accelerated this already precarious situation and created additional challenges [22, 23] such as discontinuity of healthcare service provision in addition to interruption to young people\u2019s usual engagement in education and their social lives. This age range is particularly vulnerable to onset of mental health issues, with half of conditions appearing by early adolescence and 10-20% of children and young people experiencing at least one mental health condition [24]. Females, those with low socioeconomic backgrounds, trauma, abuse or having witnessed violence [25] are at heightened risk. 
On the other hand, social media, whose impact on mental health is debated, now forms an important part of children and adolescents\u2019 daily lives, with potential benefits (stress reduction and support networks [26]) as well as potential risks (sleep disturbance, self-esteem issues and cyberbullying [27]). Regardless of their detrimental or protective impact, social media may contribute valuable insights into CYP\u2019s mental health, with opportunities for monitoring and intervention, for example identifying those at risk of depression and mood disorders [28]. Given that the mental health of CYP is a particularly pressing public health concern, we wished to investigate how LLMs perform on extracting mental health factors when faced with social media content generated by young people aged 12-19. Indeed, several issues related to mental health either exclusively apply to children and adolescents (such as school bullying and ongoing family abuse) or are particularly prevalent in this age range (such as eating disorders [29] and self-harm [30]), making both content type and factors of interest distinct from those found in adult social media posts. In addition, previous studies focused on tasks which had either binary or closed sets of answers (e.g., choosing between several given conditions or between several given causal factors). In contrast, we wish to examine how LLMs perform on a task of open information extraction, where they are given categories of information and asked to extract any which are found in the text (e.g., asked to detect whether there is any mental health condition indicated in the text). Furthermore, in previous studies the models were tested with each task in turn (e.g., asked to detect depression in one dataset, then detect suicidality in another dataset), whereas we gather and annotate our own dataset in order to be able to ask the LLMs to extract all categories simultaneously (e.g., extract all conditions and symptoms in a given sentence). Finally, to our knowledge there has been no investigation of how LLMs perform when asked to annotate text as they generate it, i.e., how their performance on synthetic data compares with their performance on real data. There is growing interest in synthetic data for healthcare [31]. Given the potential for training models and running simulations and digital twin experiments with the benefit of reduced issues of data scarcity and privacy, we believe that our work will contribute to a better understanding of the limitations and benefits of using synthetic data for real-world tasks.",
"main_content": "In summary, we aim to: 1. Generate and annotate with high-quality expert annotations a novel dataset of social media posts which allows extraction of a wide range of mental health factors simultaneously. 2. Investigate performance of two top-performing LLMs (GPT3.5 and GPT4) on extracting mental health factors in adolescent social media posts to verify whether they can be on par with expert annotators. 3. Investigate how these LLMs perform on synthetic data, i.e., when asked to annotate text as they generate it, with the aim of assessing utility of these data in training task specific models 3 Method 3.1 Reddit dataset We use Python\u2019s PRAW library to collect post from the Reddit website (www.reddit.com) over the last year, including posts from specific forum subthemes (\u2018subreddits\u2019) dedicated to mental health topics: r/anxiety, r/depression, r/mentalhealth, r/bipolarreddit, r/bipolar, r/BPD, r/schizophrenia, r/PTSD, r/autism, r/trau-matoolbox, r/socialanxiety, r/dbtselfhelp, r/offmychest and r/mmfb. The distribution of subreddits in the dataset can be found in Figure 1. As in previous works [32], we use heuristics to obtain posts from our target age range (e.g, posts containing expression such as I am 16/just turned 16/etc.) We gather 1000 posts written by 950 unique users. To optimise the annotation process, we select the most relevant sentences to be annotated by embedding a set of mental health keywords with Python\u2019s sentence-transformers library [33] calculating the cosine similarity with post sentences, choosing a threshold of 0.2 cosine similarity after trial and error. We keep the post index for each sentence to provide context. The resulting dataset contains 6500 sentences. 3.2 Ethical considerations In conducting this research, we recognised the importance of respecting the autonomy and privacy of the Reddit users whose posts were included in our dataset. While Reddit data is publicly available and was obtained from open online forums, we acknowledge that users may not have anticipated their contributions being used for research purposes and will therefore make the data available only on demand. The verbatim example sentences given in later sections have been modified to prevent full-text searching strategies to infer the post author\u2019s immediate identity on reddit. To protect the confidentiality of participants, we did not provide usernames or other identifying information to our annotators. Annotators were psychiatrists who were warned that the content of the posts was highly sensitive with potentially triggering topics such as self-harm and child abuse. Reddit\u2019s data sharing and research policy allows academic researchers to access certain Reddit data for the purposes of research, subject to the platform\u2019s terms and conditions. They require researchers to obtain approval through their data access request process before using the API. The policy outlines requirements around protecting user privacy, obtaining consent, and properly attributing the data source in any published work. They reserve the right to deny data access requests or revoke access if the research is deemed to violate Reddit\u2019s policies. Researchers must also agree to Reddit\u2019s standard data use agreement when accessing the data. Our research aims to contribute to the understanding of mental health discourse from adolescents on social media platforms. 
We believe the potential benefits of this work, in terms of insights that could improve mental health support and resources, outweigh the minimal risks to participants. However, we remain aware of the ethical complexities involved in using public social media data, and encourage further discussion and guidance in this emerging area of study. 3.3 Synthetic dataset In addition to the real dataset, we generate two synthetic datasets of 500 sentences each by prompting GPT3.5 (gpt-3.5-turbo-0125) and GPT4 (gpt-4-0125-preview) to create and label Reddit-like posts of 5 sentences (temperature 0, all other parameters set to default). The instructions given were made as similar as possible to those given to annotators, and the model was explicitly told to only label factors which applied to the author of the post (e.g., not to label My friend has depression with CONDITION). The prompt used can be found in Appendix A. Figure 1: Distribution of subreddits 3.4 Annotation schema Given that our goal is to obtain a wide range of relevant annotations for each sentence in order to test the LLMs\u2019 ability to generalise and perform open information extraction, and the previously mentioned important factors related to trauma [34] and precarity [35], we create the following six categories in consultation with a clinical psychiatrist: \u2022 TRAUMA (sexual abuse, physical abuse, emotional abuse, school bullying, death, accident, etc.) \u2022 PRECARITY (socioeconomic, parental conflict, parental illness, etc.) \u2022 SYMPTOM (self-harm, low self-esteem, anhedonia, panic attack, flashback, psychosis, insomnia, etc.) \u2022 CONDITION (eating disorder, depression, bipolar, bpd, anxiety, ptsd, adhd, substance abuse/addiction, etc.) \u2022 SUICIDALITY (no subcategories) \u2022 TREATMENT (no subcategories) Nineteen expert annotators were contacted and asked to annotate 500 sentences each for a fixed compensation of \u00a3120 (\u2248\u00a360/hour). These were UK-trained psychiatrists, all of whom had obtained Membership of the Royal College of Psychiatrists by post-graduate experience and formal examinations. Thirteen annotators annotated the Reddit dataset, two annotators annotated the synthetic datasets and four annotators re-annotated samples from the Reddit and synthetic datasets for inter-annotator agreement computation (100 sentences from each dataset, 1500 sentences in total). Annotators were given the above subcategory examples but allowed to use new subcategories when appropriate (no closed set of answers). They were given the post indices to provide context (i.e., so as to be aware of which sentences belonged to the same post). They were asked to annotate only school bullying as bullying, and other instances (e.g., sibling harassment) as emotional abuse. Anxiety was to be annotated as a symptom rather than a condition unless specifically described as a disorder. Experts performed the annotation by filling in the relevant columns in an Excel sheet with each sentence as a row. Importantly, given the known limitations of language models with negation [36], we wished to annotate both POSITIVE and NEGATIVE evidence in order to test the LLMs\u2019 ability to handle both polarities (e.g., I am not feeling suicidal as negative suicidality or We don\u2019t have any money issues as negative socioeconomic precarity). For this purpose, annotators were asked to use the prefixes P and N (e.g., P(adhd) in the CONDITION column or N(socioeconomic) in the PRECARITY column). 
3.5 Data processing and dataset statistics In order to compare expert annotations with LLM annotations despite the wide variety of subcategories and terms used by annotators, we create dictionaries mapping each term found in the dataset to a standard equivalent (e.g., p(emotional) to p(emotional abuse), p(physical violence) to p(physical abuse), p(gun violence) and p(school shooting) to p(violence), p(rape) to p(sexual abuse), p(financial burden) and p(poor) to p(socioeconomic precarity), p(divorce) to p(family conflict), p(self hatred) to p(low self esteem), etc.). Parental substance abuse is considered family illness, and any underspecified subcategories are marked as \u2018unspecified\u2019 (e.g., p(trauma unspecified)). The distribution of subcategories for each category can be found in Figures 2, 3, 4 and 5 in Appendix B. The most frequent subcategory in TRAUMA is emotional abuse, which occurs twice as often as physical abuse and death in the dataset. The most frequent form of PRECARITY is family conflict, then family illness (including parental substance abuse) and socioeconomic precarity. The most frequent CONDITIONS are depressive disorders, followed by substance abuse/addiction and ADHD. The most frequent SYMPTOMS are anxiety, low self-esteem, self-harm and low mood. Interestingly, the distribution of subcategories differs quite substantially in the synthetic datasets (distributions for the GPT3.5- and GPT4-generated datasets can be found in Appendix B). Overall, the number of subcategories is reduced, indicating less diversity (however, these are smaller datasets). The top trauma subcategories are sexual abuse for GPT3.5 and school bullying for GPT4, both of which were much less prevalent in real data. The second most prevalent condition for both GPT3.5 and GPT4 is eating disorders, whereas these ranked in 8th place in real data. Finally, unlike in real data, flashbacks and panic attacks are the 3rd and 4th most frequent symptoms for both GPT3.5- and GPT4-generated data, whereas self-harm ranks much lower than in real data. Given that many of these subcategories were given as examples in the annotator guidelines and LLM prompt, it is likely that the LLMs used them in a more homogeneous manner for generation than would be found in real data. However, the distribution is not entirely homogeneous, which suggests the LLMs did leverage some of the biases learned from their training data. 4 Results Once both human and LLM annotations are standardised, we conduct analyses to assess performance. We provide precision, recall and F1 at the category level and accuracy at the subcategory level collapsed across subcategories (given their high number). We compute category performance in two ways: Positive or Negative, where a point is awarded if the category contains an annotation in both human and LLM annotations, regardless of polarity (i.e., the annotator considered there was relevant information concerning the category TRAUMA), and Positive Only metrics, where negative annotations are counted as no annotations. The difference between the two metrics can be seen clearly in Table 1 (GPT3.5 results), where precision increases but recall diminishes for Positive Only. The increase in precision is due to the fact that GPT3.5 outputs a substantial number of negative annotations in cases where human annotators did not consider it relevant to mention the category. 
The reduction in recall, on the other hand, results from the fact that LLMs often confuse positive and negative annotations and will occasionally output a negative annotation for a positive one. For real data (Tables 1 and 2), GPT3.5\u2019s performance at the category level is average, with better performance in the Positive Only metrics (0.57). GPT4 performs better, especially in Positive Only metrics (0.63) and subcategory accuracy (0.48 vs. 0.39). In general, recall is higher than precision, indicating LLMs may be overpredicting labels. The performance for synthetic data (Tables 3 and 4) is substantially better, with no gap between the Positive or Negative and Positive Only metrics, suggesting less irrelevant negative annotations. Here again, GPT4 outperforms GPT3.5, both at the category level (0.75 vs 0.70 and 0.73 vs 0.68) and more particularly at the subcategory level, where GPT4 reaches an impressive accuracy of 0.72 (vs 0.42). The gap between recall and precision is reduced for GPT4, whereas GPT3.5 displays higher precision than recall here. In order to assess the upper bound of human performance, we calculate inter-annotator agreement for both real and synthetic datasets using Cohen\u2019s Kappa. Values can be found in Table 5. Interestingly, while performance at the category level in real data is lower (GPT3.5) or similar (GPT4) compared to humans, GPT4 displays a substantially higher accuracy at the subcategory level (0.47 vs 0.35). For synthetic data, GPT3.5 still underperforms human agreement on all three metrics, while GPT4 is on par with humans for the Positive Only and subcategory metrics and only underperforms in the Positive and Negative metric. Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.38 0.78 0.51 0.56 0.65 0.60 0.39 PRECARITY 0.26 0.43 0.33 0.45 0.31 0.37 0.22 CONDITION 0.33 0.85 0.48 0.54 0.72 0.62 0.55 SYMPTOMS 0.39 0.62 0.48 0.46 0.58 0.52 0.31 SUICIDALITY 0.44 0.79 0.56 0.80 0.68 0.73 / TREATMENT 0.48 0.72 0.58 0.72 0.58 0.64 / ALL 0.37 0.70 0.49 0.55 0.60 0.57 0.39 Table 1: GPT3.5 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level 5 Error analysis We examine some of the sentences annotated by the LLMs in order to perform error analysis and extract the following findings (as mentioned previously some words have been paraphrased to preclude full-text search allowing user identification): \u2022 Both GPT3.5 and GPT4 produce infelicitous negations, i.e., negative annotations which would seem irrelevant to humans, e.g., (I have amazing people around me =>negative parental death or The internet is my one only coping mechanism =>trauma unspecified) \u2022 Despite being specifically prompted to only annotate factors related to the writer/speaker, LLMs (including GPT4) do not always comply, e.g., She comes from what is, honestly, a horrific family situation =>emotional abuse) 6 Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.44 0.89 0.59 0.57 0.84 0.68 0.57 PRECARITY 0.31 0.52 0.39 0.50 0.46 0.48 0.36 CONDITION 0.46 0.81 0.59 0.61 0.77 0.68 0.57 SYMPTOMS 0.35 0.78 0.49 0.45 0.73 0.56 0.41 SUICIDALITY 0.36 0.93 0.51 0.70 0.87 0.77 / TREATMENT 0.39 0.87 0.54 0.64 0.81 0.71 / ALL 0.39 0.80 0.52 0.55 0.75 0.63 0.48 Table 2: GPT4 (real data). 
Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.90 0.49 0.64 0.90 0.49 0.64 0.38 PRECARITY 0.84 0.69 0.76 0.86 0.69 0.76 0.54 CONDITION 0.44 0.67 0.53 0.47 0.67 0.55 0.59 SYMPTOMS 0.85 0.59 0.70 0.84 0.59 0.69 0.36 SUICIDALITY 0.75 1.00 0.85 0.77 0.90 0.83 / TREATMENT 0.68 0.84 0.75 0.76 0.57 0.65 / ALL 0.74 0.65 0.70 0.77 0.61 0.68 0.42 Table 3: GPT3.5 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.84 0.95 0.89 0.86 0.92 0.89 0.82 PRECARITY 0.85 0.84 0.85 0.91 0.82 0.86 0.80 CONDITION 0.61 0.67 0.64 0.60 0.67 0.63 0.67 SYMPTOMS 0.49 0.78 0.60 0.53 0.80 0.64 0.69 SUICIDALITY 0.81 0.94 0.87 0.78 0.82 0.80 / TREATMENT 0.85 0.89 0.87 0.87 0.78 0.82 / ALL 0.69 0.83 0.75 0.69 0.79 0.73 0.72 Table 4: GPT4 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level \u2022 Even GPT4 makes errors regarding negation (e.g., I\u2019ve read about people with autism getting temper tantrums/meltdowns, however, that has never really been a problem for me=>negative autism or i had in my head that something inside was very wrong, but i never felt completely depressed all the time so i never took bipolar seriously =>negative bipolar disorder) \u2022 Despite being prompted to annotate suicidality in a separate category, LLMs often annotate it in the SYMPTOM rather than SUICIDALITY category \u2022 GPT3.5 especially often outputs irrelevant/spurious/incorrect labels (e.g., \u2018unemployed\u2019 as condition, \u2018ambition\u2019 as symptom, labelling physical conditions instead of mental conditions only, etc.) 7 Positive and Negative Positive Only Subcategory Annotator vs. Annotator (real data) 0.60 0.59 0.35 GPT3 vs. Annotator (real data) 0.39 0.52 0.37 GPT4 vs. Annotator (real data) 0.43 0.58 0.47 Annotator vs. Annotator (synthetic data) 0.77 0.71 0.68 GPT3 vs. Annotator (synthetic data) 0.64 0.63 0.40 GPT4 vs. Annotator (synthetic data) 0.70 0.69 0.71 Table 5: Inter-annotator agreement (Cohen\u2019s Kappa) \u2022 Even GPT4 makes errors regarding factuality (e.g., It was around my second year in junior high school when my father tried to take his life =>positive death) However, in many cases the assessment is not entirely fair, as the LLMs (particularly GPT4) often catch annotations which human annotators missed, or the difference in subcategories is subjective and open to debate (e.g., school bullying vs emotional abuse, emotional abuse vs abuse unspecified, etc.). Thus it is possible that LLMs, or most likely GPT4, in fact outperformed experts on this task. 6 Discussion The results obtained from our comparison of LLM annotations with human annotations on both real and synthetic data allow us to make a few conclusions and recommendations. Overall, both LLMs perform well. 
Inter-annotator agreement and performance indicate that GPT4 performs on par with human annotators. In fact, error analysis and manual examination of annotations suggest that the LLMs potentially outperform human annotators in terms of recall (sensitivity), catching annotations which have been missed. However, while recall might be improved in LLMs versus human annotators, precision may suffer in unexpected ways, for example through errors in the use of negation and factuality, even in the case of GPT4. LLMs display a particular tendency to overpredict labels and produce negative annotations in infelicitous contexts, i.e., when humans would deem them irrelevant, creating some noise. However, these negative annotations are not technically incorrect. While accuracy errors could be found in the LLM output, the experts\u2019 outputs were not entirely free of them, and previous work by [37] suggests LLMs may both be more complete AND more accurate than medical experts. There may still be a difference in the type of accuracy errors produced by LLMs, which will have to be investigated in future research. In terms of accuracy at the subcategory level, we were surprised to find GPT4 outperformed human agreement by a large margin in real data (0.47 vs 0.35). We hypothesise this is due to the fact that human annotators display higher subjectivity in their style of annotation at the subcategory level (given the lack of predetermined subcategories) and diverge more from one another. LLMs are likely to be more \u2018standard\u2019 and generic and thus potentially more in agreement with any given human annotator. More specifically, LLMs tend to be consistent from one annotation to the next, with higher recall, whereas human annotators showed less consistency. Therefore, if a sentence mentions physical, sexual and emotional abuse, annotators might only mention two out of three, but when all three are mentioned an LLM is more likely to be in agreement than another annotator, i.e., the LLM will catch more of the perfectly recalled annotations than the second annotator. The better performance demonstrated on synthetic data does not seem to be due to LLMs performing better on data they are generating, but rather to the synthetic data being less complex and diverse and thus easier to annotate for both LLMs and humans, as evidenced by GPT4 reaching similar inter-annotator agreement scores to humans (with both human-human and LLM-human agreement around 10% higher for synthetic data). This better performance could still warrant using synthetic data for, e.g., training machine learning models (given more reliable labels), but only in cases where the potential loss in diversity is compensated for by the increase in label reliability. This will likely depend on the specific application. 7 Conclusion We presented the results of a study examining human and Large Language Model (GPT3.5 and GPT4) performance in extracting mental health factors from adolescent social media data. We performed analyses on both real and synthetic data and found GPT4 performance to be on par with human inter-annotator agreement for both datasets, with substantially better performance on the synthetic dataset. However, we find that GPT4 still makes errors in negation and factuality that human annotators would not, and that synthetic data is much less diverse and differently distributed than real data. 
The potential for future applications in healthcare will have to be determined by weighing these factors against the substantial reductions in time and cost achieved through the use of LLMs. Acknowledgment I.L., D.W.J., and A.K. are partially supported by the National Institute for Health and Care Research (NIHR) AI Award grant (AI_AWARD02183), which explicitly examines the use of AI technology in mental health care provision. A.K. declares a research grant from GlaxoSmithKline (unrelated to this work). This research project is supported by the NIHR Oxford Health Biomedical Research Centre (grant NIHR203316). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health and Social Care.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.14618v1",
"title": "Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing",
"abstract": "Large language models (LLMs) excel in most NLP tasks but also require\nexpensive cloud servers for deployment due to their size, while smaller models\nthat can be deployed on lower cost (e.g., edge) devices, tend to lag behind in\nterms of response quality. Therefore in this work we propose a hybrid inference\napproach which combines their respective strengths to save cost and maintain\nquality. Our approach uses a router that assigns queries to the small or large\nmodel based on the predicted query difficulty and the desired quality level.\nThe desired quality level can be tuned dynamically at test time to seamlessly\ntrade quality for cost as per the scenario requirements. In experiments our\napproach allows us to make up to 40% fewer calls to the large model, with no\ndrop in response quality.",
"authors": "Dujian Ding, Ankur Mallick, Chi Wang, Robert Sim, Subhabrata Mukherjee, Victor Ruhle, Laks V. S. Lakshmanan, Ahmed Hassan Awadallah",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Large language models (LLMs) have become the dominant force in natural language processing in recent years [Zhao et al., 2023]. Their impact has been especially striking in generative applications where it has extended beyond standard language understanding and question-answering benchmarks like [Hendrycks et al., 2020, Srivastava et al., 2022] to several successful real-world deployments. These include the wildly popular ChatGPT [OpenAI, b] and several other chatbots [Zheng et al., 2023] powered by different LLMs [Taori et al., 2023, Touvron et al., 2023, OpenAI, 2023], which allow users to engage in natural language conversations and obtain informative responses on a range of practically useful tasks like creative writing, translation, code completion, etc. An important added attraction of these models is their accessibility. Users can input queries and receive responses in natural language, without any specialized data or code, and this is what has created such a widespread demand for their services across regions, professions, and disciplines. The best performing LLMs are based on the transformer architecture of [Vaswani et al., 2017] and generally have tens of billions of parameters. E.g., Alpaca [Taori et al., 2023] has 13 billion parameters, the best version of Llama-2 [Touvron et al., 2023] has 70 billion parameters, and OpenAI\u2019s GPT-3.5 [OpenAI, a], and GPT4 [OpenAI, 2023] are rumored to be much larger. Their \u2217work performed during internship at Microsoft \u2020work performed while at Microsoft 1 arXiv:2404.14618v1 [cs.LG] 22 Apr 2024 (a) Accuracy v/s size of LLM (b) Tail of accuracy difference (c) Results with routing Figure 1: We use a dataset of natural language queries from a range of tasks like question answering, summarization, information extraction, etc. (See Section 4 for details). We observe that (a) smaller models generally give poorer response quality or lower BART score [Yuan et al., 2021], (b) Llama-2 (13b) outperforms GPT-3.5-turbo on around 20% examples, and (c) our router can make 22% fewer calls to GPT-3.5-turbo (cost advantage) with 1% drop in response quality (BART score). enormous size and the autoregressive nature of text generation in their transformer architectures means that these models typically have a high compute and memory requirement that can only be met by expensive cloud servers [Yu et al., 2022]. This can potentially impose an enormous cost on developers and users as more LLM-based services are introduced. In response to this there has been a surge of interest in designing smaller, cost-effective LLMs \u2013 e.g., [Touvron et al., 2023] provides multiple versions of Llama-2, with the smallest having only 7 billion parameters, small enough to run on a laptop1, while the smallest offering of Google\u2019s Palm-2 model can even run on mobile devices2. However empirical evaluations in [Chung et al., 2022, Touvron et al., 2023] as well as our own evaluation in Figure 1a show that smaller models generally lag behind in terms of response quality. Faced with this tradeoff between response quality and inference cost, we propose a hybrid inference approach which provides the best of both worlds. 
Our approach is motivated by the observation that most tasks for which LLMs are useful, like creative writing, translation, code completion, etc., include a range of queries of different difficulty levels and there is always a subset of \u201ceasy\u201d queries for which responses of a small (inexpensive and weak) model may be comparable to, and sometimes even better than those of a large (expensive and powerful) model. This is also illustrated in Figure 1b where we plot the tail of the quality gap (defined in Section 3) between the 13 billion parameter version of Llama-2 and OpenAI\u2019s GPT-3.5-turbo, the model that powers ChatGPT. Quality gap is non-negative for examples where the response quality of Llama-2 is comparable to or better than that of GPT-3.5-turbo which is the case for around 20% queries in our dataset (described in Section 4). We leverage this insight to train a router that takes a large model and a small model as input, and learns to identify these easy queries as a function of the desired level of response quality, while taking into account the generative nature of tasks, inherent randomness in LLM responses, and response quality disparity between the two models. At test time, the router seamlessly adjusts to different response quality requirements and assigns the corresponding \u201ceasy\u201d queries to the small model, leading to significant inference cost reduction with minimal drop in response quality. In 1https://github.com/microsoft/Llama-2-Onnx 2https://blog.google/technology/ai/google-palm-2-ai-large-language-model/ 2 Figure 2: Routing between edge and cloud. Figure 1c our router assigns 22% of queries to Llama-2 (13b) 3 with less than 1% drop in response quality measured in BART scores [Yuan et al., 2021]. The gains are even higher for pairs where the small model is closer in terms of response quality to the large model (see Section 4). With the explosion in the complexity and costs of LLM deployments, small companies and individual consumers, have started to rely on the pre-existing LLMs hosted on platforms like HuggingFace [Face] and OpenAI [OpenAI, c]. This is an instance of the broader Machine-Learning- As-A-Service (MLaaS) paradigm, wherein users (small companies/individual consumers) interact with the models through an API where they submit their queries [Kang et al., 2022] and have limited visibility into the models themselves. In this context, our hybrid inference approach can reduce the costs incurred by both consumers and platform owners because a) consumers can use it to route easy queries to small models hosted on their edge devices (laptops/smartphones) and only call the API for the more complex queries (illustrated in Figure 2) and b) platform owners can automatically route queries to lower cost models at the backend without affecting the user experience, as long as the response quality levels are maintained. Thus our hybrid inference approach offers a flexible and cost-effective solution for harnessing the full potential of LLMs while accommodating diverse cost budgets and quality requirements. 
The main technical contributions of this work are: a) we are the first to explore cost-effective and quality-aware hybrid LLM inference, b) we design a novel query router which routes queries based on an estimate of the response quality gap between models (Section 3.1), c) we incorporate uncertainty due to randomness in LLM responses in our router design to improve performance (Section 3.2), d) we identify challenges for our router when the small model is significantly weaker than the large model and introduce a novel data transformation to address this issue (Section 3.3), and e) we provide extensive experimental results (Section 4) on a large benchmark dataset of real world natural language queries and responses [Jiang et al., 2023] thereby demonstrating the value of the approach and its superiority over baseline approaches, enabling LLM providers and consumers to cost-efficiently enable LLM-backed experiences. 3We term the fraction of queries routed to the small model as the cost advantage (see \u00a72.3) 3",
"main_content": "2.1 Related Work Large Language Models (LLMs). The advent of LLMs has led to a paradigm shift in the study of natural language processing (NLP), computer vision, information retrieval, and other domains[Menghani, 2023, Chen et al., 2023, Jiang et al., 2023]. The impressive effectiveness and generalizability of LLMs has come at the price of a drastic increase in LLM sizes [Treviso et al., 2023] and consequent challenges, including huge amounts of computational resources and data required to train, and prohibitive expenses at both training and deployment stages [Bender et al., 2021]. Efficient Machine Learning (ML) Inference. LLMs belong to a class of models called foundation models [Bommasani et al., 2021] \u2013 models that are trained once and can then be used to serve a wide variety of tasks. As such, we expect inference cost to dominate the overall cost of such models and hence focus on works that reduce the cost of ML inference [Menghani, 2023]. The most common approach for efficient ML inference is model compression i.e., replacing a large model with a smaller model of comparable accuracy. Common techniques for model compression include (i) model pruning [Hassibi et al., 1993, LeCun et al., 1989] which drops parts of the model with minimal accuracy loss, (ii) quantization [Jacob et al., 2018, Vanhoucke et al., 2011] which reduces model memory footprints and inference latency by reducing the precision of data representation (e.g., FP32 to INT8), (iii) knowledge distillation [Hinton et al., 2015, Urban et al., 2016] which trains small student models to mimic large teacher models, and (iv) Neural Architecture Search [Elsken et al., 2019, Zoph and Le, 2016] which tunes model architecture to improve model performance, under inference cost constraints. Such static efficiency optimizations typically produce a fixed model with lower inference cost and lower accuracy compared to the large model which may not suffice for foundation models like LLMs, whose core premise is that the same model will serve a range of tasks, each with its own accuracy/cost constraints. This is already manifesting in inference platforms described in Section 1 which need more dynamic optimizations to meet the demands of all users. Hybrid ML Inference. Recent works [Kag et al., 2022, Ding et al., 2022] have introduced a new inference paradigm called hybrid inference which uses two models of different sizes instead of a single model for inference. The smaller model (e.g. Llama2 [Touvron et al., 2023]) generally has lower inference cost but also lower accuracy than the larger model (e.g. GPT-4 [OpenAI, 2023]). The key idea is to identify and route easy queries to the small model so that inference cost can be reduced while maintaining response quality. By tuning a threshold on query difficulty we can dynamically trade off quality and cost for the same inference setup. [Kag et al., 2022] study this setup for image classification and propose to train the small model, large model, and router from scratch. However LLM training is expensive and retraining LLMs from scratch for every scenario goes against the very premise of inference with pre-trained foundation models. Moreover text generation [Iqbal and Qureshi, 2022] is often more ambiguous and challenging than image classification due to which novel techniques are required for effective hybrid LLM inference for text generation. Inference with Multiple LLMs. 
Some recent works [Jiang et al., 2023, Chen et al., 2023, Leviathan et al., 2023, Kim et al., 2023] use multiple LLMs for inference but these approaches typically call more than one LLM for a single query that can incur significant computational overheads. Specifically [Jiang et al., 2023] calls an ensemble of LLMs at inference time due to which the inference cost will be proportional to the number of models in the system. [Chen et al., 2023] performs inference using a cascade of LLMs where responses to the query are generated sequentially by the LLMs in the cascade until one of the models has a confidence score higher than a predefined 4 threshold. Our work provides high quality responses while always making a single LLM call for all queries and will thus incur much lower computational cost than both of these works on average. Speculative decoding, introduced in [Leviathan et al., 2023, Kim et al., 2023] speeds up decoding of expensive models by invoking small-and-efficient decoders on the \u201ceasy\u201d decoding steps. Instead, in our work we are interested in query routing which assigns \u201ceasy\u201d queries to small models to reduce overall inference costs while maintaining high performance. While the two approaches have different goals, an interesting line of future work would be to combine these so that our router assigns queries to the small or large model based on query difficulty and then speculative decoding is applied on top to speed up inference for queries assigned to the large model thereby leading to further cost reduction. 2.2 Problem Setting We extend the hybrid ML paradigm to LLM inference by routing queries between two models with different inference costs and accuracy. This allows platforms [Face, OpenAI, c] to route queries across backend LLMs to lower costs while dynamically tuning the ratio of queries assigned to each model as per user quality requirements. It also allows users with small models on local (edge) devices to only call the platform for hard queries (Figure 2), thus significantly reducing their expenses. We use X and Z to denote the input query space and the set of all possible output responses respectively. Let L : X \u2192Z denote the large model and S : X \u2192Z denote the small model. Formally, the objective in our paradigm is to learn a router r : X \u2192{0, 1} such that each user query x \u2208X is routed to the small model S(x) if r(x) = 0, and to the large model L(x), otherwise. Note that we always route each query to a single LLM at inference time as opposed to using an ensemble [Jiang et al., 2023] or a cascade [Chen et al., 2023] of LLMs, which may call multiple LLMs to resolve a single query and incur significant computational overheads. 2.3 Evaluation Metric Response Quality Automatic evaluation for text generation is a challenging and widely studied problem. Traditional metrics, such as BLEU and ROUGE, initially designed for machine translation and summarization, have been found to be of limited concordance with human judgment and restricted applicability across diverse NLP tasks [Blagec et al., 2022]. Significant research efforts have been devoted to implementing task-agnostic evaluation metrics with neural networks. GPT-ranking [Jiang et al., 2023], as a representative example, employs GPT models (e.g., GPT-4 [OpenAI, 2023]) to provide relative rankings between pairs of generated outputs. 
In spite of the high correlation with human perception, GPT-ranking suffers from high computational costs and inability to distinguish between examples with the same ranking. Instead, we use the BART score [Yuan et al., 2021] to evaluate response quality of different models since (1) it is inexpensive to compute in comparison to LLM-based metrics such as GPT-ranking, and (2) it has been shown in prior work [Jiang et al., 2023] that this metric correlates well with the ground truth. We also provide a case study in Appendix C.2 to empirically justify using BART score as the response quality metric. We use q(z), q : Z \u2192R to denote the BART score (response quality) of model responses z \u2208Z. Cost Advantage The absolute costs of running a model may not be known a priori, and may be expressed using a variety of metrics, including latency, FLOPs, energy consumption, etc. In LLM inference, however, each of these metrics is affected by several underlying confounders such as different prompt templates, hardware capability, network connectivity, etc. Moreover different 5 platforms/users may be interested in different metrics. However the common underlying assumption in this and previous works on efficient ML inference is that smaller models are more efficient than larger models and therefore we expect to obtain an improvement in all the metrics by routing more queries to the smaller model. Hence we define cost advantage as the percentage of queries routed to the smaller model. Note that the notion cost advantage has been used as a generic efficiency metric in previous hybrid ML work [Kag et al., 2022], where it is termed as coverage. 3 Hybrid LLM Inference Easy Queries. We refer to queries for which the response quality of the small model is close to the response quality of the large model as \u201ceasy\u201d queries. The goal of our hybrid inference framework is to identify the easy queries and route them to the small model thereby ensuring significant inference cost reduction without much drop in response quality. Note that the easy queries as defined here, need not necessarily be queries that are easy/inexpensive to respond to, they are just queries for which the small model can match up to the large model. Examples of easy and hard queries as per this definition are provided in Appendix C.1. Quality Gap. We define quality gap of a query x as H(x) := q(S(x))\u2212q(L(x)) i.e. the difference in quality of the small model\u2019s response S(x) and the large model\u2019s response L(x). The quality gap is a random variable since LLM responses are typically non-deterministic. This is illustrated in Figure 3 below where the blue and orange plots correspond to the distribution of responses from FLAN-t5 (800m) 4 [Chung et al., 2022] and Llama-2 (13b) [Touvron et al., 2023] for a single query. Figure 3: Response quality distribution for FLAN-t5 (800m) and Llama-2 (13b) on the query \u201cHow to identify the index of median?\u201d measured in BART scores. Llama-2 (13b) with transformation significantly overlaps with FLAN-t5 (800m). Proposed Orchestration Framework. Queries are routed using a BERT-style encoder model (e.g., DeBERTa, [He et al., 2020]) which is trained on a dataset of representative queries and learns to predict a score. Since the router is an encoder model, a single pass of the query through it is sufficient to generate the score and we assume that the cost of this step is negligible compared to the cost of running autoregressive decoding using the large model L(x) [Sun et al., 2019]. 
Thus, we expect that using the router to route queries to the small model will not detract significantly from the realizable cost advantage. Router Score. We design the router score to be large for easy queries as defined above. Intuitively, an estimate of Pr[H(x) \u22650] is a suitable candidate since a large value of Pr[H(x) \u22650] = Pr[q(S(x)) \u2265q(L(x))] corresponds to queries for which there is a high likelihood that the response quality of the small model will be at least as high as that of the large model. However we show below that in scenarios where the large model is significantly more powerful than the small model i.e. q(S(x)) << q(L(x)) in general, one can train more effective routers by relaxing the definition of easy queries to Pr[H(x) \u2265\u2212t] = Pr[q(S(x)) \u2265q(L(x)) \u2212t] for an appropriate t > 0. At test time we achieve the desired performance accuracy tradeoff by tuning a threshold on the score and routing queries with score above the threshold to the small model. For a 4We use the FLAN-t5-large model from https://huggingface.co/google/flan-t5-large. 6 router with parameters w, we denote router score by pw(x), pw : X \u2192[0, 1]. We discuss different router score designs in the rest of this section assuming a training set of N queries x1, . . . , xN. 3.1 Deterministic Router Previous work on hybrid ML [Ding et al., 2022, Kag et al., 2022] makes the assumption that neural models are deterministic functions that map input features to a single point in the output space. To realize this for LLMs, we sample a single response per query from each model. We assign boolean labels ydet i = 1[q(S(xi)) \u2265q(L(xi))], i = 1, . . . , N to each training query with the BART score as the quality function q(.). Our router is trained by minimizing the binary cross-entropy loss [Ruby and Yendapalli, 2020]. L(w) = \u22121 N N X i=1 \u0000ydet i log(pw(xi)) + (1 \u2212ydet i ) log(1 \u2212pw(xi)) \u0001 (1) Observe that the assigned labels ydet i can be viewed as an estimate for Pr[H(xi) \u22650] given a single response per query from each model and thus minimizing the above loss encourages the router score pw(x) to be close to Pr[H(x) \u22650] for test queries. We refer to this deterministic router as rdet. 3.2 Probabilistic Router The determinism assumption can be justified for tasks where the ground truth labels are often explicit and unique such as image classification [Masana et al., 2022] and video segmentation [Yao et al., 2020]. When it comes to NLP tasks, however, there is usually no single best answer due to the intrinsic ambiguity and complexity of natural languages. LLMs are widely used as non-deterministic generators to capture the intrinsic uncertainty of NLP tasks, as shown in Figure 3 (ignore the dashed curve for now). The non-determinism mainly comes from the randomness in the decoding phase. Users typically control the level of uncertainty by choosing different decoding strategies such as nucleus sampling [Holtzman et al., 2019], as well as the values of the hyper-parameter temperature. Intuitively, higher temperature values result in a higher level of randomness and diversity among the generated responses. For black-box LLM APIs such as GPT-4 [OpenAI, 2023], it has been observed that even upon setting temperature to the minimum value 0, it can still provide different responses for the same input queries. The underlying mechanism is still an open problem while a recent study hints at the instability of the MoE backbone [Skyward, 2023]. 
We propose to incorporate the uncertainty due to the non-deterministic nature of LLM comparisons into the router training loss by relaxing the hard labels ydet i \u2208{0, 1} to the soft labels yprob i := Pr[H(xi) \u22650] = Pr[q(S(xi)) \u2265q(L(xi))] = E[1[q(S(xi)) \u2265q(L(xi))]] where E denotes the expectation. In practice, we estimate expectation by sampling 10 responses from each model and computing the sample average of the corresponding indicator function values. Observe that the hard label ydet i is a higher-variance estimate of E[1[q(S(xi)) \u2265q(L(xi))]] (since it is obtained from a single sample) and hence we expect improved performance with the following training loss, L(w) = \u22121 N N X i=1 \u0010 yprob i log(pw(xi)) + (1 \u2212yprob i ) log(1 \u2212pw(xi)) \u0011 (2) We refer to this probabilistic router as rprob. 7 (a) Before transformation. (b) Grid search for the best t. (c) After transformation. Figure 4: Effect of data transformation on labels for training the router. 3.3 Probabilistic Router with Data Transformation While so far we have designed scores that try to estimate Pr[H(x) \u22650], we observe that the empirical estimate of Pr[H(xi) \u22650] = E[1[q(S(xi)) \u2265q(L(xi))]] tends to be extremely small when the large model is significantly more powerful than the small model (0 for almost 90% of the queries in Figure 4a with Flan-t5 (800m) as the small model and Llama-2 (13b) as the large model). Because q(S(x)) << q(L(x)) for most queries in this case, it provides an extremely weak signal for training using Equation (2) and as shown in Section 4 both rdet and rprob fail to provide much improvement over random query assignment in this case. Traditional approaches for learning with imbalanced data have their own shortcomings [Krawczyk, 2016]. Moreover our goal is to only design a router that can reduce inference cost while maintaining response quality as much as possible and so we are not tied to a particular definition of class labels to achieve this. We leverage this flexibility to introduce new labels ytrans i (t) := Pr[H(xi) \u2265\u2212t] = Pr[q(S(xi)) > q(L(xi)) \u2212t] for some t > 0. Since \u2212t < 0, Pr[H(x) \u2265\u2212t] \u2265Pr[H(x) \u22650] by definition of the tail distribution and so we expect this relaxation to provide a stronger signal for router training while still allowing us to identify the easy queries i.e. those queries for which q(S(x)) has a high likelihood of being close to q(L(x)) (q(S(x)) > q(L(x)) \u2212t). Visually, this corresponds to comparing the distribution of the small model\u2019s response with a shifted distribution of the large model\u2019s response to a query (dotted curve in Figure 3). Now the question is how to choose the best relaxation t? Given that tail probability Pr[H(x) \u2265\u2212t] lies in [0, 1], we choose t by maximizing the average pairwise differences between the transformed labels to push them as far apart as possible and provide a strong signal for training. Thus we set, t\u2217= arg max t 1 N 2 X (i,i\u2032) | ytrans i (t) \u2212ytrans i\u2032 (t) | (3) We currently solve the above optimization problem via grid-search and leave more sophisticated approaches for future work. We plot the optimization objective for different values of t for our training dataset in Section 3.3 and show the distribution of transformed labels ytrans i (t\u2217) in Figure 4c. As we see, the distribution is significantly more balanced now and we expect the resulting router to be much more effective. 
Once again, we train the router by minimizing the loss L(w) = \u22121 N N X i=1 \u0000ytrans i (t\u2217) log(pw(xi)) + (1 \u2212ytrans i (t\u2217)) log(1 \u2212pw(xi)) \u0001 (4) and we refer to this probabilistic router as rtrans. 8 Cost Advantage (%) Response Quality (BART Score) Drop w.r.t all-at-large (%) S: Llama-2 (7b) S: Llama-2 (13b) S: FLAN-t5 (800m) L: Llama-2 (13b) L: GPT-3.5-turbo L: Llama-2 (13b) rdet rprob rtrans rdet rprob rtrans rdet rprob rtrans 10 0.1 -0.1 0.1 0.1 -0.1 0.2 2.3 2.2 2.1 20 0.1 0.0 0.0 1.0 0.8 0.8 5.8 5.8 4.7 40 0.2 0.1 0.0 3.5 3.4 2.9 13.8 13.1 10.3 Table 1: Cost advantage v.s. performance drop for model pairs of different performance gaps. Performance drops are computed w.r.t. the all-at-large baseline. 4 Evaluation 4.1 Evaluation Setup Dataset. We use the MixInstruct dataset from [Jiang et al., 2023] to evaluate the effectiveness of different routing strategies across a wide range of tasks (e.g., question answering, summarization, information extraction). MixInstruct is a large-scale collection of real-world instructions and consists of examples from four public datasets (see Table 5 in Appendix B). The broad range of tasks in the dataset enables us to train a generic router that will be effective across different scenarios. We uniformly sample 10k training examples from the training split of MixInstruct, for each of which we generate 10 responses from all LLMs under consideration. Our validation and test splits are the same as the MixInstruct dataset, which consist of 5k instruction examples separately. Router Model. We use DeBERTa-v3-large [He et al., 2020] (300M) as the backbone to train our routers. We train each router with the corresponding loss from Section 3 for 5 epochs and use the validation set to choose the best checkpoints for final evaluation. All experiments are conducted with 1 NVIDIA A100 GPU of 80GB GPU RAM. We have made our source code available at https://github.com/m365-core/hybrid_llm_routing. Evaluation Measures. We use BART score [Yuan et al., 2021] as the quality metric and use fraction of queries routed to the small model (cost advantage) as the efficiency metric (see Section 2.3). Baselines. To the best of our knowledge, there has been no prior work specifically on routing between LLMs. We consider three straightforward baselines: all-at-large, all-at-small, and random. All-at-large routes all queries to the large model, while all-at-small routes all queries to the small model. Random generates a random number in [0,1] and selects the large model if it is below the probability threshold. Experiments. We investigate all three routers (see Section 3): the deterministic router rdet, the probabilistic router rprob, and the probabilistic router augmented with data transformation rtrans. We select candidate model pairs from FLAN-T5 (800m), FLAN-T5 (11b), Llama-2 (7b), Llama-2 (13b), and GPT-3.5-turbo for our experiments. At test time the trained router (rdet, rprob, or rtrans) takes a threshold value as input and routes all queries with router score higher than the threshold to the small model as these are the easy queries. We evaluate the router performance in Section 4.2 in terms of both BART score and cost advantage (Figure 5 and Table 1), validate that the router is 9 (a) Small performance gap (b) Medium performance gap (c) Large performance gap Figure 5: Error-cost tradeoffs achieved by rdet, rprob, and rtrans for different performance gaps. 
4.2 Router Performance Results
Small performance gap. LLM pairs sharing the same architecture, such as Llama-2 (7b) v.s. Llama-2 (13b), tend to exhibit a small performance gap, as seen in Figure 5a. In this case, by trading little to no performance drop, we show that (1) the deterministic router rdet can achieve good cost advantages, (2) rprob consistently improves on rdet, and (3) rtrans is able to match or slightly improve on the performance of rprob. Numerical comparison results are summarized in Table 1. rdet routes 20% (40%) of queries to the small model, i.e., Llama-2 (7b), with only a 0.1% (0.2%) drop in response quality w.r.t. the all-at-large baseline. Impressively, rprob and rtrans achieve a 20% cost advantage without any quality drop, and rtrans is able to achieve even a 40% cost advantage without quality drop, which can be attributed to these methods capturing the non-deterministic nature of LLMs.
Medium performance gap. Often there is only a moderate performance gap between leading open-source LLMs like Llama-2 (13b) and state-of-the-art commodified LLMs such as GPT-3.5-turbo (Figure 5b). In this case, all our routers deliver reasonable cost advantages with acceptable quality drop. The effectiveness ordering of rdet, rprob, and rtrans resembles that in the small quality gap case. All routers achieve 20% (40%) cost advantage with ≤1% (≤4%) quality drop (Table 1). In the 40% cost advantage regime, rprob slightly outperforms rdet, and rtrans improves on rprob by 0.5% in terms of quality drop.
Large performance gap. In edge-cloud routing scenarios, edge devices often have very limited resources and can only support small models of limited quality, which can be significantly outperformed by large models deployed on the cloud.
Figure 6: Difference between average quality gap of queries routed to the small and large models with different performance gaps. (a) Small performance diff. (b) Medium quality gaps. (c) Large quality gaps.
We investigate how to effectively route queries with LLM pairs of large performance gaps, such as FLAN-t5 (800m) and Llama-2 (13b) (Figure 5c). Non-trivial routing is challenging in this situation since the large model dominates for a majority of examples. Both rdet and rprob perform only marginally better than the random routing baseline. In contrast, rtrans can still effectively distinguish relatively easy queries from the harder ones. rtrans achieves a 40% cost advantage with a 10.3% quality drop, which is 3.5% and 2.8% lower than rdet and rprob respectively (Table 1). In the course of these experiments we made the following interesting observations: 1. When the cost advantage is modest (e.g., 10%) and the LLM performance gaps are not large (e.g., Llama-2 (7b) v.s. Llama-2 (13b)), rprob is able to achieve even better performance than all-at-large, which leads to the "negative quality drops" in Table 1.
This is because, as seen from the large value of Pr[H(x) \u22650] in the tail distribution in Figure 5, the response quality of the small model may be higher than that of the large model for several queries and by routing these queries to the small model, the router is able to even beat all-at-large. 2. For lower cost advantages (\u226410%) and small or moderate LLM performance gaps rtrans can be slightly outperformed by rdet or rprob. This might be due noise in the estimation of the relaxation parameter t from sample averages instead of expectation in Equation (3) and from the grid search process leading to suboptimal settings of rtrans. However we clearly see that in more challenging routing scenarios with high cost advantage targets or large LLM performance gaps, both rdet and rprob have difficulty in correctly routing queries, and rtrans starts to dominate due to the benefits of the data transformation. 4.3 Router Validation Results We also validate that the router is functioning as intended, that is, routing easy queries to the small model and hard queries to the large model. To see this, in Figure 6 we plot the difference between the average quality gaps of queries routed to the small model and those routed to the large model for our router and the random baseline v/s different values of cost advantages (i.e., the fraction of queries routed to the small model). Since the random baseline randomly assigns queries the average difference is nearly always zero. However our router routes easy queries i.e. queries with large quality gap (q(S(x)) \u2212q(L(x))) to the small model and queries with small quality gap to the 11 large model. Hence the difference between the average quality gaps always has a significant positive value indicating that more easy queries are routed to the small model than to the large model in our approach as compared to the random assignment approach at all cost advantages. 4.4 Router Latency We measure the latency of our router and compare it to the latency of the different LLMs \u2013 Flan-t5 (800m), Llama-2 (7b), and Llama-2 (13b) that we use in our experiments for generating responses to user queries. Note that the latency of all the routers rdet, rprob, and rtrans will be the same since they use the same model (DeBERTa-v3-large [He et al., 2020]) and are just trained differently. Also, we do not measure the latency of GPT-3.5-turbo since its responses are generated by querying the OpenAI API [OpenAI, c] as the model weights are not publicly available due to which it is not possible to disentangle the inference latency from the network latency, queueing delay, latency of the API call, etc. However we note that the inference latency of all other LLMs we consider is significantly larger than that of the router (see Table 2) and therefore we expect the same to hold for GPT-3.5-turbo as well. The latency results are reported in Table 2 where we measure the average latency per query averaged over 200 randomly chosen queries from our dataset (confidence bounds correspond to one standard error). As expected the router processes queries significantly faster than all the LLMs (nearly 10\u00d7 faster than the fastest LLM \u2013 FLAN-t5(800m)). This is both due to its smaller size (300m parameters) and the fact that it performs a single forward pass over the query to generate the score while the LLMs generate the response token-by-token in an autoregressive fashion due to which the inference latency is proportional to the response length. 
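The latency comparison can be reproduced with a simple timing harness such as the sketch below; the function passed in is a stand-in for either the router's single forward pass or an LLM's full generation call, not code from the released implementation.
```python
# Illustrative per-query latency measurement with a one-standard-error bound.
import time
import statistics

def mean_latency(fn, queries):
    """Average wall-clock latency per query for callable `fn` over `queries`."""
    times = []
    for q in queries:
        start = time.perf_counter()
        fn(q)                                # e.g., router scoring or LLM generation
        times.append(time.perf_counter() - start)
    mean = statistics.mean(times)
    stderr = statistics.stdev(times) / len(times) ** 0.5
    return mean, stderr
```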
Thus the router adds minimal overhead to the inference cost due to its small size and extremely low latency. Model Latency (seconds) Router 0.036 \u00b1 0.002 FLAN-t5 (800m) 0.46 \u00b1 0.039 Llama-2 (7b) 7.99 \u00b1 0.15 Llama-2 (13b) 14.61 \u00b1 0.27 Table 2: Latency Values for Different Models 4.5 Empirical Determination Of Routing Threshold Recall that at test time the model owner is required to set a threshold on the router score which serves to separate the easy queries from the hard ones (see Section 3). All queries with router score higher than the threshold will be routed to the small model. Thus the threshold is a user-defined parameter controlling the achieved efficiency-performance trade-off, to best serve the interests of different users. In this section we show how to empirically choose thresholds on router scores to achieve cost reduction with little to no performance drops. For this, we use a small calibration set to recommend default thresholds to users. We investigate all three routers rdet, rprob, and rtrans with different LLM pairs that we use in our experiments. For each LLM pair, we randomly draw 500 samples from the validation set and use grid search to determine the threshold that delivers the highest cost advantages i.e., cost savings on the validation set while keeping the performance drop 12 Router S: Llama-2 (7b) L: Llama-2 (13b) S: Llama-2 (13b) L: GPT-3.5-turbo S: FLAN-t5 (800m) L: Llama-2 (13b) Perf. Drop Cost Adv. Perf. Drop Cost Adv. Perf. Drop Cost Adv. rdet Val. 0.99% 98.20% 0.97% 15.20% 0.77% 5.40% Test 1.60% 98.56% 0.55% 15.15% 0.69% 4.89% rprob Val. 0.92% 97.60% 0.56% 8.60% 0.70% 5.00% Test 1.42% 96.80% 0.11% 8.38% 0.57% 4.44% rtrans Val. 0.79% 96.00% 0.77% 17.00% 0.92% 4.00% Test 1.39% 96.45% 0.49% 15.68% 1.02% 5.05% Table 3: Test performance drops v.s. cost advantages achieved by thresholds chosen from 500 validation samples with \u22641% sampled performance drops. (reduction in BART score) less than 1%. The limit on performance drop can be adjusted as per user requirements. With the selected thresholds, we report the achieved performance drops and cost advantages on the test sets, as summarized in Table 3. As seen from the table the performance and the cost advantage obtained on the test sets closely follows that on the validation sets for all categories of LLM pairs. This clearly illustrates that a threshold chosen on the validation set generalizes well to the test set. We note that there is a slight increase in the performance drop from the validation to the test set for the LLama-2 (7b) and Llama-2 (13b) pair, i.e the LLM pair with small performance gap as per the categorization in Section 4. However this is also the pair with the highest cost advantage or cost savings (> 96% for all routers) and thus the issue can be addressed by just using a more conservative limit on the performance drop while choosing the threshold which would still lead to very significant cost savings. 4.6 Alternate Evaluation Metrics To provide a more comprehensive evaluation of our routers, we test the routing performance with metrics in addition to BART score [Yuan et al., 2021]. GPT-4-based evaluators have been found to be well correlated with human assessments [Liu et al., 2023, Chase, 2022]. We generate GPT-4 evaluation scores (integer ratings from 1 to 10) for test responses from Flan-t5 (800m), Llama-2 (7b), Llama-2 (13b), and GPT-3.5-turbo that we investigate in our experiments, using LangChain scoring evaluator [Chase, 2022]. 
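For reference, the validation-based threshold selection of Section 4.5 can be sketched as follows; this is an illustrative simplification assuming per-query router scores and quality scores for both models are available as arrays, not the exact search used in the experiments.
```python
# Pick the routing threshold giving the largest cost advantage subject to a quality-drop limit.
import numpy as np

def calibrate_threshold(router_scores, q_small, q_large, max_drop=0.01):
    baseline = q_large.mean()                      # all-at-large quality (BART scores are negative)
    best_thr, best_adv = None, -1.0
    for thr in np.unique(router_scores):
        to_small = router_scores >= thr            # queries routed to the small model
        quality = np.where(to_small, q_small, q_large).mean()
        drop = (baseline - quality) / abs(baseline)  # one plausible relative-drop convention
        adv = to_small.mean()                      # fraction routed to the small model
        if drop <= max_drop and adv > best_adv:
            best_thr, best_adv = thr, adv
    return best_thr, best_adv
```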
Recall that our routers are trained with BART score due to efficiency and effectiveness reasons as discussed in Section 2.3. Intuitively, if the quality gaps measured by BART score and GPT-4 score are highly correlated, we could expect good routing performance even under the GPT-4 score as we have seen in Section 4.2. We compute the correlation between quality gaps measured by BART score and GPT-4 score, and report it along with routing performance evaluated with GPT-4 score, as shown in Figure 7. Aligned with our intuition, when the two metrics are well correlated (Figure 7a), our routers trained with BART score are still effective even when evaluated against GPT-4 score. Typically, rdet, rprob, and rtrans are able to achieve 20% cost advantage with up to 1% performance drop, and 40% cost advantage with up to 2.1% performance drop. As the correlation gets weaker, the router performance gradually decays, as shown in Figure 7b and 7c. This observation suggests a simple-yet-effective strategy of using BART score in practice to save labelling costs while maintaining 13 (a) High correlation (r = 0.46, \u03c1 = 0.44). (b) Medium correlation (r = 0.38, \u03c1 = 0.38). (c) Low correlation (r = 0.26, \u03c1 = 0.27). Figure 7: Routing performance evaluated with GPT-4 scores. Pearson (r) and spearman (\u03c1) correlation coefficients between quality gaps measured by BART score and GPT-4 score are computed for each LLM pair. routing performance. We can first compute the correlation between BART score and the target metrics (e.g., human assessments) using a small sample and use BART score as training labels whenever there is strong positive correlation with target metrics. 4.7 Generalizing To Different Model Pairs We evaluate the generalizability of our routers by testing their routing performance on LLM pairs different than the pairs they were trained with. We compute the correlation between quality gaps of training and testing LLM pairs, and report it along with routing performance, as shown in Figure 8. Similar to our observation in Section 4.6, our routers can generalize well if the quality gaps of testing LLM pairs exhibit strong positive correlation with the quality gaps of the training pairs. In Figure 8a, both pearson and spearman correlation coefficients exceed 0.7, and all three routers are able to achieve 20% cost advantage with up to 1.6% performance drop, and 40% cost advantage with up to 4.1% performance drop. As the correlation becomes weaker, the generalizability of our router gets restricted and routing performance decays, as shown in Figure 8b and 8c. This observation sheds light on using the quality gap correlation as an effective indicator to decide if our routers can be applied to new LLM pairs in the early stage. Given a pair of LLMs (source pair) and a router trained on this pair we can measure the correlation between the quality gap of the source pair and the quality gap of any new target pair of LLMs to decide if the router will be effective on the target pair. 5 Discussion and Conclusion Motivated by the need to optimize the trade-off between LLM inference costs and response quality, we have presented a hybrid inference approach based on quality-aware query routing. We train a router to discriminate between \u201chard\u201d and \u201ceasy\u201d queries, enabling the LLM provider to make cost-efficient decisions about which model should serve a given query. 
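The correlation checks underlying Sections 4.6 and 4.7 amount to the following small sketch, assuming the per-query quality gaps q(S(x)) - q(L(x)) are available as arrays for the source setting (training pair or training metric) and the target setting.
```python
# Decide whether a trained router is likely to transfer by correlating quality gaps.
from scipy.stats import pearsonr, spearmanr

def gap_correlation(gap_source, gap_target):
    """Return (Pearson r, Spearman rho); strong positive values suggest the router transfers."""
    r, _ = pearsonr(gap_source, gap_target)
    rho, _ = spearmanr(gap_source, gap_target)
    return r, rho
```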
Our experimental results on a variety of state-of-the-art LLMs of varying sizes show that such an optimization is possible and that we can realize cost advantages of up to 40% with no significant drop in response quality. To the best of our knowledge, this is the first work exploring the possibilities of cost-effective 14 (a) High correlation(r=0.76, \u03c1=0.71) (b) Med. correlation(r=0.55, \u03c1=0.50) (c) Low correlation(r=0.06, \u03c1=0.02) Figure 8: Routing performance on the testing LLM pairs that are different than the pairs routers were trained with. Pearson (r) and spearman (\u03c1) correlation coefficients between quality gaps of the training and testing LLM pairs are computed for each setting. and quality-aware query routing between LLMs. We identify several important extensions for future work: (1) Task-aware routing. Our current routers make routing decisions purely based on query inputs. To improve routing effectiveness, we can provide more informative signals which help routers distinguish easy queries from the hard ones, such as task labels for query examples and can also identify tasks which may be more suited to routing for a given pair of LLMs. (2) Generalizing to N-model routing. Modern MLaaS platforms typically host a large number of LLM instances of the same or different configurations to efficiently serve users in different scenarios. This naturally forms a more challenging routing problem with richer optimization opportunities (e.g., load balancing) (3) Out-of-distribution (OOD) generalization. In this work, the model pair and data distribution is fixed across training and testing. In the real-world it may be cumbersome/infeasible to train a new router for every new model pair and for every new data distribution. Therefore there is a need for techniques to generalize our approach to changes in the model pair and/or data distribution at test time. (4) Novel evaluation metrics. Effective evaluation metrics are critical to train high-quality routers. It is intriguing to see how to develop metrics of higher human-judgment correlation and to which extent it will improve the routing performance. Acknowledgments The authors would like to thank Daniel Madrigal Diaz, Mirian del Carmen Hipolito Garcia, Chen Dun, Guoqing Zheng, Menglin Xia, Wen Xiao, and Jieyu Zhang for helpful discussions."
},
{
"url": "http://arxiv.org/abs/2404.15549v2",
"title": "PRISM: Patient Records Interpretation for Semantic Clinical Trial Matching using Large Language Models",
"abstract": "Clinical trial matching is the task of identifying trials for which patients\nmay be potentially eligible. Typically, this task is labor-intensive and\nrequires detailed verification of patient electronic health records (EHRs)\nagainst the stringent inclusion and exclusion criteria of clinical trials. This\nprocess is manual, time-intensive, and challenging to scale up, resulting in\nmany patients missing out on potential therapeutic options. Recent advancements\nin Large Language Models (LLMs) have made automating patient-trial matching\npossible, as shown in multiple concurrent research studies. However, the\ncurrent approaches are confined to constrained, often synthetic datasets that\ndo not adequately mirror the complexities encountered in real-world medical\ndata. In this study, we present the first, end-to-end large-scale empirical\nevaluation of clinical trial matching using real-world EHRs. Our study\nshowcases the capability of LLMs to accurately match patients with appropriate\nclinical trials. We perform experiments with proprietary LLMs, including GPT-4\nand GPT-3.5, as well as our custom fine-tuned model called OncoLLM and show\nthat OncoLLM, despite its significantly smaller size, not only outperforms\nGPT-3.5 but also matches the performance of qualified medical doctors. All\nexperiments were carried out on real-world EHRs that include clinical notes and\navailable clinical trials from a single cancer center in the United States.",
"authors": "Shashi Kant Gupta, Aditya Basu, Mauro Nievas, Jerrin Thomas, Nathan Wolfrath, Adhitya Ramamurthi, Bradley Taylor, Anai N. Kothari, Regina Schwind, Therica M. Miller, Sorena Nadaf-Rahrov, Yanshan Wang, Hrituraj Singh",
"published": "2024-04-23",
"updated": "2024-04-27",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Clinical trials are essential to advance scientific discovery, improve patient care and drive innovation in medicine[1]. Their significance is particularly pronounced in oncol- ogy, where they can provide potential treatment options for patients with limited alternatives [2\u20135]. Despite this importance, a substantial number of clinical trials encounter recruitment challenges. Only about 7% of adults participate in cancer clin- ical trials [1]. One of the major factors contributing to this recruitment bottleneck is the considerable challenge that physicians encounter when systematically reviewing each patient against the list of available trials. This cumbersome task leads to a lower rate of trial recommendations [6], often due to the intricacy involved in deciphering the eligibility of a patient against the nuanced inclusion and exclusion criteria of these trials [7\u201311]. The core information pertinent to the inclusion and exclusion criteria of clinical tri- als are primarily found within unstructured EHRs, such as medical notes[12]. A recent study shows that this is particularly relevant for oncology, where key parameters for clinical trial screening are virtually absent in structured EHRs [13]. These conditions create substantial challenges in interpretation at scale. Even when dealing with struc- tured EHRs, creating queries using the trial criteria often proves challenging[14, 15]. To address this complexity, several research studies leveraging Natural Language Processing (NLP) have emerged, aiming to streamline the trial matching process. One common approach involves converting the inclusion and exclusion criteria of trials into structured queries[13, 15\u201317]. These queries can then be utilized to efficiently retrieve relevant data from dedicated clinical data warehouses. An alternate strategy focuses on the extraction of key data elements from patient notes and reformatting this information into a structured layout to aid in filtering clinical trials. [12]. 2 While both approaches have demonstrated effectiveness in clinical trial matching, they come with limitations. Both approaches frequently rely on rule-based engineer- ing, which is cumbersome and inflexible. Additionally, scaling these approaches to encompass the diverse range of clinical trials and the variety of potential patient sce- narios is also challenging. Recent advancements in leveraging LLMs for patient-trial matching show considerable promise[14, 18\u201320]. However, these studies predominantly utilize relatively simplistic or synthetic patient notes and clinical trial setups, which do not accurately reflect the complexity of real-world scenarios. Furthermore, they do not deal with the long context scenarios where a single patient may have hundreds of notes in their EHR. While [14] attempts to deal with the entire patient journey, they restrict themselves to a well-defined variable while extracting them using an exten- sively trained model. Finally, most of these methods, if not all, rely on proprietary LLMs which are difficult to deploy in the sensitive and privacy focused healthcare domains [21]. To the best of our knowledge, our study is the first to present a comprehen- sive pipeline to demonstrate an end-to-end clinical matching system using real-world patient EHR data and LLMs. This approach solely relies on the direct input of unstruc- tured patient notes and the detailed clinical trial inclusion and exclusion criteria into the LLMs. Our contributions can be summarized as follows: 1. 
Scalable End-to-End Pipeline: We introduce a novel pipeline, PRISM, which performs patient records interpretation and directly uses the semantics of inclusion and exclusion criteria in clinical trials to match patients through a mechanism to directly ingest free text criteria into the pipeline without any rule-based processing. 2. Fine-tuned LLM: Our custom-tuned model, OncoLLM, demonstrates superior performance over GPT-3.5 and comparable efficacy to GPT-4. OncoLLM is signifi- cantly smaller than both and can be hosted within private infrastructure to address privacy concerns. 3. Benchmarking Against Medical Doctors: For the first time, we present evi- dence that LLMs can almost match the performance of qualified medical doctors for the task of clinical trial matching. This finding suggests the potential of LLMs for real-world clinical applications. 4. Comprehensive Evaluation: We conduct an extensive evaluation of the pro- posed pipeline, not only in ranking clinical trials but also in providing detailed interpretation for the eligibility of patients with specific trial criteria. 5. Ranking Algorithm: We propose a novel algorithm for ranking clinical trials, significantly improving the average position of relevant trials compared to a baseline approach. 6. Both Search Directionalities: We demonstrate that the same pipeline used to identify trials for patients (patient-centric search) can also enable researchers to create eligible patient cohorts for a trial (trial-centric search). Our work is the first to demonstrate both directionalities using the same pipeline. 3 Progress Note PATIENT Discharge Summ Pathology Note Filter the notes Chunk the notes Database of IE criteria (Free text) Query & graph generation All the chunks of the patients Simplified Questions with Heirarchy Relevant Chunks Calculating Graph Score Weighted Graph Evaluation Ranking Retriever Onco-Generator Fig. 1: The pipeline only uses unstructured notes to effectively match the patients to potential clinical trials. Patient notes are first filtered as per the defined rules and are then chunked using a contextual chunker. The chunks are then stored in a database. The trial criteria are ingested as plain text as extracted from clinicaltrial.gov and are converted into a graphical question representation as described in Section 4. This graph is then used to retrieve relevant snippets of information, and our proprietary fine-tuned language model calculates a score for the graph. We then also apply weights to that graph using our developed heuristics, which allow the pipeline to rank the trials accurately.",
"main_content": "2.1 Clinical Trial Matching The task of clinical trial matching, or patient-trial matching, has attracted significant attention from academia as well as industry. Based on the directionality of the search, patient-trial matching can either be patient-centered, i.e., identifying trials for a patient [18, 22\u201325] or trial-centered, i.e., finding eligible patients given a trial [15, 26\u2013 28]. Both have their individual importance with some overlaps. The patient-centered viewpoint offers a direct methodology for physicians to treat trials as a care option an important goal of the cancer moonshot program[13]. Trial-centric methodology, on the other hand, allows researchers to conduct pragmatic clinical trials or conduct feasibility analyses. In this study, we focus on patient-centered trial matching when we 4 mention clinical trial matching, but we also show that our framework can be readily extended to support both directions. One well studied approach involves converting the inclusion and exclusion criteria of a trial into a structured query and using that query to search over structured clinical data repository [15\u201317]. This, however, has limitations in oncology as the majority of information relevant for clinical trial screening for cancer patients is found in unstructured data [13, 14]. An alternative approach uses deep neural networks to directly convert patient information to an embedding representation to match against trials[24, 25]. Another method involves using patients already enrolled in trials as representations of those trials and employing embedding similarity techniques to find patients similar to those recruited[29]. These techniques have, however, shown limited success in a complicated domain such as oncology. 2.2 Large Language Models Language models have recently shown great promise for a variety of use cases in healthcare [30\u201334]. For clinical trial matching, TrialGPT [18] demonstrated the capabilities of GPT-3.5[35] in effectively ranking clinical trials for patients. This was succeeded by the TrialLlama research [19], which provided evidence that open-source models, specifically LLaMA 2[36], could surpass GPT-3.5\u2019s performance, demonstrating their potential in privacy-sensitive applications. [14] utilize GPT-4 but in a limited setup converting clinical trial criteria into a structured schema. Then they use Bidirectional Encoder Representations from Transformers (BERT) and alike models [37\u201340] to extract information from notes and use this structured information to check final eligibility. In parallel with our work, a study by [20] employed GPT-4[41] and other open-source models for processing multiple patient notes while integrating retrievers to enhance cost-efficiency and overall effectiveness. However, their approach has a few significant limitations. It depended on a curated selection of patient charts containing only 2-5 notes, which is minimal compared to the usual 100-200 notes in real-world patient records. Furthermore, their method was tested on matching with a single clinical trial that used generic inclusion and exclusion criteria with only 13 criteria, greatly reducing the complexity typical of real-world clinical trials. In contrast, we evaluate the proposed pipeline in a setup with a number of potential trial candidates and the entire EHR of the patients, resulting in a dataset of over 200 trials and around 10,000 diverse clinical trial criteria. 
This substantial scope significantly differentiates our approach from previous studies. 3 Results 3.1 Criteria/Question Level Accuracy Accuracy Comparison: We compare the accuracy of different LLMs in assessing whether a given patient meets a particular clinical trial\u2019s criteria. We evaluate the performance of OncoLLM model, a 14B parameter model, against several prominent LLMs, including Azure OpenAI\u2019s GPT-3.5-Turbo (175B), Qwen14B-Chat (14B)[42], Mistral-7B-Instruct (7B)[43], Mixtral-8x7B-Instruct (56B)[44], and Azure OpenAI\u2019s 5 Without Model Name All N/A samples GPT3.5-Turbo 53% 48% Mistral-7B-Instruct 41% 32% Mixtral-8x7B-Instruct 49% 43% Qwen14B-Chat 43% 34% OncoLLM 63% 66% GPT4 68% 72% Expert Doctors* 70% (a) Question Level Accuracy Cancer T ype Cancer Subtype Cancer Stage Cancer Grade/ Histology Genetic and Biologic Markers Lab/Imaging Criteria Comorbidities Prior Treatment/Surgery 0.2 0.4 0.6 0.8 1.0 Mistral-7B-Instruct Qwen14B-Chat Mixtral-8x7B-Instruct GPT3.5-T urbo GPT4 OncoLLM (b) Concept-Wise Accuracy Fig. 2: (a) OncoLLM outperforms most of the prominent LLMs at criteria/question level answering accuracy. First column All shows the question level accuracy across all the 720 Q&A dataset for oncology related clinical trials. Second column Without N/A samples shows question level accuracy after removing those questions whose answers were \u2019N/A\u2019 by medical experts. * Human accuracy was obtained only on 109 questions which was annotated by two medical experts. (b) OncoLLM (in red) performs consistently well across all the relevant oncology related concepts. GPT-4. Table 2a shows the performance of different LLMs at criteria/question level answering accuracy. TThe accuracy is defined as the percentage of instances in which the model\u2019s predictions match the clinically annotated answers for questions regarding patients, derived from the inclusion and exclusion criteria of clinical trials. Our results demonstrate that OncoLLM significantly outperforms both the similarly sized models and the larger GPT-3.5-Turbo, achieving an accuracy of 63% compared to 53% by GPT-3.5-Turbo, 43% by Qwen14B-Chat, 41% by Mistral-7B-Instruct, and 49% by Mixtral-8x7B-Instruct. Although GPT-4 reached a higher accuracy of 68%, it incurs substantially greater computational costs, being approximately more than 100x times larger than OncoLLM (even though there are no official details on the parameter size of GPT-4, it is estimated to be more than a trillion parameters). Additionally, the Mistral-7B-Instruct model requires rule-based regex processing to output in JSON format, highlighting further limitations beyond raw accuracy. Another key aspect of this finding is that institutions can implement OncoLLM in their privacy-compliant infrastructure rather than transmitting sensitive patient data to external cloud servers. In an additional analysis where all questions initially marked as \u2019N/A\u2019 (meaning not enough information was available in patient records to answer those questions) were excluded to reduce ambiguity, we observed significant changes in model performances. By removing these uncertain inputs, the accuracy of our OncoLLM model increased to 66% (from 63%), and the accuracy of Azure OpenAI\u2019s GPT-4 model also rose, reaching 72% (from 68%). 
Interestingly, the accuracy rates of Azure OpenAI\u2019s GPT3.5-Turbo and the open-sourced models decreased, suggesting that these models might rely more on ambiguous inputs to maintain higher performance levels or may 6 1/100 1/200 1/300 1/400 1/500 1/182 1/144 1 / NA 0.61 0.25 0.5 0.75 Question Level Accuracy Number of NA samples in Manual Annotation OncoLLM Mistral-7B-Instruct Qwen14B-Chat Mixtral-8x7B-Instruct GPT3.5-T urbo GPT4 OncoLLM Fig. 3: Accuracy Comparison Based on Model Size and Number of \u201dN/A\u201d Outputs. This figure presents a comparison of model accuracy with the frequency of \u201dN/A\u201d outputs. A higher frequency of \u201dN/A\u201d outputs indicates lower usefulness of the model. The size of each bubble represents the number of parameters of the model. This highlights the close performance of OncoLLM to GPT4 despite having relatively fewer parameters. exhibit less robustness in more clearly defined scenarios. This result indicates that some of the performance of the other \u2019weaker\u2019 models is inflated highlighting the gap between our model\u2019s performance and GPT-3.5. This is further illustrated in Fig. 3. Concept-Level Scores: We also conducted a comparison of the accuracy of each model on the Q&A task at Concept level. For this analysis, we classified each question into various oncology-related clinical trial concepts, such as cancer type, cancer subtype, biomarkers, etc. These concepts were decided on the basis of physician input and were organized into different tiers based on their estimated clinical importance. Overall, we delineated 13 different concepts and categorized them into four tiers (Tier 1, Tier 2, Tier 3, and Tier 4), with the importance level as Tier 1 > Tier 2 > Tier 3 > Tier 4 (See Section S2 of Supplementary for details). We observed a similar pattern of accuracy at concept level as well. OncoLLM generally secured the second position for most concepts. Notably, it outperformed GPT-4 in the biomarkers concept (Fig. 2b). Annotation Process: We manually annotated the Q&A dataset from 10,000 notes 7 and 50 cancer patients. We extracted 720 questions (each inclusion and exclusion criteria was converted into multiple questions with \u2019yes\u2019 or \u2019no\u2019 as possible answers), ensuring a balance across various patients, disease types, and categories of trial criteria (See Section 4.3.1). To simplify the annotation, we used GPT-3.5 to identify and assess relevant sections for each question, reducing the potential workload from 800,000 segments to approximately 8,000. GPT-3.5 showed 98% precision and 94% recall in determining the relevance of segments to questions after refining our prompts. However, due to the high cost and token consumption (> 2 billion) for processing, we opted not to use this method in our final setup as it would cost too much in the end to end pipeline for all the patients and trials. Five medical doctors annotated each question based on the chunks marked as relevant by GPT-3.5, labeling them as \u2019YES\u2019, \u2019NO\u2019, or \u2019N/A\u2019. Each of the 720 questions was reviewed by at least one doctor, with about 12 questions reviewed by all five. The best performing annotators from this round participated in a second round, where 109 randomly selected questions were used to evaluate inter-annotator reliability. 
The average inter-annotator agreement (calculated as the percentage of times the annotators gave same answer to a particular question) among all five annotators was 64% and 70% for the two selected annotators (See Section 1 of Supplementary). 3.2 Ranking Scores In this analysis, we conducted a comparative assessment of the ranking efficacy between our proposed approach (outlined in Section 4) for OncoLLM compared to GPT-3.5-Turbo model. Due to the cost associated with matching 1000 patient-trial pairs, we did not evaluate GPT4 for this task. To facilitate ranking, we utilized a scoring module (detailed in Section 4.4.1), which assigns a matching score to each patient-trial pair. This score was used to rank trials for a patient and vice-versa. 3.2.1 Patient-Centric Ranking Evaluation Metric: For this evaluation, we assembled a cohort of 98 cancer patients who had participated in clinical trials. We identified 10 real-world trials for each patient, all of which shared the same cancer disease type (e.g., lung cancer, breast cancer, etc.) and were actively recruiting patients at the time the patient enrolled in a clinical trial. Among these 10 trials, one trial served as the ground truth trial, denoting the trial in which the patient was actually enrolled (see section 4.2). To assess the performance, we analyzed the proportion of times the rank of the ground truth trial fell within the top-3 ranks. This metric directly reflects the practical utility of our method, as it aids in the efficient shortlisting of eligible trials for patients within a hospital or institution setting. Analysis: In our assessment, OncoLLM demonstrated superior performance compared to GPT-3.5-Turbo across all three scoring methods. Specifically, with the 8 Simple Iterative Tier Weighted Tier 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Ratio of trial ranked <= 3 Patient T o Trial Ranking GPT3.5-T urbo OncoLLM (a) The proportion of patients for whom their respective ground truth trial ranked within the top-3. Here, \u201dground truth trial\u201d is the trial in which the patient was enrolled. Simple Iterative Tier Weighted Tier 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 NDCG Score Trial T o Patient Ranking GPT3.5-T urbo OncoLLM (b) NDCG score for ranking patients within a specified trial. Patients who were initially enrolled for the specified trial are assigned a relevance score of 1, while rest are assigned 0. Fig. 4: OncoLLM with Weighted Tier scoring method performs best for both way search. (a) OncoLLM (Weighted Tier) ranked ground truth trials 65.3% of times in the top-3 among 10 considered trials, while GPT3.5-Turbo (Iterative Tier) ranked ground truth trials only 61.2% of times in the top-3. (b) OncoLLM (Weighted Tier) scored an NDCG score of 68% as compared to 62.6% of GPT3.5-Turbo (Iterative Tier). See Section 4.4.1 for details on the scoring methods. Weighted Tier method, OncoLLM achieved a score of 0.65, outperforming GPT3.5-Turbo\u2019s score of 0.59. Similarly, with the Iterative Tier method, OncoLLM achieved a score of 0.63, surpassing GPT-3.5-Turbo\u2019s score of 0.61. Additionally, with the Simple method, OncoLLM attained a score of 0.62 compared to GPT-3.5Turbo\u2019s score of 0.57. These findings are consistent with our observations regarding question-level accuracy (Fig. 2a); typically, a model exhibiting higher accuracy in answering eligibility questions tends to perform better in ranking tasks. 
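For clarity, the patient-centric top-3 metric described above can be computed as in the following sketch; `trial_scores` and `gt_trial` are hypothetical containers, not objects from the study's codebase.
```python
# Fraction of patients whose ground-truth trial is ranked within the top k by match score.
def top_k_hit_rate(trial_scores, gt_trial, k=3):
    """trial_scores: {patient: {trial_id: score}}; gt_trial: {patient: enrolled trial_id}."""
    hits = 0
    for patient, scores in trial_scores.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        hits += gt_trial[patient] in ranked[:k]
    return hits / len(trial_scores)
```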
Criteria/Question Level Analysis: To evaluate the utility of both GPT-3.5-Turbo and OncoLLM, we conducted an additional analysis for all patient and ground truth trials. Ideally, for ground truth trials where a patient is already enrolled, all eligibility criteria should be met. With this hypothesis in mind, we examined the statistics of all eligibility criteria for the ground truth trials. A superior system should meet a higher number of criteria, and produce fewer indecisive cases (instances where the system failed to produce a definitive Met/Not Met result for the given criteria, referred to as \u2019N/A\u2019 henceforth). We discovered that OncoLLM meets 62% of criteria on average across all 98 patients, compared to GPT3.5-Turbo\u2019s score of 55.4% (Fig. 5A). This analysis was extended to instances where the ground truth trial ranked within the top-3. Predictably, the overall criteria Met numbers increased for both models, with OncoLLM (66.7%) outperforming GPT3.5-Turbo (59%). Intriguingly, the overall average for OncoLLM across all ground truth trials remains higher than GPT3.5Turbo\u2019s score for top-3 ranked ground truth trials. When it comes to ranking trials, OncoLLM\u2019s outcomes prove more practical as GPT3.5-Turbo tends to return \u2019N/A\u2019 9 A. Criteria Stats (All GT Trials) B. Criteria Stats (GT Trials Rank <= 3) C. Question N/A Stats (GT Trials Rank <= 3) Fig. 5: Criteria/Question Level Analysis on 98 Patient, Ground Truth Trial Pairs. A. Criteria level Met/Not-Met/NA stats for all the ground truth trials. B. Criteria level Met/Not-Met/NA stats where the ground truth trial ranked within the top-3. C. Question level N/A stats where the ground truth trial ranked within the top-3 (lower is better). responses to simplified questions derived from top-3 ranked ground truth trials considerably more often than OncoLLM (Fig. 5C, Fig. 3). When a human is involved in the loop, this becomes problematic as N/A responses provide very little information to determine the patient\u2019s final eligibility. 3.2.2 Trial-Centric Ranking Evaluation Metric: For this evaluation, we assembled a set of 36 clinical trials, all of which recruited patients from the same institution. For each clinical trial, we compiled a group of patients who shared the same cancer disease type and were enrolled in some trial when the specified clinical trial was active. This patient set also included those who were actually enrolled in the specified trial. On average, each trial had 2 \u00b1 1 ground truth patients and 13 \u00b1 8 additional patients. Here, \u2019ground truth patients\u2019 refer to those who were enrolled in the specified trial. Based on this data, we computed the NDCG score for ranking patients for a given clinical trial. To calculate the NDCG score, we utilized a binary relevance score, where ground truth patients were assigned a relevance score of 1, and the remaining patients were assigned a relevance score of 0. Analysis: In our evaluation, OncoLLM exhibited superior performance compared to GPT3.5-Turbo across all scoring methods, except for the Simple method where both models achieved similar scores. Specifically, using the Weighted Tier method, OncoLLM attained a score of 0.68, outperforming GPT3.5-Turbo\u2019s score of 0.62. Similarly, with the Iterative Tier method, OncoLLM achieved a score of 0.66, surpassing GPT3.5-Turbo\u2019s score of 0.63. For the Simple method, both models scored 0.62. 
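For reference, the binary-relevance NDCG used in the trial-centric evaluation can be computed with the standard formulation below; this is a generic sketch rather than the exact code used in this study.
```python
# NDCG with binary relevance: enrolled ("ground truth") patients have relevance 1, others 0.
import numpy as np

def ndcg_binary(scores, relevance):
    scores, relevance = np.asarray(scores, float), np.asarray(relevance, float)
    order = np.argsort(-scores)                          # rank patients by match score
    gains = relevance[order]
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float((gains * discounts).sum())
    idcg = float((np.sort(relevance)[::-1] * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0
```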
Once again, these results align with our observations regarding question-level accuracy (Fig. 2a); typically, a model demonstrating higher accuracy in answering 10 eligibility questions should excel in ranking tasks. 3.3 Error Analysis Trial Level: While the patients were enrolled in a specific trial, this did not preclude their eligibility for other concurrent trials. Consequently, the ranking numbers do not accurately reflect the model\u2019s performance strength. To further assess this, we engaged a qualified clinical data expert to manually check the eligibility of patients for 10 randomly chosen trials that OncoLLM ranked in the top-1, but which were not the trials in which the patients were actually enrolled. We discovered that out of these 10 trials, the patients were deemed eligible for 9, effectively increasing the actual top-3 accuracy to \u224895%. Interestingly, the only trial for which a patient was ineligible was due to the presence of tumors in both breasts; the patient was eligible for the trial concerning the tumor in the left breast but not for the one in the right breast. This highlights OncoLLM\u2019s effectiveness in identifying suitable trials for patients. Criteria Level: To assess the capability of OncoLLM in generating accurate interpretation and citations of corresponding trial criterion to a patient, a set of questions (provided in supplement) and their corresponding responses generated by the model were randomly selected and reviewed by qualified medical professionals. These reviewers were tasked with verifying the correctness of the responses (categorized as \u201dYes\u201d, \u201dNo\u201d, or \u201dN/A\u201d) and the accuracy of interpretation and citations provided with each answer. Our findings indicated that the accuracy of the final answers was 75.26%. When the final answer was deemed correct, the accompanying explanations were accurate 90.91% of the time. Additionally, citations included in the explanations were rated as correct 86.71% of the time and partially correct 6.29% of the time, highlighting OncoLLM\u2019s effectiveness in delivering reliable information and its utility in supporting further manual verification processes. 3.4 Cost-Benefit Analysis We also conducted cost analysis to estimate the expenses associated with running different OncoLLM vs Azure OpenAI\u2019s GPT-4 for patient-trial matching, considering 98 patients with 10 trials for each patient (refer to Section 3.2.1). For estimation, we calculated the input prompt token and expected generated token count, from which we derived the pricing. For OncoLLM, we benchmarked the input and output generation speed (tokens/sec) when hosted using vLLM and calculated the running time using the formula: running time = (# of input tokens input speed \u22173600 + # of output tokens output speed \u22173600 ) hours (1) The final cost is determined by multiplying the hourly cost of Google Cloud GPU VM by the running time. For Azure OpenAI\u2019s GPT-4 model, we estimated the price 11 using their pricing table. Our calculations indicate that the cost of operating OncoLLM is approximately $170, while running GPT-4 incurs an expense of around $6055. This implies an expense increase of about 35-fold. When we dissect the cost to compute the expense of a single patient-trial match, OncoLLM costs approximately $0.17 per patient-trial pair, whereas GPT-4 demands $6.18 per patient-trial pair. In addition, we have not considered several other optimizations available for LLMs [45, 46] in this assessment. 
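The cost estimate of Equation (1) can be sketched as below; the token counts, throughput figures, GPU hourly rate, and API prices passed in are placeholders, not the study's actual numbers.
```python
# Back-of-the-envelope cost comparison between a self-hosted model and a per-token API.
def self_hosted_cost(n_input_tokens, n_output_tokens,
                     input_tok_per_sec, output_tok_per_sec, gpu_hourly_rate):
    hours = (n_input_tokens / input_tok_per_sec + n_output_tokens / output_tok_per_sec) / 3600
    return hours * gpu_hourly_rate

def api_cost(n_input_tokens, n_output_tokens, price_in_per_1k, price_out_per_1k):
    return (n_input_tokens / 1000) * price_in_per_1k + (n_output_tokens / 1000) * price_out_per_1k
```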
Other factors not taken into account include the availability of various cloud services offering more affordable computation, as well as the potential for a local in-house setup. 4 Methods 4.1 Problem Formulation We formulate the problem of matching patients to clinical trials through a compositional question-answering framework. Consider a patient P, associated with a set of patient notes N = {n1, n2, . . . , ni}, where each ni contains distinct aspects of the patient\u2019s medical history. Each clinical trial is defined by its criteria text T. From T, a set of questions Q = {q1, q2, . . . , qj} is systematically derived, where each qj corresponds directly to specific inclusion or exclusion criteria. The set of possible answers to each question, denoted as A = {a1, a2, . . . , aj}, is extracted through our pipeline from the dataset N. Each element in A corresponds to an answer which can be \u2019Yes\u2019, \u2019No\u2019, or \u2019NA\u2019. Here, \u2019NA\u2019 indicates that the question is not applicable to the patient in question or that there is insufficient information available to provide a conclusive answer. Conversely, \u2019Yes\u2019 and \u2019No\u2019 explicitly indicate whether the patient meets or does not meet the specified criteria, respectively. A logical composition function C is then employed to integrate these answers into a single framework. This function C is defined by logical operators and possibly fuzzy logic controls that aggregate the individual answers based on the logical interdependencies specified in T. The output of this function is a composite score S, calculated as S = f(C(a1, a2, . . . , aj)), where f is a scoring function that quantitatively evaluates the overall compliance of the patient data N with the trial criteria T. The score S thus represents a quantified suitability of the clinical trial for patient P, enabling the ranking of potential trials. Higher values of S indicate a better match, facilitating prioritization of trials for which the patient is most eligible. 4.2 Dataset Preparation Patients were identified using an institutional clinical research data warehouse that includes deidentified clinical notes and clinical trial enrollment information. To develop the analytic dataset, we selected patient note types that include oncology-relevant information (See Section S3 of Supplementary). For each patient P, who may have been enrolled in one or multiple trials T1, T2, . . . , Tn, we selected a single trial based on the complexity of its criteria, as determined by the length of the text in 12 Category # of Patients Average # of Documents Breast Cancer 37 49 Lung Cancer 20 76 Prostrate Cancer 29 57 Colorectal Cancer 7 283 Skin Cancer 4 52 Fig. 6: Distribution of Patients by Cancer Type the inclusion and exclusion sections. This process resulted in identifying one \u2019ground truth\u2019 trial for each patient. The patient notes were limited to include information only until the enrollment date of the patient. This curated dataset comprised 98 patient-trial combinations. To facilitate the ranking experiments, it was necessary to identify potential negative trials for each patient. We achieved this by first filtering both patients and trials based on the type of cancer. 
We then searched the trial database and retained a trial in a patient\u2019s set of negatives if it satisfied all of the following criteria: (1) the patient was not previously enrolled in that trial, and (2) the trial was active according to ClinicalTrials.gov at the time of the patient\u2019s enrollment. This method yielded 980 patient-trial combinations. 4.3 PRISM Pipeline Our end-to-end PRISM pipeline comprises several modules as shown in Fig 1. It takes clinical trial information and patient notes as input and provides a scalar score for the match between patient and trial. Once we have the score, it is used to rank patients by calculating the score of that trial for multiple patients or vice-versa. 4.3.1 Trial Composition Module For each clinical trial T, we utilize our trial-to-questions module that can be developed with any LLMs. We used GPT-3.5 in this study as trial criteria is public information and GPT-3.5 is cost effective. This module is tasked with converting the textual content of the trial criteria, as extracted from ClinicalTrials.gov, into a set of simplified, independent questions. Each question is designed to be answerable with responses such as \u2019Yes\u2019, \u2019No\u2019, or \u2019NA\u2019. Given the inherent complexity of these questions, a single criterion from the trial criteria does not always translate directly into a single question. To address this, each criterion is decomposed into multiple questions that are interconnected through Boolean logic, specifically in disjunctive normal form (DNF) (See Fig 7). This transformation ensures that our downstream modules receive one simplified question at a time, facilitating more straightforward processing and evaluation. This is based on the approach used by [14]. We conducted a manual evaluation of the output quality of 50 trials based on three metrics: 1) The percentage of incorrectly formed questions; 2) The percentage of questions missed by the LLM model; and 3) The percentage of incorrectly formed Boolean logic. Only 1% of questions were missed, and 13 Progressive disease at study entry defined as 1 or more of the following 3 criteria: 1. A minimum of 3 rising PSA values with an interval of at least 1 week between determinations. The screening central laboratory PSA value must be \u2265 2 \u03bcg/L (2 ng/mL) if qualifying solely by PSA progression. 2. Soft tissue disease progression as defined by RECIST 1.1. 3. Bone disease progression defined by PCWG3 with 2 or more new metastatic lesions on bone scan. Eligibility Condition 1. Does the patient have a minimum of 3 rising PSA values with an interval of at least 1 week between determinations? 2. Is the screening central laboratory PSA value \u2265 2 \u03bcg/L (2 ng/mL)? 3. Does the patient have soft tissue disease progression as defined by RECIST 1.1? 4. Does the patient have bone disease progression defined by PCWG3 with 2 or more new metastatic lesions on bone scan? Simplified Questions (Yes/No) OR AND Q1 Q2 Q3 Q4 Fig. 7: Each criterion in the clinical trial\u2019s free text is condensed into one or more questions using a LLM via an appropriate prompt. For criteria leading to multiple questions, they are structured into a Disjunctive Normal Form (DNF). When assessing whether the patient meets that criteria on the basis of the answer to these questions, we employ these logical constraints to determine if the criteria are fulfilled, rather than burdening the LLM with interpreting all complex questions simultaneously. 
This approach enables the model to address one straightforward question sequentially. merely 2% of questions were incorrectly formed. Additionally, we achieved an accuracy rate of approximately 89% in correctly forming boolean logic at the criterion level. 4.3.2 Chunking and Retrieval Module Each patient\u2019s EHRs usually include hundreds of notes, which often surpass the context length capabilities of not only most open-source LLMs but also those of proprietary LLMs. This challenge requires the use of a semantic retrieval engine to manage and extract relevant information from each criterion [47]. We used the Python spaCy[48] tokenizer, configured with a one-sentence overlap, to chunk the patient notes efficiently. For the retrieval process, we utilized Azure OpenAI\u2019s Ada embedding-based model and leveraged cosine similarity between the embeddings of the chunks and the queries derived from the inclusion and exclusion criteria of trials. 4.3.3 Question-Answering Module This module utilizes the information retrieved to answer questions relevant to the clinical trial criteria. Unlike traditional Retrieval-Augmented Generation (RAG) pipelines, which typically arrange information chunks based on their relevance scores, our approach organizes the chunks chronologically. This sequence begins with patient age and the date of enrollment. Each chunk is further supplemented with the date of the note from which it was extracted and the type of note. We observed that this approach benefited the model by enabling it to interpret the patient\u2019s journey chronologically, which enhanced its ability to answer temporal questions accurately. We use zero-shot prompting and maintain a temperature setting of zero throughout our experiments to ensure deterministic outputs. We also limit the maximum 14 response length to 8K characters. For experiments utilizing GPT architectures, we employ Azure\u2019s HIPAA-compliant APIs. In other scenarios, we deploy models using the vLLM framework on our HIPAA-compliant cloud environment, powered by 4 A100s with 80GB GPU memory each. The question-answering process also uses a Chain of Thought (CoT) strategy and generates following key-value pairs in the output JSON. \u2022 Question Explanation: The model is prompted to explain the requirements of each question, delineate its strategy for answering, and identify additional information that may be required. This step ensures that the model comprehends and recalls the necessary medical concepts from the clinical trial criteria. \u2022 Answer Explanation: The model synthesizes and summarizes the information from the patient\u2019s records in a step-by-step manner before concluding whether the patient meets the specified criteria. \u2022 Answer: The model provides a definitive response of \u2019Yes\u2019, \u2019No\u2019, or \u2019N/A\u2019, based on the analysis conducted. \u2022 Confidence: The model quantifies the confidence of its answers on a scale from 1 to 5, where 1 indicates the least confidence and 5 the highest. 4.4 Answers \u2192Criteria Module The decision logic for determining whether a criteria is met is based on the outputs from a predefined logical tree (See Fig. 7) for that criteria. Each node in this tree corresponds to a specific question as extracted in the Trial composition module. The response to each question can influence the pathway taken through the decision tree, ultimately determining whether that criteria is met or not. 
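As an illustration of the retrieval step in Section 4.3.2 above, ranking note chunks against a criterion-derived question by cosine similarity of their embeddings can be sketched as follows; `embed` is a stand-in for the embedding model and all names are illustrative.
```python
# Rank patient-note chunks by cosine similarity to a criterion-derived question.
import numpy as np

def top_chunks(question, chunks, embed, k=5):
    q_vec = np.asarray(embed(question), float)
    c_vecs = np.asarray([embed(c) for c in chunks], float)
    sims = c_vecs @ q_vec / (np.linalg.norm(c_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    order = np.argsort(-sims)[:k]
    return [chunks[i] for i in order]
```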
In cases where one or more questions\u2019 answers are marked as N/A (Not Answerable) for a given criterion, the decision logic involves marginalizing over possible values for all the questions with N/A answers. Mathematically, this process is described as follows: P(criteria met|data) = X x\u2208Possible Answers P(criteria met|X = x) \u00d7 P(X = x|data) (2) Here, X represents the possible combinations of answers to all the questions with \u2019N/A\u2019 as the answer, and P(X = x|data) is the probability of each combination given the patient data. This probability is 1 2N , where N is the number of questions with \u2019N/A\u2019 answers. The final determination of whether a criteria is met is based on a threshold model: Criteria Met = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 Yes if P(criteria met|data) > 0.66 No if P(criteria met|data) < 0.34 N/A otherwise (3) 15 This approach allows for a probabilistic evaluation of criteria even when we do not know the answers to some questions corresponding to that criteria. This approach also takes care of the cases when it is possible to answer the Boolean logic even when answers to few questions are not available. For instance, if the criteria met expression is Q1&Q2, and we know the answer for Q1 is \u2019No\u2019, we do not need the answer to Q2. The expression should resolve to False regardless of the answer to Q2. This approach efficiently accommodates such scenarios and ensures robust evaluation of criteria. 4.4.1 Scoring Module This module implements an intelligent scoring module that can be used to rank clinical trials for a patient or vice-versa. Once we have determined the final answer for each criterion in the trial, we employ three strategies to finalize the answer. Simple Counting is a straightforward method where each criterion is evaluated independently and then the total number of fulfilled criteria is counted. The final score is then normalized by the total number of conditions (Equation 4, Fig. 1A): SCOREsimple = Number of Criteria Met Total Number of Criteria (4) Two other methods are based on Tier-based criteria categorization. These methods employ a more detailed scoring system. Each eligibility criteria is first categorized into different tiers (T1, T2, T3, and T4) according to their clinical importance for patient-trial matching task (See Section S2 of Supplementary) , and finally the scores are computed using two approaches: iterative tier and weighted tier. Iterative Tier method applies strict rules, traversing from higher to lower tiers until a condition is violated. The final score reflects the proportion of criteria met before encountering a violation, normalized by the total number of conditions (Equation 5, Fig. 1B): SCOREiterative = Number of Criteria Met Until Violation Total Number of Criteria (5) Weighted Tier method involves ranking criteria by their estimated clinical importance and applying different weights accordingly. It first calculates an intermediate score for each criterion at individual tier level (Equation 7). The final score calculation is based on weighted averages of the criteria level scores obtained at each tier (Equation 8, Fig. 1C): 16 0 +1 +1 0 0 0 0 0 0 0 0 0 A. B. C. +1 +1 +1 0 TIER 1 +1 +1 +1 0 TIER 2 TIER 3 TIER 4 Violation in TIER 2, so STOP! +0.5 +1 +1 +1 0 +1 +0.5 0 +1 +1 -0.5 +0.5 TIER 1 +1 +0.5 +1 0 TIER 2 TIER 3 TIER 4 (W1) x x (W2) x (W4) (W3) x W1 W4 Fig. 8: Scoring Methods For Patient Trial Matching Score. A. 
4.4.1 Scoring Module This module implements an intelligent scoring mechanism that can be used to rank clinical trials for a patient or vice versa. Once we have determined the final answer for each criterion in the trial, we employ three strategies to compute a final score. Simple Counting is a straightforward method where each criterion is evaluated independently and the total number of fulfilled criteria is counted. The final score is then normalized by the total number of conditions (Equation 4, Fig. 1A): $\text{SCORE}_{\text{simple}} = \frac{\text{Number of Criteria Met}}{\text{Total Number of Criteria}}$ (4) Two other methods are based on tier-based criteria categorization and employ a more detailed scoring system. Each eligibility criterion is first categorized into different tiers (T1, T2, T3, and T4) according to its clinical importance for the patient-trial matching task (see Section S2 of the Supplementary), and the scores are then computed using two approaches: iterative tier and weighted tier. The Iterative Tier method applies strict rules, traversing from higher to lower tiers until a condition is violated. The final score reflects the proportion of criteria met before encountering a violation, normalized by the total number of conditions (Equation 5, Fig. 1B): $\text{SCORE}_{\text{iterative}} = \frac{\text{Number of Criteria Met Until Violation}}{\text{Total Number of Criteria}}$ (5) The Weighted Tier method involves ranking criteria by their estimated clinical importance and applying different weights accordingly. It first calculates an intermediate score for each criterion at the individual tier level (Equation 6). The final score is based on weighted averages of the criterion-level scores obtained at each tier (Equation 8, Fig. 1C): $s(x) = \begin{cases} 1 & \text{if } x = 1 \\ 0.5 & \text{if } x = -1 \\ -0.5 & \text{if } x = 0 \text{ and tier} = T_1 \\ 0 & \text{if } x = 0 \text{ and tier} \neq T_1 \end{cases}$ (6) $K = \sum_{k=1}^{4} \operatorname{sgn}(|T_k|)$ (7) $\text{SCORE}_{\text{weighted}} = \begin{cases} \frac{1}{K} \sum_{k=1}^{K} w_k \left( \frac{\sum_{i \in T_k} s(x_i)}{|T_k|} \right) & \text{if } K \neq 0 \\ 0 & \text{if } K = 0 \end{cases}$ (8) where $s(x_i)$ is the score and $x_i$ is the result for the $i$th criterion, and $\operatorname{sgn}(x)$ is the signum/sign function. $T_k$ represents the set of criteria in tier $k$, $w_k$ represents the weight assigned to tier $k$, and $K$ represents the number of tiers with non-zero criteria present in the clinical trial. For simplicity, we assigned the following weights in this work: $w_1 = 2$, $w_2 = 1.5$, $w_3 = 1$, and $w_4 = 0.5$; in future work, these weights can be tuned to achieve better performance. Fig. 8: Scoring Methods for Patient-Trial Matching Score. A. Simple Counting: the total number of criteria met is counted and normalized by the total number of conditions. B. Iterative Tier: criteria are categorized into tiers and traversed in order; if a violation (unmet criterion) is encountered, traversal stops and the number of criteria met up to that point is normalized by the total number of conditions. C. Weighted Tier: like the iterative tier, this method uses tiers, but instead of stopping at the first violation it continues through the entire tree, assigning greater weight to the first tier and progressively less weight to lower tiers.
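A compact sketch of the weighted-tier score (Equations 6-8) is given below. The encoding of criterion results and the data structures are illustrative assumptions; the tier weights are those stated above.

```python
# Sketch of the weighted-tier score (Equations 6-8). Criterion results are encoded as
# 1 (met), -1 (not answerable), and 0 (unmet), matching s(x) above; tiers are 1..4.
TIER_WEIGHTS = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.5}


def s(x: int, tier: int) -> float:
    """Per-criterion score from Equation 6."""
    if x == 1:
        return 1.0
    if x == -1:
        return 0.5
    return -0.5 if tier == 1 else 0.0  # x == 0 (unmet)


def weighted_tier_score(criteria: list[tuple[int, int]]) -> float:
    """criteria: list of (tier, result) pairs; implements Equations 7-8."""
    by_tier: dict[int, list[int]] = {k: [] for k in TIER_WEIGHTS}
    for tier, result in criteria:
        by_tier[tier].append(result)

    non_empty = [k for k, xs in by_tier.items() if xs]  # tiers with >= 1 criterion
    K = len(non_empty)                                  # Equation 7
    if K == 0:
        return 0.0

    total = 0.0
    for k in non_empty:
        xs = by_tier[k]
        tier_avg = sum(s(x, k) for x in xs) / len(xs)
        total += TIER_WEIGHTS[k] * tier_avg
    return total / K                                    # Equation 8


# Example: two tier-1 criteria met, one tier-2 criterion unmet, one tier-3 not answerable.
print(weighted_tier_score([(1, 1), (1, 1), (2, 0), (3, -1)]))
```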
4.5 OncoLLM Our model, OncoLLM, is a specialized oncology-focused LLM that has been fine-tuned on a single cancer center\u2019s oncology EHR datasets for question-answering tasks, using manually curated outputs. It builds upon the open-source Qwen-1.5 14B model [42], adapting it specifically for this field using both synthetic and real-world data. We chose this model because it was the top performer among similar-sized models on the LLM leaderboard when our experiments began [49]. Importantly, no patient data or clinical trials that have been used to report the scores here were added to the training data. The fine-tuning process was carefully designed to enhance the model\u2019s ability to provide medical explanations and reference evidence within a RAG-based framework on EHR records. Annotators crafted ideal responses to EHR-related questions to facilitate the Chain of Thought (CoT) prompting strategy. The model was then trained via supervised fine-tuning on several thousand chunk-question pairs of proprietary data. 5 Discussion In this study, we developed and validated an end-to-end pipeline for matching patients to clinical trials based on inclusion and exclusion criteria and unstructured patient notes from real-world EHRs. Our findings indicate that leveraging LLMs, with carefully implemented controls, could significantly shift the paradigm for accruing and enrolling eligible patients to clinical trials. Unlike existing processes that rely on time- and personnel-intensive manual EHR review, the proposed workflow-based platform can enhance clinical trial efficiency and improve cancer care. Deployment Readiness of the Pipeline: While there are certain limitations, the performance of our proposed pipeline closely matches that of qualified medical professionals in terms of criteria-level accuracy. This significant achievement highlights the models\u2019 near readiness for practical deployment. Although the model sometimes makes errors, our citation-based approach\u2014similar to techniques used in several LLM-based search engines\u2014offers a solid pipeline to help humans rectify these inaccuracies. Considerations on Model Propriety and Privacy: Our research challenges the necessity of relying exclusively on proprietary and centralized models in contexts where data privacy is paramount. We demonstrate that smaller models, when fine-tuned appropriately, can surpass the performance of their proprietary counterparts. This opens the door to deploying these efficient models in environments where both privacy and cost are of concern. Notably, our model achieves performance metrics comparable to those of GPT4, yet with a significantly lower operational cost, showcasing its potential for scalable applications. Feasibility of LLMs in end-to-end Patient-Trial Matching: To address this question, we extended our pipeline to include a criteria-based ranking system for clinical trials. Our empirical evaluations confirm the feasibility of using LLMs to systematically rank clinical trials against patient records and vice versa. This approach not only facilitated the identification of trials in which patients were actively enrolled, but also highlighted trials for which patients were eligible but in which they were not eventually enrolled. These results substantiate the practicality of employing large language models in the initial phases of clinical trial screening, significantly reducing the time and effort required in trial-patient matching processes. Limitations: While our approach has shown considerable promise, it has limitations that necessitate careful consideration for future development. One of the primary constraints is our reliance solely on unstructured data. Critical information, particularly laboratory values, is often recorded in structured formats and may not be consistently mentioned in patient notes. To address this issue, a hybrid data retrieval approach that integrates both structured and unstructured data could be more effective. Such a model would enhance the comprehensiveness and accuracy of information retrieval, potentially leading to more precise patient-trial matching. Additionally, our current dependence on embedding-based retrievers presents challenges. Despite their advancements, these retrievers have inherent limitations in processing the complex nuances of medical data. It is imperative to rigorously evaluate the impact of these retrievers on the final outcomes of our pipeline. Although we conducted preliminary evaluations of different retrievers, we did not extend this to the fine-tuning or enhancement of these components. The accuracy of our end-to-end system, although improved, does not yet meet the ideal standards. Obtaining \u2019real\u2019 ground truth in clinical environments is challenging. We observed significant variations in the responses provided by different annotators, especially for questions where it is difficult to determine whether the available information suffices for accurate criteria assessment. This variation underscores the need for more robust annotation processes and perhaps a reevaluation of the criteria used for determining patient eligibility. Directions for Future Research: These insights pave the way for future work to focus on developing more integrated models that utilize a balanced mix of structured and unstructured data.
Enhancing the capabilities and precision of embedding-based retrievers, alongside a more rigorous evaluation framework, will be critical to advancing the technology for clinical application. Further, efforts to standardize the annotation process and refine the accuracy benchmarks will significantly contribute to the reliability of AI-driven clinical trial matching systems. Supplementary Information. Comprehensive analyses are provided for each component of our end-to-end pipeline. We conducted experiments on a limited dataset to evaluate the effectiveness of our chunking methods and the distribution of concepts within the inclusion and exclusion criteria of clinical trials. Detailed supplementary materials accompanying this paper include these analyses, offering deeper insights into the procedural nuances and methodological rigor of our study. Acknowledgements. We are grateful for the assistance provided by medical professionals, doctors, and research coordinators who shared insights into existing clinical trial matching processes and challenges. We extend special thanks to Harsh Jain, Aniket Jaiswal, and Sebastien Rhodes for their help in interpreting results and offering valuable prompt engineering suggestions. Additionally, we express our gratitude to Sorena Nadaf, Warren Kibbe, and the entire Ci4CC (Cancer Informatics for Cancer Centers) community for facilitating this collaborative effort. Declarations This study was carried out with the approval of the Institutional Review Board, under PRO #00044894. The authors HS, SG, AB, JT, and MN are employed by Triomics. They were primarily responsible for code implementation, conducting experiments, and various evaluations. NW, AR, AK, and BT from the Medical College of Wisconsin supervised the overall quality and conceptualization of the experiments. They also aided in developing data pipelines, ensuring data quality for dataset preparation, and implementing ranking metrics. YW and TM contributed to conceptualizing experimental methods and supported the writing of the manuscript. The opinions and views expressed in this paper are solely those of the authors and do not reflect the positions or policies of their respective institutions."
},
{
"url": "http://arxiv.org/abs/2404.15238v1",
"title": "CultureBank: An Online Community-Driven Knowledge Base Towards Culturally Aware Language Technologies",
"abstract": "To enhance language models' cultural awareness, we design a generalizable\npipeline to construct cultural knowledge bases from different online\ncommunities on a massive scale. With the pipeline, we construct CultureBank, a\nknowledge base built upon users' self-narratives with 12K cultural descriptors\nsourced from TikTok and 11K from Reddit. Unlike previous cultural knowledge\nresources, CultureBank contains diverse views on cultural descriptors to allow\nflexible interpretation of cultural knowledge, and contextualized cultural\nscenarios to help grounded evaluation. With CultureBank, we evaluate different\nLLMs' cultural awareness, and identify areas for improvement. We also fine-tune\na language model on CultureBank: experiments show that it achieves better\nperformances on two downstream cultural tasks in a zero-shot setting. Finally,\nwe offer recommendations based on our findings for future culturally aware\nlanguage technologies. The project page is https://culturebank.github.io . The\ncode and model is at https://github.com/SALT-NLP/CultureBank . The released\nCultureBank dataset is at https://huggingface.co/datasets/SALT-NLP/CultureBank .",
"authors": "Weiyan Shi, Ryan Li, Yutong Zhang, Caleb Ziems, Chunhua yu, Raya Horesh, Rog\u00e9rio Abreu de Paula, Diyi Yang",
"published": "2024-04-23",
"updated": "2024-04-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Figure 1: Overview. Our goal is culturally-aware language technologies. To do so, we develop a pipeline and construct CultureBank with structured cultural descriptors. Each descriptor comes with a grounded scenario, persona, and question to help evaluate LLMs. We fine-tune a model on CultureBank and improve its performance on two cultural tasks. \u201cGlobally, people express pride, celebrate, and respect cultural diversity, while acknowledging and working towards reducing cultural bias\u201d \u2014 CultureBank Large Language Models (LLMs) have become instrumental in various applications to interact with diverse user populations, such as in recommender systems (Li et al., 2023; Fan et al., 2023) and customer service (Pandya & Holia, 2023). However, these models often 1We release the CultureBank dataset, code, and models at github.com/SALT-NLP/CultureBank. 2Our project page is at culturebank.github.io. 1 arXiv:2404.15238v1 [cs.CL] 23 Apr 2024 Preprint. Under review. mirror Western-centric perspectives (Santurkar et al., 2023; Durmus et al., 2023b), as they are predominantly trained on data that reflect these values and behaviors. Such cultural bias can lead to unintended consequences (Ryan et al., 2024), e.g., reinforcing stereotypes, alienating non-Western users, hindering global deployment and so on. Therefore, it becomes increasingly important to develop language technologies that are aware of diverse cultures. To enhance LLMs\u2019 culture3 awareness, existing studies have developed cultural knowl- edge databases to represent culture-related knowledge and norms, but they have several limitations. (1) They often rely on formal knowledge sources like Wikipedia and online articles (Nguyen et al., 2023; Fung et al., 2024), which miss the rich, evolving and long-tailed cultural nuances experienced by local communities. (2) Secondly, these methods tend to present cultural knowledge in an assertive manner (Nguyen et al., 2023; Fung et al., 2024; Yin et al., 2022), failing to capture the fact that cultural practices and values can vary among individuals within the same cultural group. (3) Besides, their evaluation methods often rely on classification tasks and question answering (Naous et al., 2023; Afina Putri et al., 2024; Shafayat et al., 2024), which is very different from how LLMs are deployed in the real world and hence cannot reflect their cultural awareness in practice. To tackle these challenges, we utilize online communities where people share their cultural experiences, and develop a bottom-up approach to process noisy self-narratives on a mas- sive scale. Using this pipeline, we develop CultureBank, a cultural knowledge base with 12K cultural descriptors sourced from TikTok (Figure 1 shows one example). Besides, to address the limitation on assertiveness, we gather diverse views on similar cultural practices, and calculate an agreement level to enable inclusive cultural understanding. Moreover, to facili- tate contextualized evaluation on LLMs\u2019 cultural awareness, we provide a related situation grounded in real-world settings for each cultural descriptor (e.g., travel consultation in Figure 1). Then we evaluate state-of-the-art LLMs\u2019 cultural awareness on CultureBank, and the results show room for improvement. Additionally, we demonstrate that training LLMs on CultureBank enhances their performance on downstream culture-related tasks. 
We also show that our pipeline can be easily generalized to Reddit, another online community, illustrating its transferability and potential for future expansions. To summarize, we make the following contributions. \u2022 A general framework to collect cultural knowledge from online communities (\u00a74) \u2022 CultureBank, an open-source cultural knowledge base with 12K cultural descriptors from TikTok and 11K from Reddit (\u00a75 and \u00a78). \u2022 Grounded evaluation on existing LLMs\u2019 cultural awareness (\u00a76) and a more culturally-aware language model fine-tuned on CultureBank (\u00a77)",
"main_content": "Cultural knowledge bases. There have been many cultural knowledge base efforts in different domains (Lee et al., 2023; Kim et al., 2024; Jin et al., 2023; Fung et al., 2024). With traditional ethnographic methods, social scientists recorded cultural knowledge through existing historical accounts, ethnographic data, and cultural documents. For instance, behavioral scientists compiled a collection of cultural materials, and released an online database named eHRAF (the Human Relations Area Files) (eHR). In computer science studies, researchers employ computational methods to automatically construct datasets (Penta et al., 2011) from large sources or curate data from crowd source workers (Lee et al., 2023). Nguyen et al. (2023) built a pipeline to extract assertive cultural commonsense knowledge from C4 (Raffel et al., 2020), a large collection of Internet data, and Fung et al. (2024) used Wikipedia and navigated to related online documents to extract cultural knowledge. Data from these sources are much cleaner compared to online communities, and often focus more on normative cultural indicators. Since culture is highly heterogeneous, we also need descriptive cultural expressions from sources like online communities. StereoKG (Deshpande 3We acknowledge that culture is a broad concept, and prior work has attempted to operationalize culture via different proxies. We use culture to refer to the knowledge shared by a relatively large group of people from different backgrounds about their shared beliefs, practices and behaviors. 2 Preprint. Under review. et al., 2022) used Reddit and Twitter to extract cultural stereotypes for 5 religious groups and 5 nationalities, but due to the lack of proper filtering, the results are noisy. As an important complement to existing data sources, our work proposes a pipeline to process highly noisy online communities data on a large scale, and show that it can be easily generalized across different platforms, to provide valuable descriptive cultural knowledge. Cultural-awareness in language models. Previous works have studied cultural dimensions in language models (Guti\u00b4 errez et al., 2016; Ramezani & Xu, 2023; Jiang et al., 2020; Adewole et al., 2021; Yao et al., 2023; Li et al., 2024; Cao et al., 2023; Liu et al., 2021; H\u00a8 ammerl et al., 2022; Huang & Yang, 2023; Wang et al., 2023; K\u00a8 oksal et al., 2023). On the evaluation side, prior studies have measured subjective global opinions from LLMs (Durmus et al., 2023a; Santurkar et al., 2023), and probed cultural value differences in these models (Arora et al., 2022; Yin et al., 2022; Roberts et al., 2023). On the model side, CultureLLM (Li et al., 2024) proposed a cost-effective method to integrate cultural differences into language models with augmented data. This work proposes a grounded way to evaluate cultural awareness to match real-world use cases, and fine-tune a more culturally aware language model with descriptive cultural behaviors constructed from online communities. 3 CultureBank Taxonomy Prior efforts on cultural knowledge base (Nguyen et al., 2023) often represent cultural knowledge in free-text sentences. But free-text contents on online communities are often noisy, and such an unstructured representation hinders further computational operation such as search and filter. 
Therefore, we develop a taxonomy (shown in Table 1) for more structured cultural knowledge representation, based on the taxonomy of social factors (Hovy & Yang, 2021), and the taxonomy of social norms (Ziems et al., 2023; Goffman et al., 2002). It has the following fields: (1) cultural group, (2) context, (3) goal, (4) actor, (5) recipient, (6) relation, (7) actor\u2019s behavior, (8) recipient\u2019s behavior, (9) other description, (10) topic, and (11) agreement. For all these fields, we provide in-context examples and let the model extract any related information without constraint. This allows diversity and inclusivity in the data: for instance, examples for cultural group include typical cultural groups by countries such as \u201cAmerican\u201d, as well as more fine-grained ones by regions or ethnicity groups such as \u201cCalifornian\u201d and \u201cAsian American\u201d, and more broad social groups such as \u201cinternational students\u201d which can be overlooked before (Barth, 2010; Stenou, 2002). Field Definition Example Cultural group groups of people with similar cultural backgrounds American, Californian, Asian American, international student Context settings the behavior takes place in France, in public, 4th of July celebrations Goal what the behavior aims to achieve to adapt to different cultures, to celebrate Actor who exhibit the behavior people, customers, drivers Recipient recipient of the action kids, service staff, passengers Relation relation between the actor and the recipient parents to children, actor to audience, among friends Actor\u2019s behavior behavior of the actor dress casually, tip to express gratitude Recipient\u2019s behavior behavior of the recipient respond with thanks, accept card payments Other description anything that cannot fit into the other fields Bangkok is known for its chaotic traffic Topic topic education and technology, cultural exchange Agreement agreement level, % of people who agree an one-decimal float between 0 and 1, like 0.6 Table 1: Fields, definitions and examples in the CultureBank taxonomy. 4 Construction Pipeline Centering on the proposed taxonomy, we propose a bottom-up pipeline to construct cultural descriptors from online communities. Figure 2 gives an overview of the pipeline which 3 Preprint. Under review. has three parts: (1) descriptor extraction, (2) descriptor clustering, and (3) descriptor postprocessing. See Section C for more implementation details. 3.2 Content Moderation 1. Descriptor Extraction 2. Descriptor Clustering 3. Post-processing 1.1 Cultural Relevance Classifier \u201cI was traveling in Japan and left a tip as thanks, but the owner returned it!\u201d 1.2 Cultural Descriptor Extractor Cultural Group: Japan Context: In Japan Actor's Behavior: Attempt to tip Agreement: 0 Topic: Social Norms and Etiquette 2.2 Cluster Summarizer 2.1 Clustering 3.1 Agreement Calculator Comments on Online Communities Perspective API Score < 0.2 Classifier PII Info Human Annotation Actor: Customers Recipient: Service staff Goal: Express gratitude Human Evaluation Representative Summary Human Evaluation Human Evaluation Agreement 1 1 0 1 Agreement Level = 0.7 Figure 2: CultureBank construction pipeline. Starting from comments on online communities, we will (1) select culture-related comments and extract mentioned cultural descriptors, then (2) cluster these descriptors and summarize the clusters, and finally (3) post-process them to get agreement value and remove bad contents. Each step is validated by human evaluation. 
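To make the taxonomy concrete before describing the extraction step, the following is an illustrative record type for a single cultural descriptor. The field names mirror Table 1 and the example values echo the Japanese tipping example in Figure 2; this is a reader\u2019s sketch under those assumptions, not the released schema.

```python
# Illustrative record type for one CultureBank descriptor, following the Table 1 taxonomy.
# Field names and example values are assumptions for demonstration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CulturalDescriptor:
    cultural_group: str                        # e.g. "Japanese"
    context: Optional[str] = None              # e.g. "in Japan"
    goal: Optional[str] = None                 # e.g. "express gratitude"
    actor: Optional[str] = None                # e.g. "customers"
    recipient: Optional[str] = None            # e.g. "service staff"
    relation: Optional[str] = None             # e.g. "customer to staff"
    actor_behavior: Optional[str] = None       # e.g. "attempt to tip"
    recipient_behavior: Optional[str] = None   # e.g. "politely return the tip"
    other_description: Optional[str] = None
    topic: Optional[str] = None                # e.g. "Social Norms and Etiquette"
    agreement: Optional[float] = None          # share of comments agreeing, in [0, 1]
    support: int = 0                           # number of comments in the cluster


example = CulturalDescriptor(
    cultural_group="Japanese",
    context="in Japan",
    actor="customers",
    recipient="service staff",
    goal="express gratitude",
    actor_behavior="attempt to tip",
    topic="Social Norms and Etiquette",
    agreement=0.7,
    support=12,
)
```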
4.1 Descriptor extraction In the first part of the pipeline, we extract cultural descriptors from self-narrative data like comments and posts on online communities, and organize them into our taxonomy. Culture relevance classifier. Given the large amounts of noisy data, the first step is to get the culturally-relevant portion. To do so, we annotate a subset with 280 training examples, and trained a distill-bert-based (Sanh et al., 2019) cultural relevance classifier. Then the classifier is applied on the entire dataset to get the subset related to culture. The classifier achieves an accuracy of 79% on a held out test set with 100 examples. Cultural descriptor extractor. After obtaining the cultural subset, we employ Llama-2-70B (Touvron et al., 2023), one of the best open-source LLMs at the task time, to extract values for each field in our taxonomy, by conditioning on the definition of fields and in-context examples. Listing 1 shows the prompt used. Human evaluation shows that this extractor achieves an accuracy of 82% across fields in the taxonomy on a test set with 240 examples. 4.2 Descriptor clustering After the extraction step, we have many cultural descriptors, but the same cultural behavior can be expressed in many different ways, for instance, \u201cJapanese do not tip service staff\u201d, or \u201cIn Japan, people do not give tips\u201d. So naturally in the second part, we need to first cluster the extracted descriptors, and summarize each cluster afterwards. Clustering. For the clustering step, we concatenate the extracted fields, encode the concatenated contents with SentenceBert (Reimers & Gurevych, 2019), and perform Hierarchical Agglomerative Clustering (HAC) clustering. We use the cluster size as the support value and remove clusters with less than 5 data points to ensure enough supporting evidences. The clustering parameters are chosen based on the performance on a validation set, and the clustered results achieve an average Silhouette score of 0.14 within the clusters. Cluster summarizer. After clustering, each cluster contains multiple cultural descriptors, so the next step is to summarize and generate a representative descriptor for each cluster. We use Mixtral-8X7B(Jiang et al., 2024), a state-of-the-art open-source language model at the task time, to summarize each cluster. Since the clusters contain noisy opinions, the vanilla model often fails to output a comprehensive summarization with in-context examples. To achieve a 4 Preprint. Under review. better performance, we ask GPT-4 to generate 1K high-quality summarizations, and distilled those samples to train our own Mixtral summarizer. Listing 2 shows the prompt used for the summarizer. Human evaluation shows the cluster summarizer achieves a fidelity score of 89.7% and coherence score of 96.6%. Definitions of these metrics are available in \u00a7C.2. 4.3 Post-processing The final step is to post-process the clustered data. Agreement calculator. People may have different opinions regarding the same cultural behaviors, so instead of assertive statements, we provide agreement levels for each cultural descriptor in our CultureBank. Each cluster now contains \u22655 data points, and each point is associated with an agreement score of 0 or 1, so we compute the average of these agreement scores as the agreement level. Besides, the cluster size can also reflect the agreement level. Content moderation. Finally, online platforms can contain controversial contents. So the last step is content moderation. 
To do so, we first use the perspective API 4, a machinelearning-based content moderation tool, and filter out contents with scores above 0.2 for every category (toxicity, profanity, insult, identity attack, threat, severe toxicity). For more nuanced controversial contents, we annotate 800 examples and train a distill-bert-based classifier (test acc=0.77 on 117 examples), and employ a list of keywords to further identify them. Next, we manually label these identified contents, and remove bad ones. Finally, we use the Presidio Analyzer5 to detect and remove Personal Identifiable Information (PII). 5 CultureBank Dataset TikTok is a popular social media platform with users from diverse cultural backgrounds, so we apply our pipeline on data from TikTok to construct our CultureBank dataset. We obtain TikTok data via their official research API 6 and collect a total of 34K posts and 720K English comments from 2019/05 to 2023/08 with the hashtags \u201c#culturaldifference\u201d and \u201c#cultureshock\u201d. Table 2 shows CultureBank basic statistics after construction: for Tiktok, there are 12K cultural descriptors, 730 cultural groups, and 36 topics. Table 9 shows the topic distribution. Table 7 shows the running time and the data volume after each step. Statistics TikTok Reddit # cultural descriptors 11,754 11,236 # cultural groups 730 1,850 # cultural topics 36 36 Table 2: CultureBank basic statistics. Metrics TikTok Reddit Well-formatted 98.5% 95.5% Traceable 93.3% 94.0% Meaningful 84.5% 85.0% Table 3: Annotated CultureBank quality. To assess the dataset quality quantitatively, we select a random subset with 200 samples, and four human annotators annotated them for their (1) format (if the descriptor is wellformatted), (2) traceability (if it is possible to trace the cultural knowledge on the Internet) and (3) meaningfulness (if the descriptor provides meaningful cultural insights rather than generic ones). We evaluate them on traceability instead of factualness, since these descriptors are self-reported and may be nuanced, so it is difficult to fact-check them; as long as there is related information online, we consider it traceable and meaningful to be included. The annotators achieved a Kappa score of 0.8. Table 3 shows that CultureBank has well-formatted, traceable and meaningful cultural descriptors with moderate noise levels. Table 6 shows qualitative examples in CultureBank. It presents interesting features, such as: (1) cross-culture behaviors: e.g., Americans in France experience culture shock in terms of electricity bills and driving habits; (2) linguistics variations: e.g., Americans use \u201cchickpeas\u201d or \u201cgarbanzo beans\u201d interchangeably; (3) diverse ethnic groups: e.g., Italian Americans identify themselves as Italian American with varying connection to Italy heritage; (4) recent cultural 4https://perspectiveapi.com/ 5https://microsoft.github.io/presidio/analyzer/ 6https://developers.tiktok.com/products/research-api/ 5 Preprint. Under review. information: e.g., Chinese people heavily rely on mobile payment; and (5) cultural nuances hard to obtain from formal sources like Wikipedia: e.g., in South Africa, some people express frustration over having to calculate prices and taxes separately while others do not think so. For the following evaluation (\u00a76) and fine-tuning (\u00a77) steps, we split CultureBank-TikTok by cultural descriptors into 9402 train, 1183 validation, and 1169 test samples. 
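Before turning to evaluation, the clustering and agreement steps described in Sections 4.2-4.3 can be sketched as follows. The encoder name, distance threshold, and clustering parameters are assumptions (the paper tunes its parameters on a validation set), and the content moderation and PII removal steps are omitted here.

```python
# Sketch of descriptor clustering (Section 4.2) and agreement computation (Section 4.3):
# encode concatenated taxonomy fields, cluster with hierarchical agglomerative clustering,
# drop clusters with fewer than 5 supporting comments, and average 0/1 agreement labels.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering


def cluster_descriptors(texts, agree_labels, min_support=5, distance_threshold=0.4):
    """texts: concatenated taxonomy fields per comment; agree_labels: 0/1 per comment."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
    emb = encoder.encode(texts, normalize_embeddings=True)

    hac = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=distance_threshold,
        metric="cosine",
        linkage="average",
    )
    labels = hac.fit_predict(emb)

    clusters = {}
    for idx, c in enumerate(labels):
        clusters.setdefault(c, []).append(idx)

    results = []
    for c, members in clusters.items():
        if len(members) < min_support:  # require >= 5 supporting comments
            continue
        agreement = float(np.mean([agree_labels[i] for i in members]))
        results.append({"cluster": int(c), "support": len(members), "agreement": agreement})
    return results
```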
6 Evaluating LLMs\u2019 Culture Awareness With CultureBank-TikTok, we evaluate LLMs\u2019 cultural awareness. Prior work asks LLMs to answer cultural true/false questions (Fung et al., 2024). But LLMs are used in contextualized settings like a dialogue agent. So we propose a grounded evaluation, that grounds cultural knowledge in a real-world scenario, to test LLMs\u2019 ability to integrate cultural knowledge into their responses. We also perform classification-based direct evaluation in \u00a7 D.2. Grounded data generation. For each descriptor in CultureBank, we first use a Mixtral-8x7B model fine-tuned on GPT-4-generated examples to generate a relevant consulting scenario, a client persona, and a grounded evaluation question. Then we employ a self-refinement method to improve the model generation based on two quality-control metrics at inference time. Figure 3 shows a generated example (For the \u201dNo tipping in Japan\u201d descriptor, the grounded question is \u201dwhat gesture says \u2019thanks you\u2019 in Japan?\u201d). Human annotation shows 86% questions are grounded on the original descriptor. See \u00a7D.3 for more details. Grounded evaluation. As shown in Figure 3, we present the generated grounded question to the LLM for an answer. Given the answer, we perform (1) automatic evaluation that uses GPT-4 to judge if the answer entails the original cultural descriptor (entailment score); and (2) human evaluation where two experts compare answers from two LLMs and select the more culturally-aware one (win rate). Figure 3: Workflow of grounded evaluation. We present the grounded question to an LLM and get an answer. Given the answer, we perform automatic evaluation and human evaluation. Model High Mid Low All Llama-2-7B-chat 71.2 66.0 61.2 62.5 Llama-2-70B-chat 74.9 66.2 64.2 65.1 Mistral-7B-Instruct 72.9 67.2 63.4 64.5 Mixtral-8x7B-Instruct 73.9 67.4 66.3 66.9 GPT-3.5 71.4 66.4 61.8 62.6 GPT-4 75.8 67.9 65.0 66.1 Llama2-7B-SFT (Ours) 75.7 67.1 63.8 64.7 Mixtral-8X7B-SFT (Ours) 73.3 70.3 66.6 67.5 Mixtral-8x7B-DPO (Ours) 72.4 70.5 68.1 68.7 Table 4: Automatic evaluation on LLMs\u2019 cultural awareness, evaluated by knowledge entailment scores on our grounded evaluation benchmark by support. High support: cluster size > 50 (70 examples). Mid: cluster size between 20 and 50 (175 examples). Low: cluster size \u226420 (924 examples). 20% 46% 36% 34% 76% 38% 50% 48% 4% 16% 14% 18% 0% 20% 40% 60% 80% 100% Win, Tie, and Lose Rates by Model Win Rate Tie Rate Loss Rate Mixtral-SFT (Ours) vs Mixtral-Vanilla Mixtral-DPO (Ours) vs Mixtral-Vanilla Mixtral-SFT (Ours) vs GPT4 Mixtral-DPO (Ours) vs GPT4 Figure 4: Human evaluation on win rates between different LLMs (50 examples per pair) evaluated by humans on cultural-awareness in grounded consulting scenarios. The two annotators achieved a Kappa score of 0.87. 6 Preprint. Under review. Evaluated models. We evaluate open-source (Llama-2, Mixtral), close-source (GPT families (Achiam et al., 2023)), and our own fine-tuned models (See \u00a77). See Table 10 for the model version details. All the results are on the test set. Automatic entailment results. Table 4 shows the average entailment score of each model split by the cluster size (level of support). Mixtral-8X7B and GPT-4 are the best but still has a relatively low overall score of 66.9 and 66.1, suggesting room for improvements. Larger models have slightly better performance than their smaller versions. 
For more long-tailed cultural descriptors with fewer supports, the performance drops as expected. Human-evaluated win rate. Figure 4 shows the human evaluation results. Compared to the base Mixtral model, Mixtral-DPO is strictly more culturally aware 36% of the time, and equally good 49% of the time; compared to GPT-4, Mixtral-SFT wins 46% of the time and ties 38% of the time. This trend also aligns with the automatic evaluation in Table 4, indicating that the automatic entailment evaluation makes sense. Qualitatively (Table 14), we find that our fine-tuned models generate shorter, and more culturally specific answers tailored to the user\u2019s inquiry (e.g., \u201cin France, you might seek out artisanal cheese shops\u201d), whereas standard RLHF-ed models like GPT-4 often give generic templated answers (e.g., \u201c1. Research local specialties, 2. Portion control...\u201d). \u00a7D.3.3 has more details on the human evaluation, models\u2019 win rates, and qualitative analysis. 7 Fine-tuning a More Culturally Aware Language Model Our ultimate goal is to develop more culturally-aware language technologies. So we train on our CultureBank dataset to see if such a resource can improve LLMs\u2019 cultural awareness. Training process. The training has two steps. First, we train a model on the 9402 cultural descriptors in the training set via supervised fine-tuning (SFT). In the second step, we select a 2K subset where the model performs poorly, and train on the grounded questions and answers augmented by golden cultural descriptors via SFT (\u201cmodel-SFT (Ours)\u201d in the tables) or DPO (Rafailov et al., 2024) (\u201cmodel-DPO (Ours)\u201d) . See \u00a7 E for more details. Results. Table 4 shows the results on the test set. The test set mostly contains out-of-domain cultural descriptors, but the fine-tuned models can still achieve a better performance than their vanilla versions, suggesting that CultureBank can improve models\u2019 cultural awareness. 7.1 Zero-shot transferability on downstream tasks Ideally, a more culturally-aware language model can achieve a better performance on different culture-related tasks. So, we also evaluate the fine-tuned models on two downstream tasks, to see if our CultureBank can help other cultural tasks in a zero-shot fashion. Downstream tasks. We choose two tasks: (1) GlobalOpinionQA (Durmus et al., 2023a), which has questions from world value survey (Inglehart et al., 2000) to measure how similar LLM-generated responses are towards different countries; (2) CultureNLI (Huang & Yang, 2023), which contains premise-hypothesis pairs labeled by two cultural groups, American and Indian. The detailed prompts and evaluation settings are in Appendix E.2. Results. Table 5 shows the results. On both datasets, the models fine-tuned on CultureBank achieves a better performance than the vanilla counterparts (e.g., in GlobalOpinionQA, 79.5 VS. 81.8 for Mixtral-SFT; in Culture NLI, 59.9 VS. 61.5 in the US and 60.8 VS. 61.3 in India for Mixtral-SFT). These results suggest that CultureBank can be used to improve models\u2019 cultural awareness in downstream tasks even in a zero-shot setting. 8 Generalizing to Reddit There are different online communities, so it is important to test if our pipeline can be transferred to other platforms. So we apply our pipeline on Reddit, another online community, with the following customization. Table 8 shows the running time and data volume. 7 Preprint. Under review. 
Model (zero-shot) GlobalOpinionQA CultureNLI Avg Sim (\u2191) Skew (\u2193) US (\u2191) IN (\u2191) Llama-2-7B-chat 83.6 2.2 39.2 39.5 Llama-2-70B-chat 83.6 2.2 69.7 68.9 Mistral-7B-Instruct 79.3 3.2 42.5 43.8 Mixtral-8x7B-Instruct 79.5 2.7 59.9 60.8 GPT-3.5 75.0 73.0 GPT-4 80.0 72.0 Llama2-7B-SFT (Ours) 85.4 1.5 39.2 39.6 Mixtral-8X7B-SFT (Ours) 81.8 2.8 61.5 61.3 Mixtral-8x7B-DPO (Ours) 80.5 2.6 56.3 55.4 Table 5: Zero-shot cultural awareness on GlobalOpinionQA and CultureNLI. A higher Avg Similarity means the model\u2019s output distribution is closer to the surveyed distribution for each country. A lower Skewness indicates the model\u2019s predictions are more balanced across countries (less variance). US and IN show the F1 score on US and India. In GlobalOpinionQA, GPTs\u2019 results are NA because we do not have access to their logit distributions. \u2022 Culture Relevance Classifier: similar to TikTok, we first search for tags like \u201d#culturaldifference\u201d on Reddit, but tags are not used frequently on Reddit, so we also directly search for cultural keywords on both submissions and comments to identify cultural contents. See detail in \u00a7 F. As Reddit comments are much longer than TikTok comments, the TikTok-based cultural relevance classifier does not work well. So we annotate 1K examples, and train a new cultural relevance classifier for Reddit to get the culturally-relevant portion from the keyword-curated subset. Finally, we obtain 2.6M cultural comments. Considering the computation cost, we take a random subset of 528K cultural comments for the following processing steps. \u2022 Descriptor Extractor: to achieve a better performance, instead of using few-shot Llama-2 extractor, we fine-tune a Mixtral-based extractor on 1K GPT-4-generated extraction examples to extract structured cultural descriptors from Reddit comments. Table 2 shows the basic statistics: CultureBank-Reddit contains 11K cultural descriptors and 2K cultural groups. Human annotation in Table 3 shows that it also contains high-quality data, suggesting that our pipeline can be easily generalized to a different platform. 9 Recommendation for Culturally aware Language Technologies Informed by results on CultureBank construction and analysis, cultural awareness evaluation, and fine-tuning, we outline insights towards future culturally-aware language technologies. 9.1 Cultural knowledge data We show that fine-tuning on CultureBank can improve the cultural-awareness on various downstream tasks, so it remain critical to keep developing cultural knowledge databases. Data source. Prior work often relies on formal data sources and collapse different sources together: e.g., Fung et al. (2024) started from WikiPedia and continued to scrape any related websites to construct cultural knowledge bases. But different data sources cover various aspects of culture: official documents like textbooks provide factual cultural knowledge, while online communities like social media offer insights on everyday cultural practices. So we should invite diverse data sources to capture the full spectrum of culture in the future. Besides, different data sources host different populations: Table 9 shows that topic-wise, 8 Preprint. Under review. CultureBank-Reddit contains more contents on community and identify, while CultureBankTikTok is more about daily life like social norms and etiquette. So future datasets should keep data source as an important attribute to allow further analysis. Data contents. 
Culture is multifaceted, so it is also important to factor in various dimensions in the data content. Here is an example list of attributes to consider. \u2022 Cross-culture behavior. In a globalized world, it is crucial to understand crossculture behaviors to facilitate effective communication (Watkins, 2012). Our CultureBank contains some cross-culture behaviors but we need more efforts on it. \u2022 Perspectives. It is also important to track through whose lens we are looking at a certain culture behavior, because different perspectives may lead to different understanding of the same cultural practice (Iyengar et al., 1999; Brewer, 1999). \u2022 Time. Culture changes over time. In CultureBank, we release the time range associated with the cultural descriptors. Future data efforts should also consider the time factor to enable temporal analysis. \u2022 Multilingual. Culture and language are deeply intertwined. But many existing cultural knowledge bases still rely on English. To capture the cultural nuances, in the future, we should develop multilingual multicultural knowledge banks. \u2022 Multimodality. Cultural knowledge goes much beyond text information. So it is essential to include different modalities to capture the full spectrum of culture, from non-verbal communication cues, to rituals and arts, and so on in the future. Data analysis. In terms of data analysis, future research should consider temporal change rather than focusing on static data, as culture is evolving over time. For instance, we perform preliminary temporal analysis in \u00a7 G.1 and find there are more discussions around studying abroad, LGBTQ+ rights, and technology over the years. Besides, existing research still categorizes culture by country, but we need to attend to more fine-grained cultural groups (ethnicity, generation, regions, ethnolinguistics, immigrants, socioeconomics, etc), to fully understand cultural diversity. Moreover, the study of cultural adaptation becomes increasingly important, as it reveals how culture changes in response to global influences. These focus areas \u2013 temporal dynamics, cultural group diversity, and adaptation processes \u2013 offers a comprehensive understanding of the fluid nature of culture in a globalized world. 9.2 Cultural awareness evaluation We highlight two findings in evaluation. First, in our evaluation, humans also find it difficult to decide which model response is more culturally aware, partly because they are not from the presented cultural group. As we spend more effort on cultural data resources, it is also increasingly important to involve global annotators to enable more accurate evaluation. Secondly, as shown in our findings, direct and grounded evaluations give different results. So during evaluation, it is important to be more grounded on the end applications. 9.3 Training culturally-aware language technologies We realize that when fine-tuning models for cultural awareness, training only on the cultural knowledge or the grounded QA tasks could be insufficient. Take training a culturally-aware conversational assistant as an example. First, it requires appropriate cultural data grounded in multi-turn conversational settings. In addition, it requires a well-designed training paradigm to attend to the cultural nuances potentially implicit in the dialogue context. It also needs a solid evaluation method to rate the culture awareness of the generated responses, to help the model improve and evolve. 
Such a model needs to have a holistic view of the user cultural background, a personalized recognition of individual differences, and an inclusive mind for new cultural concepts and practices. 9 Preprint. Under review. 10 Conclusion To conclude, our study introduces a generalizable pipeline for creating cultural knowledge bases from online communities. Using the pipeline, we develop CultureBank, a cultural knowledge database with 12K cultural descriptors sourced from TikTok, and 11K from Reddit. CultureBank features agreement levels for nuanced cultural interpretation and contextualized scenarios for grounded evaluation. With CultureBank, we assess the cultural awareness of various LLMs, showing room for improvement. Further, fine-tuning an LLM with CultureBank leads to better performance on downstream cultural tasks, which showcases the potential of CultureBank. Finally, drawing from our findings, we close the paper by presenting insights towards future culturally-aware language technologies. Ethical Statement In this work, we construct a cultural knowledge base from online communities. Given the large size of the dataset, we acknowledge that stereotypes, controversial, and negative content may still exist in our dataset, despite our rigorous efforts to filter the data and minimize the impact of such content. We want to emphasize that the cultural descriptors in CultureBank are not intended to reflect, nor should they be interpreted as reflecting, the personal views or opinions of the authors or the online platforms. We call for a better approach for content moderation in the future and hope that researchers will use our data with a discerning perspective, and always consider the broader implications of its application and the potential for reinforcing harmful biases. We also recognize the responsibility that comes with handling cultural data, especially from diverse and broad communities like those on TikTok and Reddit. In our method, we have strived not only for technological innovation but also for a conscious approach that respects the dignity, privacy, and cultural sensitivities of individuals and groups represented in the data. This includes anonymizing data where possible, ensuring compliance with platform terms of service, and engaging with ethical guidelines that govern research in social sciences and humanities. In conclusion, while we acknowledge the limitations and challenges inherent in our work, we believe in its potential to contribute positively to the field of culturally-aware language technology. We encourage the community to join us in these efforts, to promote cultural diversity, inclusivity, and sensitivity. We discuss limitations of this work in \u00a7A. Acknowledgement We thank feedback from Chunchen Xu, Emily Goodwin, Jing Huang, and members from the SALT lab at Stanford University. We also thank TikTok for providing the research API. Stanford processed the raw data internally. IBM provides high-level feedback and is not involved in the data processing."
},
{
"url": "http://arxiv.org/abs/2405.00705v1",
"title": "SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning",
"abstract": "The pre-trained Large Language Models (LLMs) can be adapted for many\ndownstream tasks and tailored to align with human preferences through\nfine-tuning. Recent studies have discovered that LLMs can achieve desirable\nperformance with only a small amount of high-quality data, suggesting that a\nlarge amount of the data in these extensive datasets is redundant or even\nharmful. Identifying high-quality data from vast datasets to curate small yet\neffective datasets has emerged as a critical challenge. In this paper, we\nintroduce SHED, an automated dataset refinement framework based on Shapley\nvalue for instruction fine-tuning. SHED eliminates the need for human\nintervention or the use of commercial LLMs. Moreover, the datasets curated\nthrough SHED exhibit transferability, indicating they can be reused across\ndifferent LLMs with consistently high performance. We conduct extensive\nexperiments to evaluate the datasets curated by SHED. The results demonstrate\nSHED's superiority over state-of-the-art methods across various tasks and LLMs;\nnotably, datasets comprising only 10% of the original data selected by SHED\nachieve performance comparable to or surpassing that of the full datasets.",
"authors": "Yexiao He, Ziyao Wang, Zheyu Shen, Guoheng Sun, Yucong Dai, Yongkai Wu, Hongyi Wang, Ang Li",
"published": "2024-04-23",
"updated": "2024-04-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "The development of LLMs marks a major leap in machine learning, transforming how we approach natural language processing (NLP) and Artificial Intelligence (AI) research [1]. Models such as GPT- 3 [2] and LLaMA [3] highlight the benefits of pre-training on large and diverse datasets, empowering these LLMs with a wealth of knowledge. Moreover, one of the pivotal strengths of LLMs lies in their adaptability to specific tasks through fine-tuning. Fine-tuning, a process that involves updating models on a task-specific dataset, enables the pre-trained model to acquire task-specific information. Furthermore, it facilitates the alignment of models to more accurately follow human instructions through fine-tuning on a dataset comprised of instructions paired with appropriate responses, which is known as instruction tuning. However, fine-tuning LLMs also raises challenges. A primary concern is that the noisy data or harmful instances in the fine-tuning dataset can significantly degrade the performance of pre-trained LLMs [4]. Many works have developed large and diverse datasets for fine-tuning purposes. While the datasets for fine-tuning have grown in size, recent research suggests that meticulously curated datasets of high quality, even if smaller in size, can be more effective in harnessing the full potential of LLMs [5, 6, 7]. Indiscriminately increasing the volume of data can lead to ineffective performance improvements and might even deteriorate model performance as a result of introducing noisy and harmful instances. In addition, for instruction tuning, the model has already learned the necessary knowledge in the pre-training stage. The dataset used in the fine-tuning stage merely aims to better Preprint. Under review. arXiv:2405.00705v1 [cs.CL] 23 Apr 2024 align the models to follow human instructions, indicating this process does not necessitate extensive data [8]. Furthermore, fine-tuning LLMs on extensive datasets incurs significant computational costs. The necessity for considerable GPU resources presents a critical challenge [9]. Only those researchers and institutions equipped with sufficient computing resources are able to perform such tasks, limiting the broader applications and progress within the LLM community. Consequently, there is a pressing need to design a novel method for curating small and high-quality datasets that enable efficient fine-tuning. Previous efforts have employed various methods such as curation or generation through manual efforts or commercial LLMs [5, 10], identifying subsets from larger datasets via training dynamics or estimating marginal contributions [11, 12]. Most current methods for data selection neglect the potential influence that different combinations of samples can have on model performance. The Shapley value [13], introduced in cooperative game theory, provides a method for fairly evaluating the contribution of each participant by examining all possible combinations and their effects on the overall result. This principle has also been utilized in machine learning to assess the impact of individual data within a given dataset [14]. The Shapley value can serve as criteria to refine one or more large datasets to extract high-quality data points, enabling the curation of a smaller yet high-quality dataset. This method not only facilitates the selection of impactful data, but also considers the effectiveness of selected data combinations. Shapley value seems to be a promising toolkit for data selection. 
However, calculating the Shapley value for all the data samples in a dataset is computationally expensive, especially for large-scale fine-tuning datasets. Original dataset Model-agnostic Clustering Proxy-based Shapley calculator Optimization- aware sampling Curated dataset Figure 1: Overview of SHED. Motivated by the aforementioned challenges, we present SHED , a Shapley-based automated dataset refinement framework for fine-tuning LLMs. The key intuition behind SHED is to perform Shapley value evaluations on a small portion of representative sam- ples only, such that the computation complexity of the Shapley-based data refinement can be dramatically de- creased. Specifically, as Figure 1 illustrates, SHED consists of three key components: model-agnostic clus- tering, proxy-based Shapley calculator, and optimization-aware sampling. Initially, the model- agnostic clustering groups embeddings of the original dataset, and then selects representative data samples as a proxy for each cluster based on the distance of embeddings to the cluster centroid. These proxy data instances are then evaluated by the proxy-based Shapley calculator, which employs an approximation method to efficiently calculate their Shapley values, focusing on task-specific objectives (e.g., accuracy and fairness). This method involves iteratively removing groups of in- stances from the proxy dataset and assessing the performance variation of the model to estimate the collective contribution of these instances, thereby streamlining the computation of Shapley values. The derived Shapley values of these proxy data instances are used as the quality score for their respective clusters. Finally, optimization-aware sampling selects data from clusters to compile a compact yet high-quality dataset, employing strategies that may favor clusters with higher-quality scores. SHED only computes Shapley values for the cluster representatives rather than each data, drastically boosting the efficiency of data refinement. Moreover, SHED offers a unified yet flexible framework, catering to various user needs by providing multiple options within each component. For example, the optimization objective for Shapley value measurement can be tailored to specific tasks (e.g., fairness). Our key contributions can be summarized as follows: \u2022 We present SHED , a generic data refinement framework based on Shapley values, which can curate a small yet high-quality dataset for boosting the efficiency of fine-tuning LLMs. \u2022 We conducted extensive experiments on two benchmark datasets, i.e., MMLU and Wiz- zardLLM, the results demonstrate that fine-tuning LLMs with small datasets curated by SHED yields performance comparable to, or even better than, using the original large datasets. Remarkably, we observed that the datasets curated by SHED exhibit transferability, achieving robust performance across various LLMs models even from different families. This amortizes the computational cost for data selection when applying the curated dataset to fine-tune various LLM models. \u2022 All the code associated with the collection of high-quality datasets curated by SHED can be found at SHED: Shapley-Based Automated Dataset Refinement. 2",
"main_content": "2.1 Coreset Selection Coreset selection plays a critical role in machine learning by targeting the selection of a representative subset from a larger dataset. Various coreset selection methods use unique criteria for choosing samples. Geometry-based approaches focus on the geometric properties of the data points, striving to retain geometrically significant samples that represent the overall data distribution [15, 16, 17, 18]. Uncertainty-based methods choose samples based on the uncertainty they present to the model, typically engaging samples that the model finds challenging to classify [19, 20, 21]. Decisionboundary-based methods select samples that are close to the decision boundary of the classifier, ensuring that the nuances of the classification boundary are well-represented in the selected subset [22, 23]. Gradient-matching approaches involve selecting a subset that yields similar gradient distributions as the entire dataset when used in training [24, 25]. Bilevel Optimization optimizes the coreset selection in a way that the selected subset maximizes certain performance metrics [26]. Submodularity-based approaches consider both diversity and information richness, striving for a balanced representation of the dataset [27]. 2.2 Data Selection for Instruction Fine-tuning Due to the superiority of instruction tuning in enhancing the performance of LLMs, many recent studies focus on selecting high-quality instruction tuning data. Based on methods, it can be divided into the following categories. Indicators-based methods define multiple metrics, such as instruction length and perplexity, to compute quality scores for each instruction instance [10, 28, 29, 30]. Training-based methods leverage the performance improvement through fine-tuning to score and select instruction data suited for fine-tuning [31, 32, 33, 34, 35, 36]. Some other methods employ commercial LLMs like ChatGPT to assess quality, complexity, and diversity of instructions for selection [7, 37, 38, 39, 40]. 2.3 Limitations of Previous Work Most existing methods for data selection overlook the impact of various data combinations on model performance. As Table 1 illustrates, datasets formed by combining high-quality data, which are merely based on the independent quality score of each individual data sample, do not necessarily enhance model performance effectively. The combination of different data can impact the final performance of fine-tuning. Furthermore, many works are task-specific, limiting their broader applicability. 3 Proposed Method Motivated by the aforementioned challenges, we present SHED, a generic framework that exploits Shapley value to identify and select high-quality data to improve the performance and efficiency of fine-tuning LLMs. 3.1 Preliminary Table 1: We apply DSIR [41] to compile a high-quality dataset (10k instances), a random dataset (10k instances) from MMLU, and a mixed dataset samples 5k instances from each of the high-quality and random datasets. We fine-tune the LLaMA-7B model [3] on the curated dataset and evaluate them using the MMLU test set. Dataset High-quality Random Mixed MMLU 40.04 39.13 40.92 The motivation behind this work is underscored by the observation, as illustrated in Table 1, that naively aggregating high-quality data merely based on the independent importance of individual samples does not guarantee a performance improvement of fine-tuning. 
We believe this phenomenon is attributed to the complex interactions between different instances within the fine-tuning process. Thus, there is a pressing need to design a novel data selection method, which accounts for the individual and collective contributions of instances to model performance. The Shapley value offers a compelling solution to this challenge. It quantifies the marginal contribution of each instance to the overall performance of the model, considering all possible combinations of instances. The formulation of the Shapley value for a data sample i in dataset D can be expressed as: $S_i = \sum_{P \subseteq D \setminus \{i\}} \frac{|P|!\,(|D| - |P| - 1)!}{|D|!} \bigl(v(P \cup \{i\}) - v(P)\bigr)$, (1) where $S_i$ is the Shapley value of $i$, $P$ is a subset of dataset $D$, $|D|$ and $|P|$ are the numbers of instances in $D$ and $P$, and $v(P)$ is the value function of $P$, which represents the performance of the LLM fine-tuned on the subset $P$. As Eq. 1 indicates, the Shapley value of an instance $i$ captures its average impact on model performance across all subsets it might be part of. This ensures a fair evaluation of the contribution of each instance in the original dataset, so that the selected data are genuinely beneficial for enhancing model performance when integrated with other data samples. Additionally, the value function $v(P)$ in Eq. 1 serves to calculate contributions from the corresponding data. This value function can be tailored to various optimization objectives, such as accuracy and fairness, facilitating the selection of data that aligns with task-specific requirements. However, computing the Shapley value, as depicted in Eq. 1, demands extensive computational effort, because it requires evaluating the contribution of each instance across all possible combinations. For a dataset with $|D|$ instances, there are a total of $2^{|D|} - 1$ possible combinations. For each combination, two evaluations are needed, i.e., one includes a certain instance and the other one holds out that instance, doubling the computational workload to determine the contribution of that particular instance. Thus, the time complexity for measuring the Shapley value of each instance is $O(2^{|D|})$. Given the need to perform this calculation for all $|D|$ instances to determine their individual Shapley values, the overall time complexity for the dataset increases to $O(|D| \cdot 2^{|D|})$. This exponential complexity makes direct computation of Shapley values impractical for large datasets. 3.2 Design of SHED Figure 2: Workflow of SHED: (1) clustering and determining proxy data; (2) calculating Shapley values as scores; and (3) sampling based on scores. To address the above challenges, we design SHED, comprising three key components: model-agnostic clustering, proxy-based Shapley calculator, and optimization-aware sampling. We introduce each component in detail. Model-agnostic Clustering. Given the time complexity of computing the Shapley value, calculating the Shapley value for all instances in a large fine-tuning dataset is impractical. The model-agnostic clustering employs models from Sentence Transformers [42] to generate semantically meaningful embeddings for each sample in the original dataset. These embeddings facilitate the efficient and effective computation of semantic similarities between textual inputs, enabling the grouping of data with similar contexts.
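A minimal sketch of this clustering and proxy-selection step (described here and continued below) is given next. The embedding model name, the cluster count, and the shape of `samples` (a list of instruction strings) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: embed samples, cluster the embeddings, and pick one proxy per
# cluster (the member closest to the centroid). Model name and cluster count are
# placeholder assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans


def cluster_and_pick_proxies(samples, n_clusters=3000, model_name="all-MiniLM-L6-v2"):
    """Return (cluster labels, {cluster id -> index of its proxy sample})."""
    encoder = SentenceTransformer(model_name)
    embeddings = encoder.encode(samples, convert_to_numpy=True)

    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)

    proxies = {}
    for c in range(n_clusters):
        members = np.where(kmeans.labels_ == c)[0]
        if len(members) == 0:
            continue
        # distance of each member to its cluster centroid in embedding space
        dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
        proxies[c] = int(members[np.argmin(dists)])
    return kmeans.labels_, proxies
```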
Moreover, those model-agnostic embeddings enhance the transferability of the curated dataset, as demonstrated in Table 7. Then, the model-agnostic clustering applies algorithms, such as K-means [43] and Agglomerative Clustering [44], to group the embeddings. It then selects, for each cluster, the representative data sample that is closest to the cluster centroid in the embedding space. In doing so, we use these representative samples as the proxy of the respective clusters. Subsequently, SHED only calculates the Shapley values of those proxy data, using their Shapley values as the quality scores for their respective clusters. Employing proxy data effectively captures the essence of the diversity and complexity in the dataset. This strategy significantly reduces the computational burden associated with calculating Shapley values across vast datasets. Proxy-based Shapley Calculator. To further improve efficiency for Shapley value calculations, the proxy-based Shapley calculator employs an approximation method to estimate the Shapley values of the proxy data. This method iteratively removes groups of $n$ instances from the proxy data $D_p$, followed by an evaluation of the model's performance to assess the impact of these instances. The performance variations before and after the removal of a specific group of instances quantify their collective contribution. Specifically, the contribution of the initial group of $n$ instances, denoted as $c_{(1..n) \in D_p}$, is computed by $c_{(1..n) \in D_p} = v(D_p) - v(D_p \setminus \{1..n\})$. Similarly, the contribution of the subsequent group of $n$ instances is determined by $c_{(n+1..2n) \in D_p} = v(D_p \setminus \{1..n\}) - v(D_p \setminus \{1..2n\})$. This procedure is repeated, progressively removing groups of $n$ instances until the entire proxy dataset has been visited, which marks the completion of a single iteration. This entire iteration process is then repeated $k$ times to enhance the accuracy of the approximation. After completing $k$ iterations, the Shapley value for a certain instance $i$ of the proxy dataset is approximated using the average of its contributions across all iterations, defined as $S_i \approx \frac{1}{k} \sum_{k} \frac{c_i(k)}{n}$, where $c_i(k)$ denotes the contribution associated with instance $i$ in the $k$-th iteration. Optimization-aware Sampling. The Shapley value of each proxy data sample is assigned as the quality score of the corresponding cluster. Optimization-aware sampling utilizes these quality scores to sample data from these clusters, aiming to curate a small yet high-quality dataset. Optimization-aware sampling offers two sampling methods: Quality-Ordered Cluster Sampling (QOCS) and Quality-Weighted Cluster Sampling (QWCS). QOCS prioritizes sampling from clusters with the highest quality scores. It selects instances starting from the highest-quality clusters until a predefined target sampling number is reached. QWCS adopts a probabilistic approach to sample instances across all clusters, with the probability of selection from a given cluster weighted by its quality score. This method aims to balance quality with diversity by allowing for the inclusion of instances from a broader array of clusters, thus potentially enriching the dataset with a wider variety of high-quality data points. The probability $\Pr(i)$ of selecting an instance from cluster $i$ is defined in Eq. 2: $\Pr(i) = \frac{e^{f S_i}}{\sum_i e^{f S_i}}$, (2) where $S_i$ represents the quality score of cluster $i$, and $f$ is a scaling factor that modulates the emphasis on quality versus diversity within the sampled dataset.
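The following is a minimal sketch of the proxy-based Shapley approximation and the two sampling strategies just described. Here `value_fn(subset)` stands in for "fine-tune the base model on `subset` and return the task metric"; its implementation, the shuffling of groups between iterations, and the default values of n and k are assumptions made for illustration.

```python
# Sketch only: iterative group-removal Shapley approximation plus QOCS / QWCS
# sampling. value_fn, the per-iteration regrouping, and defaults are assumptions.
import random
import numpy as np


def approx_shapley(proxy_ids, value_fn, n=60, k=10, seed=0):
    """Estimate per-proxy Shapley values by removing groups of n instances."""
    rng = random.Random(seed)
    contribs = {i: [] for i in proxy_ids}
    for _ in range(k):
        order = list(proxy_ids)
        rng.shuffle(order)                 # a fresh random grouping each iteration (assumed)
        remaining = list(order)
        prev_value = value_fn(remaining)
        for start in range(0, len(order), n):
            group = order[start:start + n]
            remaining = [j for j in remaining if j not in set(group)]
            new_value = value_fn(remaining)
            per_instance = (prev_value - new_value) / len(group)  # c_i(k) / n
            for j in group:
                contribs[j].append(per_instance)
            prev_value = new_value
    return {i: float(np.mean(c)) for i, c in contribs.items()}


def qocs(cluster_scores, cluster_members, budget):
    """Quality-Ordered Cluster Sampling: take whole clusters, best score first."""
    selected = []
    for c in sorted(cluster_scores, key=cluster_scores.get, reverse=True):
        selected.extend(cluster_members[c])
        if len(selected) >= budget:
            break
    return selected[:budget]


def qwcs(cluster_scores, cluster_members, budget, f=1.0, seed=0):
    """Quality-Weighted Cluster Sampling: pick clusters with probability softmax(f * score)."""
    rng = np.random.default_rng(seed)
    clusters = list(cluster_scores)
    pool = {c: list(cluster_members[c]) for c in clusters}
    logits = f * np.array([cluster_scores[c] for c in clusters], dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    selected = []
    while len(selected) < budget and any(pool.values()):
        c = clusters[rng.choice(len(clusters), p=probs)]
        if pool[c]:
            selected.append(pool[c].pop())
    return selected
```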
By adjusting f, users can tailor the sampling process to prioritize either quality or diversity to suit specific task goals. A higher f value tends towards selecting higher-quality instances, offering a versatile toolkit for dataset optimization. 4 Experiments 4.1 Experimental Setup Datasets. We conduct experiments on two widely used benchmark datasets, MMLU (99.8k instances) [45] and WizardLM-evol-instruct-70k (70k instances) [46]. SHED Implementation. We use the K-means algorithm for the model-agnostic clustering and set the number of clusters to 3000. For the proxy-based Shapley calculator, the value function is set as the accuracy of the foundation model fine-tuned on the proxy data. We use LLaMA-7B [3] as the pre-trained foundation model and 10% of the instances in the MMLU test set for calculating the Shapley values of the proxy data. The number of iterations k is set to 10, and the number of instances n removed from the proxy data at each step is set to 60. To conserve time and resources, instruction fine-tuning within the proxy-based Shapley calculator is conducted for one epoch. For optimization-aware sampling, we employ the QOCS and QWCS strategies with the scaling factor set to 1, investigating their efficacy with a variety of target sampling sizes. These implementations are denoted as SHED-QOCS and SHED-QWCS. The target sampling size varies from 1,000 to 20,000 in increments of 1,000, to thoroughly assess the impact of each sampling approach on fine-tuning performance. Baseline Methods. We compare SHED with three baseline methods. Specifically, we implement a random-sampling method, denoted as RS, which randomly selects a subset from a large dataset. We also use the Dataset Quantization method [29], denoted by DQ, and Data Selection with Importance Resampling [41], denoted by DSIR, for comparison. In addition, we also consider fine-tuning models on the entire dataset, denoted as FULL, as a baseline. Evaluation Settings. After obtaining the curated datasets using SHED and the baseline methods, we fine-tune the pre-trained models using each curated subset, respectively. We apply Low-Rank Adaptation (LoRA) for fine-tuning and set the default LoRA rank to 128. For all curated datasets, the instruction fine-tuning was conducted for 3 epochs. We evaluate the performance of fine-tuned models on the MMLU and ARC-challenge tasks using the lm-evaluation-harness testing framework [47]. To better evaluate the human preferences of fine-tuned models, we adopt MT-Bench [48] in our experiments. All the experiments are conducted on two A100 GPUs. 4.2 Experiment Results We summarize the experimental results for SHED and the other baseline methods. For consistency, bold numbers indicate that the corresponding method outperforms the FULL method. Additionally, we underline the best result achieved among all the methods that curate subsets. For each method, the dataset from the curated collections that yields the optimal result across various sample sizes is referred to as the best-selected dataset.
Table 2: Performance comparison of curated datasets of the same size by SHED and baseline methods.
Curated from MMLU: Task | RS | DQ | DSIR | QOCS | QWCS
MMLU | 38.94 | 39.88 | 40.24 | 44.80 | 43.87
ARC-challenge | 45.10 | 46.35 | 45.67 | 47.10 | 47.23
Curated from WizardLM: Task | RS | DQ | DSIR | QOCS | QWCS
MMLU | 33.12 | 33.20 | 33.86 | 35.43 | 34.91
ARC-challenge | 46.01 | 48.71 | 47.66 | 49.47 | 49.92
Table 3: Performance of the best-selected datasets of SHED and baseline methods on the MMLU task.
Method | MMLU | WizardLM
QOCS | 44.80 (10k) | 35.92 (4k)
QWCS | 44.24 (13k) | 35.76 (9k)
RS | 40.87 (15k) | 34.33 (7k)
DQ | 43.50 (7k) | 33.97 (7k)
DSIR | 40.23 (13k) | 34.72 (10k)
Full | 45.56 (99.8k) | 33.16 (70k)
Table 4: Performance of the best-selected datasets of SHED and baselines on the ARC-challenge task.
Method | MMLU | WizardLM
QOCS | 47.10 (10k) | 51.36 (1k)
QWCS | 49.21 (9k) | 50.26 (7k)
RS | 47.07 (13k) | 49.33 (16k)
DQ | 46.50 (3k) | 50.24 (5k)
DSIR | 46.90 (3k) | 48.78 (12k)
Full | 45.99 (99.8k) | 47.95 (70k)
Effectiveness of SHED. Given the datasets generated by SHED and the baseline methods, we fine-tune the LLaMA-7B model on each and evaluate the fine-tuned models on the MMLU and ARC-challenge tasks. We compare the results of the datasets of 10k instances curated by SHED and the baseline methods. As depicted in Table 2, when the number of total sampling instances is fixed (10k), the datasets curated by SHED consistently outperform those chosen by baseline methods. We also compare the performance of fine-tuned models using the best-selected dataset by each method. Table 3 shows the evaluation results for the MMLU task. Our method, SHED-QOCS, demonstrated superior performance on the MMLU dataset compared to baseline methods, achieving the highest results among the curated datasets. Furthermore, SHED-QOCS also led in performance when utilizing the WizardLM dataset. It is notable that SHED-QOCS outperforms the full dataset, achieving a 2.76% higher accuracy. In Table 4, we report the results of the ARC-challenge task. Similarly, among the datasets curated from the MMLU dataset, the selected dataset of our method SHED-QWCS achieves the best result compared with the baseline methods. It also surpasses the full dataset by 3.22%. Within the datasets derived from WizardLM, SHED-QOCS once again curated the dataset of best performance, which surpasses the full dataset by 3.41%. The results demonstrate the effectiveness of SHED. Although SHED demands more computational effort, its strength lies in creating high-performance datasets. Evaluations on MT-Bench. We also use MT-Bench to evaluate the performance of datasets curated by SHED in terms of human preferences. Table 5 demonstrates that the dataset curated by SHED aligns well with human preferences, not only enhancing accuracy but also enabling the model to better understand and follow human instructions, generating answers that are more favorable to humans.
Table 5: MT-Bench evaluation of the best-selected datasets of SHED and baselines.
Curated from MMLU: Model | Full (99.8k) | RS (10k) | QOCS (10k) | RS (13k) | QWCS (13k)
LLaMA-7B | 3.02 | 2.23 | 2.53 | 2.44 | 2.83
Curated from WizardLM: Model | Full (70k) | RS (4k) | QOCS (4k) | RS (9k) | QWCS (9k)
LLaMA-7B | 5.21 | 4.77 | 4.89 | 4.81 | 5.24
Table 6: Transferability evaluation using the best-selected datasets across different models on the MMLU task.
Curated from MMLU: Model | Full (99.8k) | RS (10k) | QOCS (10k) | RS (13k) | QWCS (13k)
LLaMA-13B | 53.22 | 50.04 | 52.95 | 50.12 | 51.54
VICUNA-7B | 49.70 | 48.43 | 50.01 | 47.21 | 48.93
GPT-2 | 24.22 | 23.74 | 26.89 | 24.33 | 25.83
Curated from WizardLM: Model | Full (70k) | RS (4k) | QOCS (4k) | RS (9k) | QWCS (9k)
LLaMA-13B | 45.63 | 45.77 | 45.93 | 45.81 | 46.36
VICUNA-7B | 45.56 | 45.71 | 47.19 | 45.44 | 48.16
GPT-2 | 26.19 | 25.07 | 26.76 | 24.85 | 25.77
Table 7: Transferability evaluation using the best-selected datasets across different models on the ARC-challenge task.
Curated from MMLU: Model | Full (99.8k) | RS (10k) | QOCS (10k) | RS (13k) | QWCS (13k)
LLaMA-13B | 49.31 | 47.31 | 50.43 | 48.83 | 50.68
VICUNA-7B | 44.88 | 44.86 | 45.23 | 43.24 | 44.91
GPT-2 | 19.45 | 18.77 | 19.81 | 19.02 | 20.05
Curated from WizardLM: Model | Full (70k) | RS (4k) | QOCS (4k) | RS (9k) | QWCS (9k)
LLaMA-13B | 54.09 | 53.17 | 55.20 | 54.11 | 55.63
VICUNA-7B | 49.91 | 47.72 | 50.26 | 47.98 | 48.72
GPT-2 | 19.19 | 17.98 | 19.28 | 18.72 | 19.54
The dataset constructed through the SHED-QWCS method, sampled from WizardLM, achieved a remarkable score of 5.24 on MT-Bench. Transferability Evaluation of Curated Datasets across Various Models. To evaluate the transferability of datasets curated by SHED, we first apply SHED to select data from the MMLU and WizardLM datasets based on LLaMA-7B. Then, we fine-tune LLaMA-13B, Vicuna-7B, and GPT-2 using the best-selected datasets curated by SHED and the baseline methods. As summarized in Table 6 and Table 7, datasets curated by SHED exhibit robust performance across various models, demonstrating their transferability and applicability across various tasks and even different model families. The strong transferability of the curated datasets indicates that SHED identifies generally high-quality data. In addition, the computational cost for data selection can be significantly amortized across various models. Impact of Number of Clusters. The number of clusters in K-means affects the computational cost needed for Shapley value calculations and the relevance of proxy data to its cluster. An increase in the number of clusters leads to smaller and more homogeneous groups, thereby improving the proxy data's representativeness for its respective clusters. However, this comes at the cost of increased computational overhead, highlighting a balance that must be struck to optimize both efficiency and representativeness. In this experiment, we evaluate the best-selected dataset by SHED across varying numbers of clusters using LLaMA-7B on the MMLU test set. Guided by the findings in [49], our investigation begins with a baseline cluster count of $C = \sqrt{|D|}$. We present the computation time for Shapley value computations across different settings, maintaining consistency with the experimental setup outlined in Section 4.1. Figure 3: Performance of subsets with varying numbers of clusters in SHED: (a) subsets selected from MMLU; (b) subsets selected from WizardLM; (c) computational time for one iteration of Shapley value calculation. As Figure 3(a) and Figure 3(b) show, the results reveal that the performance improvements of the curated datasets reach a plateau when the number of clusters exceeds $3\sqrt{|D|}$. Meanwhile, Figure 3(c) demonstrates a proportional increase in computation time for Shapley value calculations as the number of clusters rises. Notably, at very low cluster counts (e.g., below 1000), Shapley value computation times are largely dictated by the evaluation, with the time spent remaining relatively constant across varying datasets. In such cases, the computation time is more significantly affected by the size of the pre-trained model than by the number of clusters itself. Given the transferability of datasets curated using SHED, it is feasible to employ a smaller foundation model than the target model within the proxy-based Shapley value calculator. In doing so, the computation overhead for evaluation can be significantly reduced, making SHED a practical approach in real-world settings.
Figure 4: Performance of subsets with varying iterations in SHED: (a) subsets selected from MMLU; (b) subsets selected from WizardLM. Impact of Number of Iterations on the Proxy-based Shapley Calculator. The precision of Shapley value estimates increases with the number of iterations k, providing a more accurate measurement of each data sample's contribution to model performance. However, this increment also leads to a proportional rise in computational cost, creating a trade-off between computational efficiency and the accuracy of Shapley value estimations. To seek the optimal number of iterations for Shapley value calculations, we analyzed the performance of datasets curated by SHED under varying iteration settings. The experiments are conducted with the LLaMA-7B model on the MMLU test set, following the experimental settings detailed in Section 4.1. Figures 4(a) and 4(b) illustrate that the performance of the datasets curated by QOCS and QWCS is stable once the iteration number surpasses 10. This result highlights the stability of our methods beyond 10 iterations, showing that further iterations beyond this threshold do not significantly improve dataset quality. Given the balance between computational cost and performance, setting the number of iterations to 10 is recommended for optimal efficiency and robustness. 5 Discussion 5.1 Data Selection for Multiple Tasks. In our experiments, we thoroughly evaluate methods regarding accuracy. It is notable that our framework is readily adaptable. By setting different value functions v(P), SHED can select a subset using arbitrary criteria. This adaptability allows SHED to customize its data selection process to produce a small dataset while improving specific objectives, such as model fairness [50]. In particular, if we aim to curate a dataset using the common fairness notion of demographic parity, we can define $v(P)$ as the disparity in positive prediction rates between groups with protected attributes (e.g., males vs. females), calculated as the negative absolute difference $-|X_{\mathrm{Male}} - X_{\mathrm{Female}}|$, where $X_{\mathrm{Male}}$ and $X_{\mathrm{Female}}$ are the positive prediction rates for the male and female groups, respectively. 5.2 Complexity Analysis We assume that the running time required to fine-tune the model using a single instance is denoted by $t$, and the time needed to evaluate the model on a test set consisting of $m$ instances is represented by $T_m$. Let $C$ denote the number of clusters, $n$ the number of instances within a group, and $k$ the number of iterations utilized in the proxy-based Shapley calculator, as illustrated in Section 3.2. The total number of evaluations and fine-tuning runs per iteration would be proportional to $\frac{C}{n}$. For simplicity, we assume that $C$ is evenly divisible by $n$. Given $k$ iterations, the overall time complexity of this approximation method can be expressed as $O\!\left(\frac{Ck}{n}\left[\frac{(C+n)t}{2} + T_m\right]\right)$. 6 Conclusion In this work, we introduced SHED, an innovative Shapley value-based framework designed to refine datasets for the efficient fine-tuning of LLMs, addressing the computational hurdles commonly associated with Shapley value calculations through a novel clustering and proxy-based approach.
Through extensive experiments conducted on benchmark datasets such as MMLU and WizardLLM, we have shown that LLMs fine-tuned with datasets curated by SHED not only match but, in some cases, surpass the performance of those trained with the original, larger datasets. Significantly, SHEDcurated datasets have demonstrated a high degree of transferability, maintaining robust performance across various models. Furthermore, SHED\u2019s flexibility and efficiency underscore its potential to revolutionize LLM fine-tuning by allowing for the creation of compact, high-quality datasets. 7 Limitations This research, while presenting significant advancements, encounters certain limitations that merit attention for future work. Firstly, the proposed method\u2019s reliance on clustering and proxy data may inadvertently diminish data diversity, potentially overlooking rare but critical samples. To address this, future research will investigate the adoption of novel clustering algorithms specifically designed to enhance the visibility and inclusion of rare samples. Secondly, the framework\u2019s current objective focuses predominantly on model performance, which may inadvertently lead to model bias. This singular focus overlooks the equally important aspect of model fairness, which is crucial for ensuring that the models perform equitably across diverse groups. Recognizing this, our framework is designed to be extensible and objective-agnostic, laying the groundwork for incorporating additional criteria. In subsequent research, we plan to integrate considerations of model fairness alongside performance. 8 Ethics Statement In this work, we present SHED, a generic data refinement framework for data refinement utilizing Shapley values, aimed at assembling a compact yet effective dataset to boost the efficiency of the fine-tuning process of LLMs. This study carefully avoids ethical issues beyond standard AI concerns, leveraging properly cited publicly available Internet text data. This approach ensures adherence to ethical data use standards, reflecting our commitment to responsible research practices in the AI field."
},
{
"url": "http://arxiv.org/abs/2404.15104v2",
"title": "Identifying Fairness Issues in Automatically Generated Testing Content",
"abstract": "Natural language generation tools are powerful and effective for generating\ncontent. However, language models are known to display bias and fairness\nissues, making them impractical to deploy for many use cases. We here focus on\nhow fairness issues impact automatically generated test content, which can have\nstringent requirements to ensure the test measures only what it was intended to\nmeasure. Specifically, we review test content generated for a large-scale\nstandardized English proficiency test with the goal of identifying content that\nonly pertains to a certain subset of the test population as well as content\nthat has the potential to be upsetting or distracting to some test takers.\nIssues like these could inadvertently impact a test taker's score and thus\nshould be avoided. This kind of content does not reflect the more\ncommonly-acknowledged biases, making it challenging even for modern models that\ncontain safeguards. We build a dataset of 601 generated texts annotated for\nfairness and explore a variety of methods for classification: fine-tuning,\ntopic-based classification, and prompting, including few-shot and\nself-correcting prompts. We find that combining prompt self-correction and\nfew-shot learning performs best, yielding an F1 score of 0.79 on our held-out\ntest set, while much smaller BERT- and topic-based models have competitive\nperformance on out-of-domain data.",
"authors": "Kevin Stowe, Benny Longwill, Alyssa Francis, Tatsuya Aoyama, Debanjan Ghosh, Swapna Somasundaran",
"published": "2024-04-23",
"updated": "2024-05-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"I.2.7"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Large language models (LLMs) have become ubiq- uitous in the space of natural language generation (NLG) due to recent advances in model capability (Minaee et al., 2024). However, these improve- ments come with the potential for various negative societal impacts. These negative impacts include \u2217 * Work done while at ETS 1Code and dataset available at https:// github.com/EducationalTestingService/ fairness-detection. Q: You went to one of The Eras Tour shows, didn\u2019t you? Is \u201cYes\u2014I love Taylor Swift!\u201d the right answer? Who is that? (A) (B) Q: You went to the music concert, didn\u2019t you? Ah, I see the correct answer: \u201cYes\u2014it was a great performance!\u201d Figure 1: In (A), the generated question requires knowl- edge of what The Eras Tour is to identify the correct answer. Even native English speakers would likely not be able to identify the correct response if they were not familiar with Taylor Swift. In (B), the generated ques- tion does not require specific background knowledge, so test takers would not need to use specialized knowledge to identify the correct answer. Our goal is to identify and filter content like (A) to help ensure fair testing. the generation of misinformation/propaganda, al- location harms of systems providing benefits only to certain groups of people, and representational harms revolving around bias and stereotyping. Nat- ural language processing (NLP) models\u2013including LLMs\u2013are known to reflect and repeat harmful bi- ases and stereotypes (Hosseini et al., 2023; Bender et al., 2021; Hovy and Prabhumoye, 2021; Nadeem et al., 2021), and research into how the community addresses the societal harms engendered by NLP technology is critical (Wang et al., 2024; Dev et al., 2022; Blodgett et al., 2020). Many of these types of bias in language gen- eration are well-studied. Biases based on gen- der (Nemani et al., 2024; Devinney et al., 2022; Strengers et al., 2020; Wan et al., 2023), race (Das and Balke, 2022; Field et al., 2021), nationality (Venkit et al., 2023), and disability (Venkit et al., 2022) have been identified in language models, and many modern LLMs incorporate deliberate safe- guarding measures in an attempt to alleviate these arXiv:2404.15104v2 [cs.CL] 1 May 2024 issues (OpenAI et al., 2023; Anil et al., 2023). In the area of language assessment, there exists a tangential set of issues regarding fairness to test takers and score users (Educational Testing Ser- vice, 2022). These issues are particularly danger- ous when applied to language learning and assess- ment; tests with inherent biases have the potential to compromise the validity of the test. Therefore, content that is irrelevant to the skills and abilities the test is intended to measure should be avoided (Figure 1). This includes content that could dis- advantage anyone based on their culture, location, or experiences (e.g., focusing on barbeques on the 4th of July could disadvantage test-takers who are unfamiliar with U.S. culture); their emotions (e.g., health hazards and diseases can evoke negative emotional responses among some people); their worldviews (e.g., luxury cruises or designer cloth- ing may make some people feel excluded); and other factors. We refer to these types of issues as fairness issues. 
Knowing how to better understand, detect, and mitigate bias related to fairness in NLG not only raises awareness of the issue but also en- ables researchers and developers to create more fair and inclusive NLP systems, evaluation metrics, and datasets in the language assessment space. Our goal is to build a system for identifying fairness-violating content in automatically gener- ated texts. It is of course still necessary to have human review and revision of the content, but by adding a filtering process after generation and be- fore manual review, we can significantly reduce the time taken for reviewing and the chance that fairness-related content is mistakenly allowed. To accomplish this goal, we explore four different ap- proaches: fine-tuning, topic-based classification, few-shot prompting, and prompt-self correction. Our methods need to adapt to new contexts: our definition of fairness is operationally defined by the particular testing context, and may not apply to others, so the guidelines, prompts, and models may not apply generally to new contexts. For this reason, we assess our methods on two held-out test sets and analyze how our methods could be applied to new contexts. We release our resulting dataset, consisting of 620 samples, of which 19.4% contain fairness issues2, to facilitate improvements in the fairness-detection community. 2Each sample we used was rejected for deployment in actual tests. Using rejected samples for our experiments allows us to release the dataset: accepted stimuli cannot be made public. Our contribution consists of the following: 1. We define a new fairness problem around is- sues faced in developing fair testing content. 2. We release a dataset of 601 samples for use in evaluating fairness detection methods. 3. We analyze the relative effectiveness of a vari- ety of well-known classification techniques. 4. We provide a new mechanism for prompting self-correction, which yields significant im- provements over other prompting strategies. We start with data collection and analysis. We collect 620 samples over seven different types of content generated using LLM prompting. We anno- tate each sample and assess whether it contains a fairness issue, and if it does, whether that fairness issue pertains to knowledge, skill, or expertise or emotion (more on these categories and how they relate to fairness in Section 3). We then use this dataset to experiment with a series of models for classifying fairness issues. We show that fine-tuning and filtering by topic can be cheap and effective options, although prompting strategies with GPT4 tend to be more effective. Few-shot prompting along with self- correcting prompt strategies yield strong perfor- mance with relatively little data, and combining both yields the best results on our in-domain test set, with an F1 score of .773. Interestingly, using a shorter, more generic prompt combined with our self-correction method yields the best result on our out-of-domain test set, with an F1 score of .462.",
"main_content": "Bias, fairness, and responsible AI has been at the forefront of education technology, with contemporary research focusing on automated scoring, writing assistance, and other nuances of applying NLP technology to this sensitive domain (Mayfield et al., 2019; Loukina et al., 2019). Baffour et al. (2023) find that assisted writing tools may exhibit moderate bias depending on the task, while Wambsganss et al. (2023) found no significant gender bias difference in writing done with and without automated assistance. Wambsganss et al. (2022) explore bias in educational tools for German peer review, and Kwako et al. (2023, 2022) propose novel methods for detecting bias in automated scoring algorithms. We are specifically interested in applications to language generation, and there is also substantial work in using LLMs and other NLP technology to generate content for educational assessments (Laverghetta Jr. and Licato, 2023; Gonzalez et al., 2023; Heck and Meurers, 2023; Uto et al., 2023; Tack et al., 2023; Stowe et al., 2022). However, this work largely fails to address bias and fairness issues in content generation. Our work is specifically focused on fairness issues in automatically generated language testing content. In the context of language models, fairness and bias have emerged as critical concerns. Existing detection and mitigation tools generally diverge from our work: some are overly domain-specific like the focus on news articles in Raza et al. (2024), while others are focused on assessing issues within the language models and datasets (Bellamy et al., 2018), rather than the outputs. Other works rely on retrospective metrics that assess a model\u2019s fairness through aggregated predictions and subgroup analysis, and/or focus on classification rather than generation problems (Weerts et al., 2023; Wi\u00b4 sniewski and Biecek, 2022; Saleiro et al., 2019). Although these tools enhance transparency and accountability for evaluating language model issues, they fundamentally differ from our bias detection approach tailored for evaluating generated text in real-time for a production environment. 3 Problem Motivation In the language testing context, we face a unique set of fairness challenges in generating content. Specifically, fair testing requires content that does not contain irrelevant factors that negatively impact the assessment of a test taker. A primary concern is to ensure that the test content measures only what it is intended to measure. For English-language proficiency tests, this means that the test must measure only the skills and abilities needed to communicate effectively in English, and not other constructs such as background knowledge of specific jobs, events, or cultures. Consider the following question and an example of a response to that question: \u2022 Question: You went to one of The Eras Tour shows, didn\u2019t you? \u2022 Response: Yes\u2013I love Taylor Swift! If the task were to identify whether the response is an appropriate response to the question, even some native English speakers would likely get it wrong. This is because, in addition to needing to know features of English proficiency (in this case, the ability to infer gist, purpose, and basic context based on information stated in short spoken texts), one would also need to know about Taylor Swift and her concert tour. Thus, those familiar with Taylor Swift would have an unfair advantage in identifying the correct answer. 
Eliminating the fairness issue for this type of question would result in the following revision: \u2022 Question: You went to the music concert, didn't you? \u2022 Response: Yes\u2013it was a great performance! In addition to avoiding testing outside knowledge, it is also important that language proficiency tests do not include content that is offensive or disturbing. For example, the following question and response refer to serious health issues, which have the potential to evoke deep negative emotions. \u2022 Question: Did you hear that Luis has been hospitalized? \u2022 Response: No, but I knew he had a bad case of Covid-19. Content like this that could prompt strong feelings of anger, sadness, or anxiety should be avoided because it could derail a test taker's concentration, resulting in lower performance on the test. How a test taker interacts with this test content may tell more about their ability to concentrate under emotional strain than about their ability to identify a response's linguistic appropriateness. Eliminating this construct-irrelevant content helps to ensure that the test measures only the skills and abilities it is intended to measure. 4 Methods Our goal is to detect whether a generated stimulus contains an issue as a binary classification task. We build a dataset of texts labeled for potential fairness issues and explore potential detection methods. 4.1 Dataset Our goal is to identify and mitigate these fairness issues in testing content. We build a dataset spanning seven different item or task types from standardized English language proficiency tests, all generated using GPT4 (OpenAI et al., 2023). Item and task types can contain up to four components: the stimulus (main text the question is based on), stem (question asked about the stimulus), key (the correct answer to the stem), and distractors (a set of alternative answers that are incorrect).
Table 1: Item/task types and annotations for fairness issues. Each has a binary annotation (fairness issue/no fairness issue) and is tagged as containing a KSA issue or an Emotion issue. Types marked with '*' are held out for testing as an \"out-of-domain\" dataset, and not used for any training/evaluation.
Item/Task Type | Total | Fairness | KSA | Emotion
Read a Text Aloud | 304 | 55 | 24 | 39
Talks | 91 | 12 | 6 | 6
Text completion | 84 | 26 | 11 | 19
Respond to Questions Using Information Provided | 56 | 10 | 5 | 5
*Conversations | 41 | 8 | 5 | 4
*Respond to a Written Request | 25 | 7 | 6 | 1
Total | 601 | 118 | 57 | 74
Fairness issues are possible in all components, but we focus on only the stimuli, which are typically the longest, most feature-rich components of the test content, and thus are most likely to reflect fairness and bias issues. Issues in the stimuli can leak through to other components, making the stimulus the source of the majority of fairness issues. Annotation For each stimulus, we aim to identify whether or not the stimulus contains fairness/bias issues, and if so, what type of issue is present. We start with a dataset of automatically generated stimuli. These stimuli were generated using prompting and different versions of GPT: the prompts were iteratively improved with the goal of improving the overall quality of the stimuli. During this process, each stimulus was evaluated by the test's content development experts. For this work, the stimuli used were rejected by the reviewers, allowing us to provide them publicly and explore their use for fairness detection.
These rejected stimuli typically have the relevant language and structure, so our goal is to identify which of those stimuli were rejected (at least in part) for fairness reasons. We employ content development experts to annotate these samples, yielding a binary classification between non-fairness and fairness-related rejections. However, there are different ways for bias and fairness considerations to impact individual stimuli. To better understand and mitigate these issues, we separated them into two main categories: \u2022 Knowledge, Skill, and Ability (KSA): content that contains construct-irrelevant information that may be unavailable to test takers in different environments or with different experiences and abilities. These include content with reference to specific skills, regionalisms, or unfamiliar contexts. \u2022 Emotion: content in which language, scenarios, or images are likely to cause strong emotions that may interfere with the ability of some groups of test takers to respond. These include offensive, controversial, upsetting, or overly negative content. Each sample that is flagged for fairness is annotated for one or both of these categories. This allows further analysis to address these specific fairness categories and to better understand the impact of specific fairness issues. Our dataset is comprised of stimuli from seven different item and task types: a summary of the collected data is shown in Table 1, with examples for each type in Appendix A. These stimuli represent various structures, depending on the item/task type: Read a Text Aloud, Talks, and Text Completion stimuli are short text paragraphs, while Conversation stimuli involve turns between two or more speakers. Respond to Questions Using Information Provided and Respond to a Written Request task stimuli are structured content: the generation process creates text that is filled into a structured template; we use only the raw text. Overall we collect 601 samples, of which 19.6% exhibit evidence of fairness issues, with 9.5% reflecting KSA issues and 12.3% Emotion issues. We build a validation set of 48 samples reflecting a balance of the item and task types from the training types (Read a Text Aloud, Talks, Text Completion, and Respond to Questions Using Information Provided), and an equal-sized \"in-domain\" dataset from these stimuli is held separately for testing. These datasets contain an even number of positive and negative classes for fairness evaluations. As our goal is to be able to identify positive cases where fairness issues exist, we intend for our validation and test sets to have a substantial number of this class. We use the two remaining types (Conversations, Respond to a Written Request) as a separate \"out-of-domain\" test set to evaluate performance on unseen content. 4.2 Experiments We experiment with standard transformer-based classification baselines, topic detection, and a variety of GPT4-based prompting, including methods for automatic prompt-self correction. We describe each method below: each is tuned on the validation set, and we report the best model performance on that set. We then evaluate model performance on two separate test sets in Section 5. Classification with Fine-Tuning We fine-tune standard pre-trained transformer models for sequence classification. We experiment with bert-base-cased, bert-large-cased (Devlin et al., 2019), roberta-base, (Liu et al., 2019) and deberta-base (He et al., 2021) models. 
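A minimal sketch of this fine-tuning baseline follows: a pre-trained transformer with a binary classification head trained on (stimulus, fairness-label) pairs. The data-loading helper, batch size, and epoch count are illustrative assumptions; the 2e-5 learning rate matches the value reported in the hyperparameter search below.

```python
# Sketch only: fine-tune a pre-trained transformer for binary fairness classification.
# Model choice and most hyperparameters are placeholder assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)


def finetune(train_texts, train_labels, model_name="microsoft/deberta-base"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

    ds = Dataset.from_dict({"text": train_texts, "labels": train_labels}).map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="fairness-clf",
        learning_rate=2e-5,              # value reported in the paper's search
        num_train_epochs=3,              # the paper searches over 2-4 epochs
        per_device_train_batch_size=8,   # assumed
        logging_steps=10,
    )
    Trainer(model=model, args=args, train_dataset=ds).train()
    return tokenizer, model
```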
We perform a hyperparameter search on our validation set for each model, finding that a learning rate of 2e-5 over 2-4 epochs generally performs best, and report results using the model with the best performance. Topic-Based Filtering We observe that many samples are flagged for fairness due to the topic of the material: many topics contain content that violates our fairness guidelines directly, while others are simply more likely to include unacceptable content. Motivated by this, we explore topic detection as a method for identifying fairness issues. We first identify topics found within the data. We use the topic modeling framework BERTopic (Grootendorst, 2022) to extract topic representations from two sources of training data: (1) all samples from the training partition of our dataset and (2) our fairness guidelines. In this method, SentBERT (Reimers and Gurevych, 2019) converts each training document into a dense vector representation; these representations are then grouped by semantic similarity, creating clusters that represent different topics. For each of the two training sets, topic descriptions made up of the most important words in a cluster are generated for the clusters containing at least five supporting documents. We manually assess each topic description, marking themes that should be avoided based on their relation to known fairness issues and themes that are acceptable. Finally, for each unseen sample in the test and validation datasets, we make predictions based on the single nearest topic cluster. If a sample falls within the boundaries of restricted topics, it is classified as a violation. Results for these methods are shown in Table 2.
Table 2: Results for fine-tuning (above) and topic detection (below) on the validation set.
Fine-tuning: Model | Prec | Rec | F1
bert-base-cased | 1.00 | 0.29 | 0.45
bert-large-cased | 0.92 | 0.50 | 0.65
roberta-base | 0.92 | 0.50 | 0.65
deberta-base | 1.00 | 0.63 | 0.77
Topic-based Filtering: Model | Prec | Rec | F1
Topic-data | 0.79 | 0.46 | 0.58
Topic-guidelines | 1.00 | 0.04 | 0.10
The fine-tuned bert-based models perform fairly well, with F1 scores for bert-large-cased and roberta-base both around 0.65, and deberta-base showing exceptional performance with an F1 score of 0.77. The Topic-Based Filtering models are worse, with the data-based system yielding an F1 score of 0.58. In all cases, precision is much higher than recall; these models are conservative with predictions. 4.3 Prompting We initially experiment with five different \u201cbase\u201d prompts. We pair these with stimuli and use GPT4 to return \u201cTrue\u201d if the stimulus contains a fairness issue and \u201cFalse\u201d otherwise. These prompts represent different strategies: \u2022 GENERIC (SHORT), 53 tokens: Drawing from general knowledge of fairness and bias in LLMs, we write a generic prompt designed to combat attested LLM biases. This prompt is designed as a weak baseline. Our goal is to determine if a short, simple prompt can capture relevant issues, and whether or not it can be easily improved via self-correction or few-shot learning (Sections 4.3 and 4.3). \u2022 GENERIC (LONG), 191 tokens: This is a longer, more detailed version of the above, containing nearly 200 tokens. \u2022 GUIDELINE (SHORT), 197 tokens: We craft a prompt based on guidelines for writing fair assessments. Using documentation that defines what constitutes fair assessment items and how to write them, we build a prompt capturing the important components of a fair question.
The goal of this prompt is to determine whether human-written guidelines based on theoretical issues will accurately capture these issues in real data. 3Prompts in Appendix B. \u2022 GUIDELINE (LONG) 1081 tokens: We construct a \u201clong\u201d version of the previous guidelines by summarizing the entire fairness guidelines with the help of GPT4, asking for concise versions of relevant sections and combining them into a document that fully captures all the relevant aspects of the guidelines. This prompt is our longest, but still fully based on documentation. The goal of this prompt is to determine the efficacy of a longer, more comprehensive prompt. \u2022 DATA-DRIVEN 142 tokens: We craft a prompt based on annotations in our data. We identify which topics and language cause fairness issues and build the prompt to reflect how they might generalize to unseen item/task types and topics. This method is hypothesized to be the most effective, as it will address known issues in the data but may not extend to unseen data, as it is built specifically around the given training samples. These prompts are run through GPT4 via the Azure interface (OpenAI et al., 2023). Each prompt was updated manually to correct obvious potential issues. Our goal here is not to overoptimize prompt writing, which could lead to overfitting the validation set, but rather to develop a generic prompt likely to be effective for both known fairness issues and novel issues possible in generated content. Initial experiments on the validation set revealed two insights: the GENERIC (LONG) prompt performs similarly to the GENERIC (SHORT) in all cases, and the GUIDELINE (LONG) prompt is ineffective. We therefore focus our efforts on the three other prompts: GENERIC (SHORT) GUIDELINE (SHORT) and DATA-DRIVEN. A primary issue in identifying the fairness issues we are concerned with is that they are only applicable in the context of certain testing environments. The guidelines and data we use are specific to a certain test; for other tests, other environments, and other contexts, different definitions of fairness and different alignments will be applicable. For this reason, the ability to adapt to new environments is critical in applying fairness constraints. We aim to build a system that can learn to adapt to new guidelines with minimal information. We explore several methods to improve the robustness of prompting: few-shot prompting and self-correcting prompting techniques. Few-Shot Prompting We follow few-shot prompting strategies that have been successful for LLM classification tasks (Logan IV et al., 2022; Brown et al., 2020). For each of our base prompts, we provide a set number n of additional samples for both the positive and negative classes. Each of the samples contains the text of the stimulus along with a brief description of why it does or does not violate our fairness guidelines. Prompt Self-Correction We leverage our annotated data using a prompt self-correction strategy (Figure 2). We start with a base prompt and iterate through a batch of samples from our training data. For each sample, we insert text into the prompt, perform a pass through GPT4, and evaluate the resulting output. If it is correct, we continue. 
If not, we instruct the LLM to make adjustments to the prompt, either to disallow the sample (when the error is a false negative) or to allow it (when the error is a false positive), using the following prompts: \u2022 Positive sample incorrectly classified as negative: The previous text was classified as not having any fairness issues, returning False. However, it did have an issue. Think about what topics or issues in the text might be unfair. Improve the original fairness instructions by adding or modifying restrictions so to account for this text. \u2022 Negative sample incorrectly classified as positive: The previous text was classified as having a fairness issue, returning True. However, this item has no issue. Think about why this text was classified as unfair. Improve the original fairness instruction to allow this item by removing or revising restrictions. This process is run up to e epochs, or stopped early if accuracy reaches 1 or the predictions are stable. We run over b batches of n samples randomly drawn from the training data, using the best-scoring prompt from the final batch for evaluation. (For an example of the process, see Appendix C.) Combining Few-Shot and Self-Correction Few-shot learning and self-correction are inherently complementary, as the self-correction method returns an optimized prompt and few-shot learning reinforces it by providing in-domain examples. We combine them by concatenating additional few-shot learning samples to the self-correcting prompts. Figure 2: Self-correcting prompt strategy. Data is run through the prompt. If the result is correct, we continue; otherwise, we instruct the LLM to correct the prompt. Figure 3: F1 scores on the validation set for each prompting method. Note that for GENERIC (SHORT) the F1 score was 0. Full results in Appendix D. For each of these improvements to prompting, we perform a hyperparameter search over the number of total training/few-shot samples and batch size. We experiment with the GENERIC (SHORT), GUIDELINE (SHORT), and DATA-DRIVEN prompts. (Experiments with the longer guideline-based prompt were unsuccessful: the LLM invariably returns either a commentary on a single testing procedure or rewrites the prompt entirely to handle a single sample.) We hypothesize the GENERIC (SHORT) and GUIDELINE (SHORT) prompts should be able to benefit quickly from adaptive methods, while the DATA-DRIVEN prompt should be nearly optimized, as it is already based on observations from the data. We use the validation set to tune the prompts and parameters to optimize the F1 score for each method. Note that for all prompting strategies, the temperature is set to zero; the prompts should only return True or False. Figure 3 shows the best results on the validation set. We explore each model's effectiveness on unseen data in Section 5. The base generic prompt fails, as the traditional bias and stereotyping issues are less likely to occur in our generated content, and the fairness issues we are concerned with are unlikely to be deemed as problematic out of context. Using a simplified version of our guidelines yields a 0.36 F1 score for identifying fairness issues. The DATA-DRIVEN prompt, based on observations in the training data, yields much better results (0.70 F1). However, this may not extend well to novel cases, as the prompt is driven purely by our validation data.
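The self-correction loop described above can be sketched as follows. The OpenAI model identifier, the classification prompt framing, and the stopping logic are illustrative assumptions; the two correction instructions paraphrase the ones quoted in the paper, and the original experiments ran through the Azure interface rather than the plain OpenAI client.

```python
# Sketch only: prompt self-correction loop (cf. Figure 2). Model name, prompt
# framing, and stopping rule are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()


def ask(messages):
    resp = client.chat.completions.create(model="gpt-4", temperature=0, messages=messages)
    return resp.choices[0].message.content.strip()


def classify(prompt, stimulus):
    answer = ask([{"role": "user",
                   "content": f"{prompt}\n\nText:\n{stimulus}\n\nAnswer True or False."}])
    return answer.startswith("True")


def self_correct(prompt, batch, epochs=3):
    """batch: list of (stimulus, has_fairness_issue) pairs drawn from the training data."""
    for _ in range(epochs):
        errors = 0
        for stimulus, label in batch:
            if classify(prompt, stimulus) == label:
                continue
            errors += 1
            if label:   # false negative: make the instructions more restrictive
                fix = "The text above does have a fairness issue. Revise the fairness instructions to disallow it."
            else:       # false positive: make the instructions less restrictive
                fix = "The text above has no fairness issue. Revise the fairness instructions to allow it."
            prompt = ask([{"role": "user",
                           "content": f"Instructions:\n{prompt}\n\nText:\n{stimulus}\n\n{fix}\nReturn only the revised instructions."}])
        if errors == 0:
            break
    return prompt
```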
Few-shot learning displays some interesting properties: we see significant improvements across all three prompts, using three samples. (This yielded the best results across all validation runs). Even the minimal GENERIC (SHORT) prompt rises to over 0.60 F1 with minimal few-shot prompting. We see small improvements over the baseline using prompt self-correction for all three prompts. For the DATA-DRIVEN prompt, results using selfcorrection equal those using few-shot learning. This aligns with previous work showing that language models themselves tend to write better prompts (Fernando et al., 2023): after only a few iterations of self-correction, the DATA-DRIVEN prompt surpasses the performance of a humanwritten prompt, even in cases where the human describes the dataset explicitly. Combining self-correction and few-shot learning yields improvements over base prompts and fewshot prompting alone. This approach yields the best results for all three prompts, with the bestperforming model being the DATA-DRIVEN prompt with self-correction and few-shot learning. This may be due to overfitting, however: the prompt is written to reflect the data. To explore the efficacy of these methods on unseen data, we evaluate them on our two held-out test sets. 5 Test Results The previous experiments describe our attempts to identify the best-performing model for fairness classification on our validation set. Our goal is to develop a system that generalizes. For this, we evaluate the best-performing of the above model types on two held-out test sets: 1. In-domain: The 48 held-out samples drawn from the item/task types used for training. 2. Out-of-domain: All samples (66) from the two held-out types: Conversations, Respond to a Written Request. Figure 4: F1 scores on two test sets for each proposed method. Note that for bert-large-cased and GENERIC (SHORT), the scores were 0.00 on the unknown test set. Full results in Appendix E. Figure 4 shows the results on the test set. We evaluate the best-performing models of each type: fine-tuned transformer models, topic-based classification, base prompts, few-shot learning, self-correction, and combining few-shot and selfcorrection. We here note some key facts about model performance on our test set. Best Performance Combining the DATADRIVEN prompt with self-correction and few-shot learning performs the best on the in-domain test. This shows this is the best approach if there is available data and expertise to support hand-crafting a DATA-DRIVEN prompt and running self-correction. On the out-of-domain data, the smaller initial prompts, GENERIC (SHORT) and GUIDELINE (SHORT) both outperform the DATA-DRIVEN prompt, perhaps due to their more generic nature: the DATA-DRIVEN prompt is too specific to this dataset, and understandably doesn\u2019t generalize well. The self-correct+few-shot methodology performs the best in both cases: few-shot learning alone is better than self-correction alone, but the combination is typically the best. Strong Results from Small Models Traditional transformer-based classification performs remarkably well, especially in generalizing to the out-ofdomain data. On the in-domain data, the best performing model deberta-base performs on par with the best base prompting model (0.58 compared to 0.60 F1 score), although this is a significant drop from the validation performance of 0.77, and performs quite poorly on out-of-domain data (0.20), indicating the model may overfit during training. 
On the out-of-domain data, roberta-base performs nearly as well as the best-performing overall model, just 0.04 behind the GENERIC (SHORT) prompt with self-correction and few-shot learning. If the goal is to quickly and cheaply build a system that is applicable to a wide variety of domains, there appears to be significant value in relying on these relatively small transformer-based classification models. The Topic (data) approach is also competitive on out-of-domain data, and does not even require model training; it lags only slightly behind the roberta-base model. Self-Correction We found significant success in our proposed self-correction mechanism. While it typically does not outperform few-shot learning in isolation, the methods are naturally complementary, and the combination often yields the best-performing model. In examining the models' self-corrections, we find that when asked to become more restrictive, the model tends to add sentences with new constraints, which nicely reflect the issue that was missed. When asked to become less restrictive, the model tends to add hedges to currently existing constraints. In our experiments, we noted some issues. First, when run using too many samples or batches, the prompts tend to degrade: once the LLM makes an error and returns a prompt that doesn't match the specifications, the run needs to be aborted. Even when the LLM sticks to the instructions, after many iterations the prompts become unwieldy and self-contradictory, and performance rapidly declines. We suggest using somewhere between six and 20 total samples for prompt self-correction; it is best to avoid making corrections indefinitely.
Table 3: Recall scores for KSA and Emotion-labeled data across both test sets.
Model Type | Model | KSA | Emotion
Fine-tuned | bert-base-cased | 0.07 | 0.57
Fine-tuned | bert-large-cased | 0.00 | 0.00
Fine-tuned | roberta-base | 0.06 | 0.56
Fine-tuned | deberta-base | 0.08 | 0.75
Topic-based | Data | 0.26 | 0.59
Topic-based | Guideline-based | 0.20 | 0.06
Base Prompting | GENERIC (SHORT) | 0.00 | 0.00
Base Prompting | GUIDELINE (SHORT) | 0.29 | 0.09
Base Prompting | DATA-DRIVEN | 0.47 | 0.50
Self-correction | GENERIC (SHORT) | 0.35 | 0.30
Self-correction | GUIDELINE (SHORT) | 0.35 | 0.27
Self-correction | DATA-DRIVEN | 0.47 | 0.41
Few-shot | GENERIC (SHORT) | 0.18 | 0.24
Few-shot | GUIDELINE (SHORT) | 0.30 | 0.24
Few-shot | DATA-DRIVEN | 0.36 | 0.56
Few-shot + Self-correction | GENERIC (SHORT) | 0.18 | 0.21
Few-shot + Self-correction | GUIDELINE (SHORT) | 0.23 | 0.21
Few-shot + Self-correction | DATA-DRIVEN | 0.24 | 0.59
Use-Cases and Metrics We here report F1 score as a balance between precision and recall. (For full scores, see Appendix E.) Depending on the end use case, other metrics may be more appropriate. In our case, we advocate for always including humans in the evaluation process to ensure that only fair content is accepted. We then value both precision (as we do not want to excessively flag content for fairness issues, which could reduce diversity) and recall (as we do not want to let fairness issues through). Optimizing for recall seems reasonable, as it is likely more important to prevent fairness issues from being released, but it is critical to note that no system is perfect: even optimizing for recall, these fairness issues are likely to persist, and the models should not be used as failproof safeguards. KSA and Emotion We evaluate performance on the test set for the two subcategories: Knowledge, Skill, and Ability (KSA) and Emotion (Table 3). The deberta-base model performs exceptionally well on the Emotion subcategory, capturing 75% of the fairness-flagged samples.
Data-based methods (the DATA-DRIVEN prompts and Topics from Data, both at 0.59) also perform well on the Emotion subcategory, likely due to the inclusion of negative emotional issues in the text. They perform much worse on KSA classification, although the DATA-DRIVEN prompts still yield the best performance (0.47): KSA-related issues are especially difficult, as they generally involve specific knowledge and would not normally be considered fairness issues in other contexts. 6 Conclusions This work delivers four key contributions: (1) an exploration of a novel fairness detection task; (2) a dataset of 601 samples annotated for fairness issues; (3) an evaluation of a variety of classification models for this task, including fine-tuning, topic-based approaches, and prompting; and (4) a novel prompting strategy which, combined with few-shot learning, achieves state-of-the-art performance on the task. This work aims to explore the space of fairness and bias issues in generated content, especially in the education context. We highlight the difficulties of accounting for fairness, particularly in specific contexts unlikely to be covered by traditional model guardrails. As language model usage becomes more prevalent, the need for proper bias and fairness strategies from the people training, deploying, and using these models is paramount. 7 Ethics Content generation comes with inherent ethical concerns relating to fairness, bias, factuality, and sensitivity. Our work aims to mitigate these issues with regard to fairness, but it is important to consider potential issues that might arise from using LLMs and other NLP technology to generate assessment content. Models may introduce subtle biases against disadvantaged groups, or produce content that appears factual but is not. These are critical failures that need to be accounted for. In practice, the generation of assessment content requires human intervention: large language model generations are not at the point where they are immune to these negative impacts, and thus any content that goes into production needs to be evaluated by a human with relevant expertise. The methods we propose support this human intervention, as they can remove obviously offensive content before the human review stage, or assist in human reviews by flagging potentially harmful content. While our dataset is unlikely to contain any content that is triggering (our framework of fairness is focused on more nuanced contexts), it must be noted that there is potential for it to be used maliciously, for example by someone designing a system to adapt to and deceive a fairness detection system. In releasing this data, we hope to bring awareness to this issue and to better understand the potential negative impacts. Primarily, we stress that any fairness detection system should not be used in isolation or without supervision as a catch-all for potential issues. 8 Limitations Our work is limited largely by the type of content evaluated and the models used. We focus on a small number of item and task types that fall under very specific fairness constraints: the evaluation of the methods used applies specifically to these items under these constraints. This is apparent in the evaluation on the \"unseen\" item types in Section 5. Applying these methods to new item and task types, even those annotated under the same fairness guidelines, yields significantly reduced results. This is evidence that the methods and models we designed work only for the specific contexts in which they are trained and developed. 
Similarly, we explore only a small space of models and approaches. We use relatively basic prompt strategies; many other approaches and improvements that are likely to be valuable remain unevaluated. The same is true of the fine-tuned models and topic classification. We present relatively basic, well-known strategies to better understand the difficulty of our data, with the understanding that there are substantial improvements that could be applied."
}
]
}