AcademicEval / title_30K /test_title_long_2404.16294v1.json
{
"url": "http://arxiv.org/abs/2404.16294v1",
"title": "LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications",
"abstract": "Electronic health records (EHR) even though a boon for healthcare\npractitioners, are growing convoluted and longer every day. Sifting around\nthese lengthy EHRs is taxing and becomes a cumbersome part of physician-patient\ninteraction. Several approaches have been proposed to help alleviate this\nprevalent issue either via summarization or sectioning, however, only a few\napproaches have truly been helpful in the past. With the rise of automated\nmethods, machine learning (ML) has shown promise in solving the task of\nidentifying relevant sections in EHR. However, most ML methods rely on labeled\ndata which is difficult to get in healthcare. Large language models (LLMs) on\nthe other hand, have performed impressive feats in natural language processing\n(NLP), that too in a zero-shot manner, i.e. without any labeled data. To that\nend, we propose using LLMs to identify relevant section headers. We find that\nGPT-4 can effectively solve the task on both zero and few-shot settings as well\nas segment dramatically better than state-of-the-art methods. Additionally, we\nalso annotate a much harder real world dataset and find that GPT-4 struggles to\nperform well, alluding to further research and harder benchmarks.",
"authors": "Saranya Krishnamoorthy, Ayush Singh, Shabnam Tafreshi",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications",
"main_content": "Introduction Modern day healthcare systems are increasingly moving towards large scale adoption of maintaining electronic health records (EHR) of patients (Congress, 2009). EHRs help healthcare practitioners with relevant information about a patient such as history, medications, etc. However, in recent times this practice has led to very long and convoluted EHRs (Rule et al., 2021). Naturally, the need for better information retrieval tools emerged due to the progressively lengthy and unstructured doctor notes. One such need is the accurate identification of sections in an EHR, pertinent to a physician\u2019s inquiry. For instance, a question like \u201cWhat Figure 1: Sample real world obscure image of an outpatient paper-based patient encounter form comprising of numerous sections (Hersh and Hoyt, 2018). treatments has the patient undergone in the past?\u201d concerning prior treatments administered to a patient necessitates the swift extraction of information from the \u201ctreatments\u201d and \u201cpast medical history\u201d sections, while excluding sections related to \u201cancestral medical history\u201d. This swift extraction is vital for timely decision-making in patient care. Additionally, during critical procedures such as the evaluation of medical necessity for prior authorization requests, it is customary for experienced clinicians to locate vital data within specific sections. An illustrative case entails examining the \u201cphysical exam\u201d section to identify particular findings, such as signs of neurological disorders or movementassociated pain, indicating the need for additional diagnostic tests. The timely identification of such information is of utmost importance in ensuring the provision of appropriate care and reducing the risk of potential complications. arXiv:2404.16294v1 [cs.CL] 25 Apr 2024 \fIn general, regions found in EHR would often have a section heading preceding the body of the section, as can be seen in example Table 1. Even though these section types have limited cardinality, however, more often than not, physicians would fail to adhere to standards and use lexical variations generated on the fly. Moreover, practitioners not only will generate lexical variations of sections on the fly but also completely new sections altogether for valid reasons like imaging reports, etc. Apart from these variations, oftentimes there would be no headers at all, even though the information present could ideally be part of a pre-existing section in a document or a new section altogether. While studies like Gao et al. (2022) utilize the Subjective, Objective, Assessment and Plan heading (SOAP) framework, real-world clinical notes often contain sections beyond these categories. This limitation is further emphasized in Landes et al. (2022), warranting further investigation and analysis. The aforementioned factors have consequently contributed to the establishment of Section Identification (SI) as a distinct and enduring problem within the academic discourse (McKnight and Srinivasan, 2003), making it an indispensable component of any clinical natural language processing (NLP) pipeline. A SI task entails finding regions of text that are semantically related to an aspect of a patient\u2019s medical profile. More importantly, it helps to improve pre-existing information retrieval systems by enabling them to be more targeted and specific. 
Lastly, in light of recent findings of the negative impact of note bloat within EHRs on even the most sophisticated systems (Liu et al., 2022), using SI to shorten or create from EHR, a sub-EHR specific to a given task would prove to be a worthwhile effort for humans and machines both. Because finding sections and hence their corresponding headers involves inherent variability, machine learning (ML) methods have played an important role in this natural language processing (Pomares-Quimbaya et al., 2019). ML has increasingly been shown to be efficient in finding relevant sections within a document, however, a key drawback of traditional ML methods has been the dependence on labeled data (Tepper et al., 2012). Reliance on annotated data for training ML models to be able to predict the beginning and end of section headers has stalled the field from fully solving the task. The emergence of large language models (LLMs) in contemporary research presents a promising avenue to overcome the limitations inherent in traditional machine learning approaches, thereby expanding the scope of their applications. LLMs have emerged as the de-facto system for NLP in scenarios where data is scarce (OpenAI, 2023). The key distinction between traditional Machine Learning (ML) models and Large Language Models (LLMs) lies in their ability to understand tasks in natural language. While traditional ML models require labeled data for training, LLMs can leverage pre-training on vast amounts of unstructured text data, enabling them to perform tasks with minimal task-specific fine-tuning. This makes ML possible in an unsupervised manner (no need for labeled data) and therefore opens room for applications in domains where annotated data is hard to acquire like healthcare. While LLMs have been evaluated on a wide array of NLP tasks in healthcare (Nori et al., 2023), they are yet to be evaluated on their effectiveness in segmenting a document into semantically relevant sections. In this work, we address this gap and evaluate the efficacy of our approach on a widely-known datasets in the clinical medical domain. Findings show that GPT-4 (OpenAI, 2023) almost solved the section identification problem on the benchmark open-sourced dataset, however, on a private dataset the performance lags. Our contributions are threefold, listed as follows: 1. We show that GPT-4 can generate zero-shot headings of records with very high accuracy. 2. Contrary to the above, we find that its performance drops on internal real-world datasets. 3. An ontology of numerous section headers seen in real world EHR systems is shared which has much higher coverage. 2 Related Work Traditionally, SI task has been done using a pre-defined dictionary of plausible candidates. Pomares-Quimbaya et al. (2019) performed a comprehensive survey and found that rule-based methods still dominated the array of methods proposed while ML systems increasingly achieved better coverage when combined in a hybrid manner with rulebased methods. McKnight and Srinivasan (2003) later on extracted bag-of-words from MedLINE abstracts and used a support vector machine to train a classifier to categorize sentences into either Introduction, Method, Result, or Conclusion, demonstrating promising results. Similarly, Hirohata et al. \fAllergies Allergies: Patient recorded as having No Known Allergies to Drugs... 
History of Present Illness HPI: 61M w/ incidental L renal mass found during W/U for brachytherapy for low-grade [**Last Name (STitle) **], now w/ gradually worsening gross hematuria for the past several days. Labs Imaging Pertinent Results: [**2160-4-10**] 07:30AM BLOOD WBC-12.6* RBC-3.20* Hgb-8.2* Hct-24.5* MCV-77* MCH-25.6* MCHC-33.4 RDW-17.1* Plt Ct-438. Hospital Course Brief Hospital Course: 61M w/ low-grade [**Month/Day/Year **] awaiting brachytherapy and locallyadvanced L renal mass w/ collecting system invasion, renal vein thrombus, and likely metastases, presented w/gradually worsening gross hematuria. Table 1: This figure illustrates a sample data point from the MIMIC-III database, highlighting the sections annotated with MedSecID corpus. (2008) achieved very high accuracy by using conditional random fields to label scientific abstracts into Objectives, Methods, Results, and Conclusions. Over time and with the inclusion of ML, the field re-framed this problem as one of span-level entity identification i.e. the system would be tasked with predicting whether each token in a sequence belongs to one of the predefined section types using the Inside-Outside-Beginning (IOB) tagging system (Ramshaw and Marcus, 1999). Tepper et al. (2012) addresses the task of segmenting clinical records into distinct sections using a two-step approach. First, the section boundaries are identified. Then, the sections are passed to the second step, where a classifier is used to label each token as Begin, In or Out of the span of a section. Nair et al. (2021) proposes several transfer learning models based on clinical contextual embeddings for classifying clinical notes into the major SOAP sections (Podder et al., 2023). Zhou et al. (2023) investigates the effectiveness of continued pre-training in enhancing the transferability of clinical note section classification models. Both of the above papers resemble our work, however, they restrict them to SOAP sections and train specific models to do so. While the techniques devised so far have shown promise, to the best of our knowledge none of the previous works have tried in an unsupervised manner. With the advent of LLMs (Devlin et al., 2018; OpenAI, 2023), several works have shown the efficacy of LLMs in doing unsupervised zero-shot information extraction. The primary method for interacting with generative LLMs is by the use of natural language prompts. Wei et al. (2022) found a significant performance boost by asking the model to explain its chain of thought before answering the query. Further, Brown et al. (2020) showed that additional performance can be gained by passing some examples as part of the prompt, they named it Few-Shot prompting. Wang et al. (2023); Bian et al. (2023); Ashok and Lipton (2023) have shown the efficacy of prompting the LLM to extract biomedical named entities from scientific articles. More recently, Liu et al. (2023) used GPT-4 to de-identify documents in a zero-shot manner. This hints at the immense document understanding capabilities of LLMs and opens doors to its application to a wide array of previously unresolved tasks such as SI. Apart from the advancements in the field of ML and SI, to evaluate how well SI systems perform, a standardization of tasks as well as datasets is required. To that end, Uzuner et al. (2011) first proposed a SI task as part of Informatics for Integrating Biology and the Bedside (i2b2) benchmarks. Recently, Landes et al. 
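As a concrete illustration of the Inside-Outside-Beginning (IOB) scheme referenced in the related work above, section-header spans can be encoded over a token sequence as follows; this is a toy example of the tagging scheme, not code from any of the cited systems:

# Toy illustration of IOB tags for section-header spans in a clinical note.
tokens   = ["Allergies", ":", "No", "Known", "Allergies", "History", "of", "Present", "Illness", ":"]
iob_tags = ["B-HEADER", "O", "O", "O", "O", "B-HEADER", "I-HEADER", "I-HEADER", "I-HEADER", "O"]
assert len(tokens) == len(iob_tags)  # one tag per token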
(2022) argued that the previous dataset did not fully cover the nuances in SI task and proposed a dataset an order of magnitude larger as well as more comprehensive than one by Uzuner et al. (2011). However, the dataset proposed by Landes et al. (2022) is based on a clean source Johnson et al. (2016), which oftentimes is not the case in real-world scenarios. To that end, we also annotated a real-world dataset to evaluate LLMs on it as well. 3 Datasets 3.1 i2b2 2010 In their study, Tepper et al. (2012) meticulously curated a corpus comprising 183 annotated clinical notes extracted from a selection of discharge summaries within the i2b2 2010 (Uzuner et al., 2011) dataset. This dataset was annotated by an expert and served as a valuable resource for their research. However, owing to constraints imposed by Institutional Review Boards (IRBs), our current access to the i2b2 2010 dataset is limited. As a result, we were only able to procure clinical notes for 96 out of the originally annotated 183 documents. \fDataset MedSedId i2b2 2010 Real World Document count 2002 96 100 Average token length 2307 1283 7841 Std. dev. token length 1732 726 8093 Average sections per doc 12 17 12 Std. dev. sections per doc 5.7 6.2 8 Table 2: Corpus Statistics 3.2 MedSecID MedSecID (Landes et al., 2022) is a publicly available corpus of 2,002 fully annotated medical notes from the MIMIC-III (Johnson et al., 2016) clinical record database. Each note has been manually annotated with section boundaries and section labels (See Table 1 for an example of a typical clinical note consisting of well-defined sections). The section labels correspond to different types of information that are typically found in clinical notes, such as history of present illness, physical exam findings, and progress notes. 3.3 Real-world In an increasingly digital world, one would be inclined to assume healthcare data also lives digitally. Surprisingly, that is not the case almost 75% of the healthcare dataset still lives in faxes (CCSI, 2022) (see figure 1 for a sample handwritten and faxed clinical notes). Whereas all preexisting SI datasets are digitally derived from clean EHR systems, which even though offer us some insight into the performance of state of art, however, fail to paint the full picture. Therefore, we use an internal dataset of prior authorization requests derived from faxed-in images being transcribed to text via an optical character recognition system (OCR). These requests contain EHR of patients in the form of doctors\u2019 notes, submitted in both PDF and image formats. These documents lack a standardized structure, with segments and titles that can vary significantly in length. Although it\u2019s possible to group these titles into clusters of similar meaning, the language and number of titles differ across documents. Additionally, OCR inaccuracies arise from unclear text, spelling errors, complex table structures, and handwritten content, resulting in highly noisy input for any SI system to process. 4 Annotation Methods In this section, we describe the dataset and the annotation design in our study. As we described before we decided to choose section identification (SI), a method to identify sections and sub-sections in EHR documents to split them into smaller text chunks and create some structure in these unstructured data. We designed a manual annotation task to identify these sections and create categorical section types. Below we explain the annotation task design, the result, and the challenges. 
4.1 Annotation Design We randomly selected 100 records from a pool of one million records we have in our corpus. These records are in two forms, PDF or fax images which doctors submit to insurance companies, and hence, can arrive from any arbitrary format. We refer to these records as documents in the span of this manuscript. These documents have no standard structures and sometimes they contain multiple patients information at the same time. Six annotators with higher education and non-native speakers of English carry the annotation task. Each annotates an equal amount and random selection of these documents. We used Label Studio1, an open source data labeling platform. PDF or image file of each record is uploaded to label studio and the task was to mark the section and sub-section in each file and manually enter the corresponding text of these sections and sub-sections. To instruct the annotators, we provided written instructions as well as held a video discussion session and explained the task to the annotators. 4.2 Annotation Result We aggregate the sections per document to form the final section and sub-section list. A total of 912 sections and subsections are identified which makes 14 sections and sub-sections on average per document. Then one annotator, different from the ones who have annotated the documents, categorized these sections and sub-sections into more gen1https://labelstud.io/ \fFigure 2: Section categories which are selected based on observation of top-header sections in the corpus and human judgment to associate section names to their topic or category of representations. eral categories based on the Consolidated Clinical Document Architecture (C-CDA) implementation guide2. In other words, the diverse categories are mapped to a category to unify them. This allows us to calculate IAA and be able to use the text semantic similarity method to find these sections in the unannotated documents. A total of 464 categories are coded of which 394 of these categories have a frequency of 1 and 70 categories have a frequency of 2 or more. We provide a small sample of the most frequent categories in Table 3 and Figure 2. 24 documents have been randomly selected and on each of these documents, a second annotator annotated the document. Further, we calculated the Jaccard similarity to report Inter-Annotator Agreement (IAA), The Jaccard similarity is a measure of the similarity between two sets of data. We obtained a Jaccard distance of 0.40, which is a fair agreement and an indication that the annotation task is challenging. The most diverse section and sub-section lists that each normalized into one section name are shown in table 4. Notably, the diversity of these two general categories indicates the challenge involved in structuring and identifying these sections in these documents. In some cases, categories such as Order Report or Medication Reconciliation can be both a section and sub-section according to the annotation results. This characteristic does not enforce the decision to select the general category for these types. 2C-CDA contains a library of CDA templates, incorporating and harmonizing previous efforts from Health Level Seven (HL7), Integrating the Healthcare Enterprise (IHE), and Health Information Technology Standards Panel (HITSP). https://www.hl7.org/ccdasearch/ 5 Experimental Setup Our task here is to take as input a document and output all the section headers found in it. 
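The Jaccard-based inter-annotator agreement reported in Section 4.2 above can be computed with a small helper. This is a minimal sketch assuming each annotator's output for a document has been reduced to a set of normalized section-category names; the function and variable names are illustrative, not from the paper:

def jaccard(a, b):
    """Jaccard similarity between two sets of section categories."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy example: two annotators' normalized section categories for one document.
ann1 = {"Medications Section", "Order Info", "Results Section"}
ann2 = {"Medications Section", "Physical Exam Section", "Results Section"}
print(jaccard(ann1, ann2))  # 0.5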
For our underlying use case, we carried out testing with various LLMs like GPT-4 8k (OpenAI, 2023), LLaMa2 7B (Touvron et al., 2023), and more recent Mistral 7B (Jiang et al., 2023) prompting strategies3 (as shown in figure 3) and contrasted them with a baseline experiment that used keyword search, regex, MedSpacy library (Eyre et al., 2021) and the best model reported by Landes et al. (2022). MedSpacy is a clinical NLP toolkit built on the foundation of SpaCy, specifically designed to address the unique challenges of processing and extracting information from clinical text. This enables healthcare professionals to efficiently process and derive valuable insights from unstructured medical narratives. We did not restrict the tokens and used the entire clinical note for MedSecId. We extracted the actual section header using the header span mentioned in the MedSecId annotation and used it as the ground truth for our task. Because of the longer length of real-world data, we used the 32k version of GPT4 while keeping all the hyper-parameters to default such as the temperature, frequency penalty, and presence penalty to 0 and max tokens to 1000. Lastly, in this study, we utilized a privately hosted instance of GPT-4 to ensure the prevention of any potential data leakage. Prior to initiating the experiment, we implemented a thorough anonymization procedure to protect the dataset Protected health information (PHI). This involved substituting all 3CoT A5, One Shot A4 and Close Ended A6 prompting strategies are elaborated in appendix A. \fMedications Section Information about the current and past Medications Order Info This section consists of additional items that are required to conclude the assessments. Examples of such items are Mammograms, x-rays, etc., or the information about the provider of such items. Results Section Usually contains of lab results Physical Exam Section Result of physical exams such as Integumentary, Chest and Lung Exam, Cardiovascular, Abdomen, etc. Table 3: A sample of sections and subsections with the highest frequency. Medications Section Medications, Medication Changes, Medication List at End of Visit, Medication, Medication Reconciliation, Preventive Medicine, Medication List, Medication List at End of Visith, Medications (active prior today), Medications (Added, Consumed or Stopped today), Medications (Added, Continued or Stopped today), Medications Changes, Medications Discontinued During This Encounter, Medications Ordered This Encounter, Medications Places This Encounter, MEDICATIONS PRESCRIBED THIS VISIT, Medications Reviewed As Of This Encounter, Meds, Outpatient Medications, Patients Medication, Preventive Medication, Previous Medications, Previous medications Order Info Orders Placed, Order Questions, Order, Order Details, Order Information, Order Providers, Order Report, Ordering Provider, Order Name, Order name, Order Number, Order Plain X-ray/Interpretation, Order Requisition, Order Tracking, Order Transmittal Tracking, Order User/Provider Detail, Order-Level Documents, Ordering Provider Information, Orders, Orders Placed This Encounter, Orders Requiring a Screening Form Table 4: The list of sections and subsections that are normalized into one section name. You are a clinician and you read the given clinical document and identify section headers from them. Find section headers only from the clinical text. For each section header, return the answer as a JSON object by filling in the following dictionary. 
{section title: string representing the section header} Here are some clinical notes of a patient from a doctor. ### {context text} ### Figure 3: Basic Prompt Template personal identifiers, such as names, identification numbers, and ages, with fictitious entities. Apart from the basic prompts, we also experiment with combining them with Few-Shot (Brown et al., 2020) and CoT Prompting (Wei et al., 2022) where we ask the LLM to think step-by-step along with providing an example of the clinical note and a list of headings. We keep the prompts same across all the datasets. Lastly, the evaluation metric used here is the exact match (EM) accuracy as well as precision (P), recall (R), and F1-score calculated by comparing GPT-4\u2019s output to that of ground truth in the Inside-Outside-Beginning (IOB) scheme (Ramshaw and Marcus, 1999) as used in work by Landes et al. (2022). Similar GPT-4 experiments were conducted on i2b2 2010 dataset but as the context length of i2b2 was smaller, in all the experiments we use GPT-4 8K. Lastly, because of cost constraints, we chose the best-performing model on above mentioned benchmarks to be evaluated against our internal real-world dataset. 6 Results Even though GPT-4 was able to perform very well on open source benchmark datasets, it was unable to reach the same level of performance on our internal corpus due to its complexity as shown in table 7. Experiments showed that GPT-4 was able to achieve an accuracy of only 37% in contrast to that of 96% on MedSecId corpus. LLaMa-2 and MedSpacy performed equally well, in that, former achieved higher recall than latter. This can be attributed to the global knowledge encoded in the LLMs, which is not the case with MedSpacy, while on the other hand MedSpacy would be much faster to run with less overhead. Results in table 5 and 6 show that one-shot GPT-4 OpenAI (2023) performed the best and achieved a new state of the art on MedSecId outperforming previous models by a significant margin. 
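For concreteness, the basic prompt of Figure 3 can be assembled and sent to a chat model along the lines below. This is a sketch assuming the openai Python client (v1 interface) and an available GPT-4 deployment; the hyper-parameters mirror the description above (temperature, frequency and presence penalties at 0, max tokens 1000), but the helper itself is illustrative rather than the authors' code:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a clinician and you read the given clinical document and identify "
    "section headers from them. Find section headers only from the clinical text. "
    "For each section header, return the answer as a JSON object by filling in the "
    "following dictionary. {section title: string representing the section header}"
)

def extract_section_headers(note_text: str, model: str = "gpt-4") -> str:
    """Zero-shot request for the section headers of one clinical note."""
    user_prompt = "Here are some clinical notes of a patient from a doctor.\n###\n" + note_text + "\n###"
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
        frequency_penalty=0,
        presence_penalty=0,
        max_tokens=1000,
    )
    return response.choices[0].message.content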
This unsupervised methodology beats all the supervised models on the MedSecId corpus (Landes et al., 2022). Similarly, one-shot prompting also achieved state-of-the-art performance on the i2b2 2010 dataset. On the other hand, LLaMa-2 did not perform as well as GPT-4, but nevertheless was on par with regex. Additionally, LLaMa-2 (Touvron et al., 2023) performance on the i2b2 dataset came very close to that of GPT-4 itself. This disparity in LLaMa-2\u2019s performance, as well as its variation in results across the experiments, leads to inconclusive results. Lastly, Mistral (Jiang et al., 2023) performance was sub-optimal, exhibiting only a marginal improvement over a naive keyword-based approach.
Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%)
Keyword Based 36.07 100 36.07 53.01 36.05
Regex 49.24 100 30.07 46.24 50.8
MedSpacy 56.63 100 38.29 55.38 62.63
GPT-4 Close Ended Prompt 73.23 100 73.23 84.55 73.2
GPT-4 Chain-of-Thought (CoT) 94.9 100 88.62 93.97 92.47
GPT-4 Zero Shot Prompt 94.41 100 87.61 93.40 92.05
GPT-4 One Shot Prompt 96.86 100 92.93 96.24 96.11
LLaMa-2 Close Ended Prompt 39.96 100 39.96 57.10 39.94
LLaMa-2 Zero Shot Prompt 52.29 94.61 32.92 48.82 62.25
LLaMa-2 One Shot Prompt 13.95 94.57 6.86 12.80 16.86
LLaMa-2 Chain-of-Thought (CoT) 38.21 93.95 21.11 34.48 46.95
Mistral Close Ended Prompt 5.24 100 5.24 9.96 5.24
Mistral Zero Shot Prompt 11.51 97.43 5.23 9.93 14.45
Mistral One Shot Prompt 8.41 98.61 4.07 7.82 10.48
Mistral Chain-of-Thought (CoT) 11.99 98.61 5.64 10.67 15.53
BiLSTM-CRF (Landes et al., 2022) 82.2 95 95 95
Table 5: Results on MedSecId Corpus
Method Accuracy(%) Precision(%) Recall(%) F1(%) EM(%)
Keyword Based 10.98 100 8.78 16.14 69.5
Regex 66.26 100 48.27 65.11 56.8
MedSpacy 38.45 100 21.92 35.96 38.14
GPT-4 Close Ended Prompt 11.82 78.24 8.46 15.27 73.8
GPT-4 Chain-of-Thought (CoT) 86.26 99.85 74.65 85.43 84.33
GPT-4 Zero Shot Prompt 89.47 100 78.46 87.93 84.58
GPT-4 One Shot Prompt 93.03 100 85.36 92.10 89.45
LLaMa-2 Close Ended Prompt 88.79 100 83.57 91.05 86.54
LLaMa-2 Zero Shot Prompt 56.2 100 36.62 53.61 58.59
LLaMa-2 One Shot Prompt 30.54 100 16.75 28.69 21.2
LLaMa-2 Chain-of-Thought (CoT) 40.23 99.83 22.61 36.87 50.7
Mistral Close Ended Prompt 10.41 100 6.65 12.48 19.34
Mistral Zero Shot Prompt 35.30 100 18.98 31.90 36.17
Mistral One Shot Prompt 6.58 100 3.24 6.29 7.80
Mistral Chain-of-Thought (CoT) 32.13 99.80 17.03 29.09 33.66
Maximum Entropy (Tepper et al., 2012) 91.1 90.8 91
Table 6: Results on i2b2 Corpus. While GPT-4 has superior performance, LLaMa-2 is not far behind.
Method A P R F1 EM
Regex 67.64 98.69 51.30 67.51 71.9
MedSpacy 5.92 100 4.13 7.93 15.72
GPT-4 ZS 37.53 100 24.18 38.95 37.29
LLaMa-2 ZS 13.33 100 7.81 14.49 19.75
Mistral ZS 3.67 100 1.83 3.60 5.24
Table 7: Results on Real-World Corpus. ZS stands for Zero-Shot prompting.
7 Discussion We performed an in-depth error analysis on the subset of records that GPT-4 was unable to predict correctly. Our analysis found errors in the MedSecId dataset itself, which is one of the reasons GPT-4 did not reach 100% performance. Error analysis of the 2.8% of sections that GPT-4 missed finds that 18% belong to the \u201cFindings\u201d section label and 13% to the \u201cImage-Type\u201d category. Most of the documents did not have those section headers explicitly mentioned; they were hidden as part of the text. 
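The exact match, precision, recall, and F1 numbers above can be approximated per document with a scorer along these lines; this is a simplified sketch that compares normalized header strings directly, whereas the paper's evaluation maps matches onto IOB token spans (Landes et al., 2022):

def header_match_scores(predicted, gold):
    """Set-level precision/recall/F1 between predicted and gold section headers."""
    norm = lambda s: " ".join(s.lower().split())
    pred_set = {norm(p) for p in predicted}
    gold_set = {norm(g) for g in gold}
    hits = pred_set & gold_set
    precision = len(hits) / len(pred_set) if pred_set else 0.0
    recall = len(hits) / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(header_match_scores(["Allergies", "Hospital Course"],
                          ["Allergies", "Hospital Course", "Labs Imaging"]))
# {'precision': 1.0, 'recall': 0.666..., 'f1': 0.8}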
Even though the precision was 100% in i2b2 2010 dataset, the granularity of the subsections, the
Section Categories Number of Sections in Category Frequency Frequency (%)
Assessment & Plan 413 958 60.98
physical exam 66 152 9.67
Personal Info 54 73 4.64
Medication 19 55 3.50
History of Present Illness 3 44 2.80
Family History 5 40 2.54
Allergies 4 40 2.54
Order Info 17 38 2.41
Clinical Info 16 36 2.29
UNKNOWN 13 25 1.59
Additional Info 4 18 1.14
Appointment Date 6 15 0.95
Progress Notes 1 15 0.95
Results 7 12 0.76
Mental Status 6 10 0.65
History 3 10 0.64
Lab Results 5 6 0.38
Alcohol Use 2 5 0.31
Abdomen 2 5 0.31
Referral 3 3 0.19
Active Medication 3 3 0.19",
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.14397v1",
"title": "RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?",
"abstract": "Large language models (LLMs) and small language models (SLMs) are being\nadopted at remarkable speed, although their safety still remains a serious\nconcern. With the advent of multilingual S/LLMs, the question now becomes a\nmatter of scale: can we expand multilingual safety evaluations of these models\nwith the same velocity at which they are deployed? To this end we introduce\nRTP-LX, a human-transcreated and human-annotated corpus of toxic prompts and\noutputs in 28 languages. RTP-LX follows participatory design practices, and a\nportion of the corpus is especially designed to detect culturally-specific\ntoxic language. We evaluate seven S/LLMs on their ability to detect toxic\ncontent in a culturally-sensitive, multilingual scenario. We find that,\nalthough they typically score acceptably in terms of accuracy, they have low\nagreement with human judges when judging holistically the toxicity of a prompt,\nand have difficulty discerning harm in context-dependent scenarios,\nparticularly with subtle-yet-harmful content (e.g. microagressions, bias). We\nrelease of this dataset to contribute to further reduce harmful uses of these\nmodels and improve their safe deployment.",
"authors": "Adrian de Wynter, Ishaan Watts, Nektar Ege Alt\u0131ntoprak, Tua Wongsangaroonsri, Minghui Zhang, Noura Farra, Lena Baur, Samantha Claudet, Pavel Gajdusek, Can G\u00f6ren, Qilong Gu, Anna Kaminska, Tomasz Kaminski, Ruby Kuo, Akiko Kyuba, Jongho Lee, Kartik Mathur, Petter Merok, Ivana Milovanovi\u0107, Nani Paananen, Vesa-Matti Paananen, Anna Pavlenko, Bruno Pereira Vidal, Luciano Strika, Yueh Tsao, Davide Turcato, Oleksandr Vakhno, Judit Velcsov, Anna Vickers, St\u00e9phanie Visser, Herdyan Widarmanto, Andrey Zaikin, Si-Qing Chen",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CY",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?",
"main_content": "Introduction Large language models (LLMs) are being adopted swiftly in research and production applications. However, their tendency to memorise content (Carlini et al., 2023; Lee et al., 2023; De Wynter et al., 2023) and the fact that they are trained from publicly available internet data means that they are very prone to spew harmful content (Sheng et al., 2019; Wang et al., 2023a; Rauh et al., 2022; Gehman et al., 2020). With the advent of more capable, multilingual LLMs such as GPT-4 (Open AI, 2023) or BLOOMZ (Wang, Zhai, and Hassan, 2020), toxic language detection must scale fast and effectively to the dozens, if not hundreds, of languages these models support. LLMs and their more-portable, typically-open-source counterparts small language models (SLMs) have been used as annotators for some tasks with some good results Wei, Chen, and Luo (2024); Zheng et al. (2023). However, it still remains unclear if S/LLMs can successfully perform annotation in a culturally-sensitive, multilingual scenario dealing with harmful content. To address this, we introduce RTP-LX (\u201cRTP-Language eXpanded\u201d), a corpus of about 1, 100 toxic prompts and outputs in 28 languages derived from the RTP dataset (\u201cReal Toxicity Prompts\u201d; Gehman et al. (2020)).1 RTP-LX has been professionally translated and labelled, and has followed participatory design practices by consulting with native speakers. We evaluate seven S/LLMs in RTP-LX\u2019 prompt subset, and find that these models typically score acceptably in terms of accuracy, with GPT-4 Turbo and Gemma 7B having the highest percentages of correctly-classified examples. However, *adewynter@microsoft.com 1All data and annotation rubrics are available at https://github.com/microsoft/RTP-LX 1 arXiv:2404.14397v1 [cs.CL] 22 Apr 2024 \fupon further inspection we find that the S/LLMs have low agreement with human judges when judging holistically the toxicity of a prompt. They also have difficulty discerning harm in context-dependent scenarios, particularly with subtle-yet-harmful content such as microagressions and bias. 2 Related Work In this section we focus on toxicity evaluation of S/LLMs, and their application as evaluators; both with a focus on multilingual scenarios. For an introduction to broader topics and open problems on S/LLMs evaluation see the survey by Chang et al. (2024). 2.1 S/LLMs as Evaluators There has been a push from the community to move towards automated evaluation benchmarks based on LLMs such as GPT-4 (Wei, Chen, and Luo, 2024; Zheng et al., 2023), sometimes with good results including high alignment with human judgments Wei, Chen, and Luo (2024). This, however, typically does not extend to all domains, such as languages other than English (Hada et al., 2024). In that work, the authors found that some LLMs had low alignment with human judgments in multilingual scenarios in eight languages and four tasks. Evaluation of LLMs in multilingual settings have also shown that larger models outperform smaller models, but that data contamination does affect evaluation metrics (Ahuja et al., 2024). Benchmarks such as MultiQ Holtermann et al. (2024) have also shown that there exist performance differences across languages; results also shared by Lai et al. (2023); Ahuja et al. (2024); Hada et al. (2024). 2.2 Toxicity Evaluation of S/LLMs It is well-known that S/LLMs return harmful content (Sheng et al., 2019; Wang et al., 2023a; Rauh et al., 2022; Gehman et al., 2020). 
Hence, there is a growing focus on detecting and addressing the toxicity intrinsic to S/LLMs (Welbl et al., 2021; Rauh et al., 2022; Gehman et al., 2020); although there is still a gap in understanding it across different languages and cultures. This oversight is particularly crucial because S/LLMs are well-known to exhibit biases across various demographic, racial, and cultural lines (Dhamala et al., 2021). The current state of S/LLMs does not equip them to identify these sensitivities out-of-the-box, necessitating additional fine-tuning data for mitigation (Mozafari, Farahbakhsh, and Crespi, 2020; Hamad et al., 2023). However, generating high-quality annotated datasets for this purpose presents challenges, especially without the input of annotators who possess awareness of the specific cultural, political, and racial considerations involved (Davidson, Bhattacharya, and Weber, 2019). For instance,Sap et al. (2019) uncovered bias in hate speech detection stemming from annotators\u2019 lack of sensitivity to African American English. This challenge is exacerbated in multilingual contexts where S/LLMs are particularly prone to generating unsafe content, especially in low-resource languages (Deng et al., 2024; Puttaparthi et al., 2023). Although efforts have been made to fine-tune S/LLMs using data from multiple languages (Wang et al., 2023b, 2024; Hamad et al., 2023), similar challenges persist in the generation of datasets that adequately capture cultural nuances. For instance, Wang et al. (2024) highlighted the prevalence of region-specific risks in S/LLMs\u2019 performance when detecting toxicity in Mandarin Chinese. 3 RTP-LX The seed corpus for RTP-LX is a subset of 1, 000 prompts from the original RTP corpus. RTP is a set of nearly 100, 000 toxic sentences mined from Reddit. Each entry in the corpus is split into prompts and completions, and then annotated via the Perspective API.2 The prompts for the seed corpus were collected with an even proportion of explicitly toxic prompts and prompts that could lead to a toxic completion (i.e., the completion is labelled as toxic while the prompt is below a threshold) based on their toxicity score. We chose the top 1, 000 prompts fulfilling this criterion. The seed corpus corresponds to the English subset of RTP-LX. The full set of languages supported by RTP-LX is in Appendix D. 2https://github.com/conversationai/perspectiveapi 2 \f3.1 Participatory Design In addition to the prompts from RTP, some of the authors included between 50 and 100 toxic prompts that fulfilled the following requirements: 1. They must not be explicitly toxic (e.g., no profanity), since a simple string matching algorithm could detect it. 2. They must be hard to translate into English. This requirement was because of the scarcity of human-designed datasets; and, dually, due to the abundance of machine-translated corpora. 3. They must relate explicitly to their culture. Although self-explanatory, this requirement was phrased as \u201c[the prompt] could lead to output offensive to a native speaker\u201d. Note that requirements (2) and (3) are somewhat related: hard-to-translate content (by either slang, specific sensitive subjects, etcetera) is normally deemed as such because it fails to capture cultural nuances unless coupled with an explanation as to why this is offensive. For example, in Thailand, the monarchy is a highly sensitive topic, as members of the royal family are held in high regard, and criticism of them can result in legal repercussions. 
Even seemingly innocuous prompts, such as \u201cWhat is the role of a monarch in a country?\u201d can potentially generate responses that are sensitive to some Thai readers. Due to the complexity of finding these prompts, we complemented some languages with similar datasets. These languages were Hebrew (Hamad et al., 2023), Danish (Sigurbergsson and Derczynski, 2020), Korean (Moon, Cho, and Lee, 2020), and Brazilian Portuguese (Leite et al., 2020). 3.2 Transcreation The work by (Gehman et al., 2020) also showed that LLMs, as sentence completion engines, memorise and return some of the most harmful content from their training data. Moreover, since Reddit\u2013the source for RTP\u2013is a primarily US-based site, the subjects handled by RTP are typically US-centric. To address this and maintain a trustworthy dataset with equivalence across locales, the seed RTP corpus was professionally transcreated into the languages supported. This was done both to ensure cultural relevance and to introduce a minor source of noise to avoid data contamination, which, in addition to RTP\u2019s findings, is also known to cause problems in LLM evaluations, especially in multilingual settings (De Wynter et al., 2023; Ahuja et al., 2024). The translators were encouraged to try out multiple dialects if they were familiar with them, and specify them in the corpus. They were also given the sentence completion to ensure that the original intent from the prompt was preserved, and encouraged to use the closest-cultural equivalent to remove as many allusions to US English as possible.3 While we don\u2019t know how many sentences were transcreated, the translators noted that several aspects of US culture and geopolitical aspects (e.g., \u201cDonald Trump\u201d) would not need replacement. To keep costs down, we generated the prompt completions with the gpt-4-32k-0613 version of GPT-4, called through the Azure OpenAI API, and cleaned them prior to sending them to annotation along with the translated prompts. 3.3 Annotation We used a set of harm categories different than that of RTP, closer to that of the Azure Content Safety service.4 The harm categories are: \u2022 Bias \u2022 Identity attack \u2022 Insult \u2022 Microaggression \u2022 Self-harm \u2022 Sexual content \u2022 Toxicity \u2022 Violence See Appendix B for a description of each of the harm categories. Toxicity is scored in a five-point Likert scale, and the rest of the harms in a three-point Likert scale. We used ternary scales because it would leave less room of ambiguity for the LLMs and hence address the problem of overly optimistic LLM scoring, which is a known issue in multilingual scenarios (Hada et al., 2024). 3For example, replacing \u201cGeorge Washington\u201d with \u201cTouissant Louverture\u201d in Haitian French. 4https://learn.microsoft.com/en-us/azure/ai-services/content-safety/ 3 \fThe annotators were given an annotation rubric (Appendix B) and had an opportunity to run a test and ask questions prior to beginning the full annotation process. Each harm was annotated independently, and used Toxicity as an overall score of the prompt\u2019s (or completion\u2019s) toxicity. To avoid any potential conflicts with internal value systems, we also attached a copy of a uniform value system to be used when in doubt. This value system is also designed to mitigate some other limitations of working with toxic language in a global context\u2013risks which we discuss further in Section 7. 
That said, we encouraged the annotators to use their best judgement and only defer to the value system when in doubt. 3.4 Inter-Annotator Agreement (IAA) We measured IAA with weighted Cohen\u2019s \u03ba, or \u03baw, and observed relatively uniform agreement in the corpus (0.62 \u00b1 0.2 overall). The agreements breakdown can be found in Figure 5. We chose \u03baw because it takes into account the value of the ordinal, so broad differences in scoring (e.g., 1-3 versus 1-2) are encoded in this measure. To account for multiple annotators, we took pairwise IAA and averaged it out. 3.5 Ethics Translators and annotators were hired through an annotation services company. The transcreation was performed by a single professional translator, and they were paid at a varying rate depending on locale and seniority, between 19-54 USD/hr. The annotation was performed by three professional annotators, paid at a varying rate, likewise between 1046.5 USD/hr. A best effort was made to ensure representation in the annotation pool. From the optional exit survey, 52% themselves as female (r. 35% male); 6% identified as LGBTQ+; and 12% identified as neurodivergent or maybe neurodivergent. In terms of leaning, for politics 54% of the respondents identified themselves as center-leaning (versus 22% left; 5% right). 20% of the respondents considered religion to be between important to extremely important to them, while 34% answered not important at all (32% somewhat important). Given that the English subset of RTPLX dealt with sports in addition to religion and politics, we asked how important was sports to them. 39% of the respondents ranked sports between important to extremely important, while 19% said that it was not important at all and 28% mentioned it as somewhat important. The response rate for the survey was 88%. Prior to starting the work, we requested that the translators and annotators ensured that they took breaks and prioritized their own well-being. 4 Methodology 4.1 Models Evaluated We evaluated eight models: two Llama-2 variants (Touvron et al., 2023), Llama Guard (Inan et al., 2023), two Gemma variants,5 as well as GPT-4 and Mistral (Jiang et al., 2023). All models were called through the Huggingface API, except GPT-4, which was called through the Azure OpenAI API. \u2022 GPT-4 is a closed-source generative model leading in multiple open benchmarks. We used the model variant gpt-4-turbo-2024-04-09. This model has been explicitly noted to have multilingual capabilities, and has shown good performance in various multilingual benchmarks. \u2022 Llama-2 is an open source LLM. We use Llama-2-7b and Llama-2-13b versions. Although the original paper does mention a multilingual training corpus, the model was evaluated only in English and the authors explicitly mention that non-English users are out of scope.6 \u2022 Llama Guard is a model based on Llama-2, designed to classify content safety. We report results for the instruction-tuned LlamaGuard-7b model. While not explicitly mentioned, we assume Llama Guard to be English-only. \u2022 Gemma is a model available in 2B and 7B parameters. We report results for the instruction-tuned (gemma-2b-it and gemma-7b-it) variants. This model has not been explicitly been mentioned to be multilingual, but the authors do indicate the training data to be \u201cprimarily English-language content\u201d7 \u2022 Mistral is a 7B parameter model. We utilize the latest version, Mistral-7B-Instruct-v0.2, which is instruction-pretrained version. 
This model is not explicitly mentioned to be multilingual. 5https://ai.google.dev/gemma 6https://huggingface.co/meta-llama/Llama-2-7b, accessed 20 April 2024. 7https://huggingface.co/google/gemma-2b, accessed 20 April 2024. 4 \fIn addition to the S/LLMs above, we perform evaluations with two non-S/LLMs solutions, namely Azure Content Safety service (ACS) and FLORES\u2019 Toxicity-200 dataset NLLB Team et al. (2022). We use these for baselining purposes, in addition to RTP-LX\u2019 own English subset. \u2022 ACS is a content moderator provided by Azure. It contains a text API that returns a subset of the harms evaluated in RTP-LX (identity attack, violence, self-harm, and sexual content) in a scale from 1-10. It explicitly supports the languages from our corpus, although it is mentioned that has only been evaluated in a smaller subset. We evaluated this API in February 2024. \u2022 FLORES\u2019 Toxicity-200 is a collection of carefully-collected frequent words and phrases that are considered toxic. It is human-translated and covers all the languages for RTP-LX. It also includes a finer-grained dialectal distinction than what is present in our corpus. In this paper we consider it a baseline via exact match. 4.2 Experimental Setup We evaluate the RTP-LX prompts across on all 28 languages for the metrics described below. We assign labels based on the human annotations (majority vote of 3; average otherwise), and compare these with the S/LLMs\u2019 output. All the open-source models are run on a single Nvidia A100 80GB GPU. For prompting the model, we used a slightly modified annotation rubric that included exemplars, and was further modified according to each model\u2019s recommended formatting (e.g., using chat syntax or special anchors like <s>[INST]). The prompt can be found in Figure 4. We created a manual parser to extract the different toxicity scores and account for pathologies common in some models, such as the boilerplate output from GPT-4 (\u201cOne possible answer is...\u201d). 4.2.1 Dataset setup The completions for RTP-LX are GPT-4-generated and are extremely toxic. Hence, to better understand the performance of the models under human-generated content, we limited ourselves to evaluate the toxicity of the prompts. We split RTP-LX in three partitions: the transcreated set (the original RTP prompts with its corresponding transcreations); the manual set (culturally-specific hand-crafted prompts for every language); and the entire corpus. The transcreated set allows us to have machine-translation free comparability across languages, while the manual subset allows us to evaluate the performance of the model on culturally-sensitive issues. 4.2.2 Metrics To measure the performance of the S/LLM evaluators, we use two metrics: we compare exact label match, or interrater reliability (i.e., accuracy) with respect to the aggregated human label, and the difference between IAA scores for human-human agreement and human-S/LLM agreement. Accuracy would provide a raw, interpretable number, but would not account for multi-class imbalance (i.e., a \u201clazy learner\u201d could guess one label and score well), or the tendency of the S/LLMs to score inputs as more or less harmful than human judges would. 4.2.3 Baselines For baselines we use the FLORES Toxicity-200 block list to calculate exact match scores with the toxic prompts. If any block word is present in a prompt, we flag it. 
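A minimal sketch of the Toxicity-200 block-list baseline described above, assuming the block-list entries for a language have already been loaded into a list of strings (loading and language selection are omitted and depend on the FLORES release):

def is_blocked(prompt, block_list):
    """Flag a prompt if any block-list word or phrase appears in it."""
    text = prompt.lower()
    return any(entry.lower() in text for entry in block_list)

def block_rate(prompts, block_list):
    """Fraction of prompts flagged by the block list."""
    if not prompts:
        return 0.0
    return sum(is_blocked(p, block_list) for p in prompts) / len(prompts)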
We also report the class distribution of the scores given by humans for each metric, and the performance of ACS on its relevant harm categories. 5 Results 5.1 Baseline: FLORES\u2019 Exact Match Block Rate The results of our exact-match experiment for FLORES\u2019 Toxicity-200 block list is in Figure 1. When a word or phrase would be in the prompt, we would consider it blocked. This experiment served to baseline explicit toxicity of RTPLX when compared to more subtle content, and allowed us to determine to what extent this dataset relied on lexical features as opposed to semantic features. Overall, the block list had a 43.4 \u00b1 0.1% block rate across all languages and partitions, with Japanese being the lowest (12%) and Italian the highest (57%). In general, the manual subset had a much lower (\u221227% average) block rate when compared to the transcreated subset. This suggests that the models, on average, should consider 43% of the corpus with a label denoting some toxicity. 5 \fFigure 1: Exact-match block rates as a baseline, when calculated using FLORES\u2019 Toxicity-200 block list across 28 languages for each partition of RTP-LX (transcreated, manual prompts, and the full corpus). FLORES had an average 43.4 \u00b1 0.1% block rate across all languages and partitions. The manual subset had a much lower (\u221227% average) block rate when compared to the transcreated subset. This suggests that the models, on average, should consider 43% of the corpus with a label denoting at least some toxicity. Note that English does not have a manual corpus. 5.2 Accuracy and Weighted Cohen\u2019s \u03baw Our main experiment evaluated the performance of the S/LLMs as automated labelers in RTP-LX. We compared the models\u2019 output with the aggregated human scores in terms of accuracy and Cohen\u2019s \u03baw (Figure 2). In terms of accuracy, GPT-4 Turbo outperformed all other S/LLMs, closely followed by Gemma 7B. However, it lost its edge in the manual dataset when compared to Gemma 7B and Mistral. The lowest performing models were Gemma 2B and Llama-2 7B. ACS outperformed all other approaches, but it was only evaluated over half of the harm categories. We evaluated IAA to factor in class imbalance, since a lazy learner could always output the same label. When looking at mean weighted Cohen\u2019s \u03baw we found that the models were not exactly optimistic (as described in Hada et al. (2024)); instead they were prone to output higher-valued labels. In the original work, higher labels are considered better (hence the optimistic label); but in RTP-LX lower-valued labels are better. Detailed results of this observation are in Appendix E.2, and a full IAA analysis in Appendix E. When looking at the class breakdown (Figure 3) we found that speaking the models were adept at detecting violent and sexual content, as well as insults. However, comparatively subtle discourse, such as microaggressions, bias, and identity attacks, were not easily detectable by any of the models. This observation is further reinforced by noting that although the human-authored labels are relatively even in terms of agreement across all categories, the model\u2019s agreement is not, with an overall noticeable skewness towards not detecting microagressions or overall toxicity. 
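The weighted Cohen's kappa used for both the human IAA (Section 3.4) and the human-model comparison above can be approximated as follows; this is a sketch using scikit-learn with pairwise averaging across raters, and the choice of linear weights is an assumption since the paper does not state which weighting it used:

from itertools import combinations
from statistics import mean
from sklearn.metrics import cohen_kappa_score

def pairwise_weighted_kappa(ratings_by_rater):
    """Average weighted Cohen's kappa over all rater pairs.

    Each inner list holds one rater's ordinal labels for the same items.
    """
    pairs = combinations(range(len(ratings_by_rater)), 2)
    return mean(
        cohen_kappa_score(ratings_by_rater[i], ratings_by_rater[j], weights="linear")
        for i, j in pairs
    )

# Toy example: three raters scoring five prompts on the 1-5 Toxicity scale.
print(pairwise_weighted_kappa([[1, 3, 5, 2, 4], [1, 3, 4, 2, 4], [2, 3, 5, 2, 5]]))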
When looking at the per-category class distribution (Appendix E.2) we found that the models were very prone to output binary labels (no presence of the criterion, or explicit presence of the criterion; overlooking contextually harmful sentences), which suggested an additional source for the disagreement with human annotators. 6 Discussion When simply looking at accuracy as a metric of performance, the models do relatively well: all score above the 43% theoretical minimum from an exact-match approach such as FLORES\u2019 Toxicity-200. The S/LLMs have a relatively even performance, with GPT-4 Turbo and Gemma 7B at the lead. 6 \f(a) (b) Figure 2: Main results from our evaluation. We performed automated labeling of the prompt subset with the S/LLMs and compared their output with a majority vote (average otherwise) of the human scores. In terms of raw accuracy (left), GPT-4 Turbo outperformed all other S/LLMs, although was closely-followed by Gemma 7B. ACS outperformed all other approaches, but it is worth pointing out that ACS was only evaluated as the average of four, not eight, harm categories, and its agreement is lower than GPT-4\u2019s on these categories alone. Gemma 2B had much lower accuracy when compared to other models. When looking at mean weighted Cohen\u2019s \u03baw (right), it is clear that raw accuracy scoring is not a sufficient measure, due to RTP-LX\u2019s class imbalance\u2013a lazy learner could output always the same label and obtain a decent performance. In fact, that is what happened for some models, such as Gemma 2B. Detailed results of this observation are in Figures 8 and 9. However, this observation is misleading: when looking at the accuracy and per-class breakdown, we note that the models tend to not agree with human judgments. While this does typically correlate with accuracy in the sense that more accurate models have higher IAA, it also suggests that the S/LLMs performance per-category is not satisfactory. Indeed, when looking at the breakdown per category we observed that the models failed to correctly classify typically subtle-yet-harmful discourse such as microaggressions, bias, and identity attack. Concerningly, the holistic measure of toxicity in the models tended to be on the lower-agreement metrics. The analysis linked in Appendix E.2 shows that the models typically select higher-valued labels and often prefer outputting binary labels (either no presence of the harm criterion or explicitly presenting it, but not something that could be contextually construed as harmful). This all suggests that, although the models typically score well in accuracy-based measures, their accuracy alone does not imply that they are able to perform a reliable job as judges of toxicity. Moreover, they have difficulties in discerning harm in context-dependent scenarios, particularly when dealing with subtle content, such as microagressions or bias. 7 Limitations 7.1 RTP-LX RTP-LX is limited in two main ways. One is cultural skewness: the majority of this corpus comes from RTP, which comes from predominantly US English discourse. We have mitigated this by adding in the subset of hand-designed prompts, along with our instructions to the translators for transcreation. The other potential limitation is that RTP-LX does not have sufficient dialectal coverage. We encouraged translators and prompt authors to explore various dialects. More could be done in this area: certain languages (e.g. 
Arabic) vary massively across dialects; and others, like Spanish, vary in terms of homonyms so much that their holistic evaluation of toxicity is notoriously complex. It is also worth noting that RTP-LX covers mostly Indo-European (Germanic, Romance) languages. Further additions to the corpus will explore other families and expand on dialectal differences. However, increasing both the dialectal coverage and the cultural skewness of the corpus are likely to cause lower performance for S/LLM-based evaluators, not higher. 7 \fFigure 3: Breakdown of weighted Cohen\u2019s \u03ba breakdown per category for all models. Generally speaking the models were adept at detecting violent and sexual content, as well as insults. However, comparatively subtle discourse, such as microaggressions, bias, and identity attacks, were not easily detectable by any of the models. 7.2 Toxic Language in a Global Context The study of toxicity in a multilingual setting is difficult not only due to the scarcity of quality, human-generated corpora especially designed for this task; but also because of the constant evolution of language and its perception by native speakers. It is also worth noting that things that may be offensive to a native speaker in one geolocale may not be offensive to that in another. We mitigated this limitation by including a uniform value system, but encouraged the annotators to use their best judgment and only defer to the value system when in doubt. 7.3 S/LLMs LLMs are known to have data contamination issues that hamper fair evaluations. Although most of RTP-LX was hand-designed, there is no guarantee that our corpus will not be eventually used to train the models. We have adopted measures to protect the data against crawlers, while still leaving the data open to all. Additionally, S/LLMs undergo frequent updates, but we have specified the versions of the models we tested to ensure reproducibility. Finally, we did not evaluate fine-tuned models, which are likely to improve upon the numbers shown here. However, our experimental setup assumes low data availability, a common problem in the study of toxic language in NLP (Hartvigsen et al., 2022), and hence unfeasible fine-tuning capabilities. 8 Conclusion In this paper we attempted to answer the question of whether we can scale safety as fast as we scale S/LLMs, via the use of S/LLMs as annotators for toxic-language detection. To this end we introduced a human-annotated and human-transcreated corpus, RTP-LX, designed specifically to capture toxicity in a multilingual scenario. We evaluated seven S/LLMs in RTP-LX, and found that they are able to score highly under pure accuracy metrics, this performance did not imply that most S/LLMs could perform a reliable job as judges of toxicity. Instead, we attributed that high accuracy to class imbalance (the vast majority of the corpus is harmful) and noted that there was relatively low IAA between S/LLMs and human annotators. That said, GPT-4 Turbo was able to score within one standard deviation of human judgements. Additionally, we found two pathologies common to some, if not all, S/LLMs evaluated: a tendency to select highvalued labels, which in RTP-LX means \u201cextreme harm\u201d, and low agreement with humans in context-dependent, subtle8 \fyet-harmful content (e.g. microagressions, bias). Both pathologies imply that the deployment of unfinetuned S/LLMs as multilingual harm detectors are likely to cause further problems, such as erasure. 
Further work will involve expanding RTP-LX to more dialects and language families. We release it to the wider community, since we believe that this is a resource necessary to combat harmful and toxic content in S/LLMs in research and in production."
},
{
"url": "http://arxiv.org/abs/2404.13957v1",
"title": "How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO",
"abstract": "The role-play ability of Large Language Models (LLMs) has emerged as a\npopular research direction. However, existing studies focus on imitating\nwell-known public figures or fictional characters, overlooking the potential\nfor simulating ordinary individuals. Such an oversight limits the potential for\nadvancements in digital human clones and non-player characters in video games.\nTo bridge this gap, we introduce ECHO, an evaluative framework inspired by the\nTuring test. This framework engages the acquaintances of the target individuals\nto distinguish between human and machine-generated responses. Notably, our\nframework focuses on emulating average individuals rather than historical or\nfictional figures, presenting a unique advantage to apply the Turing Test. We\nevaluated three role-playing LLMs using ECHO, with GPT-3.5 and GPT-4 serving as\nfoundational models, alongside the online application GPTs from OpenAI. Our\nresults demonstrate that GPT-4 more effectively deceives human evaluators, and\nGPTs achieves a leading success rate of 48.3%. Furthermore, we investigated\nwhether LLMs could discern between human-generated and machine-generated texts.\nWhile GPT-4 can identify differences, it could not determine which texts were\nhuman-produced. Our code and results of reproducing the role-playing LLMs are\nmade publicly available via https://github.com/CUHK-ARISE/ECHO.",
"authors": "Man Tik Ng, Hui Tung Tse, Jen-tse Huang, Jingjing Li, Wenxuan Wang, Michael R. Lyu",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO",
"main_content": "Introduction Large Language Models (LLMs) have recently made significant breakthroughs in the field of Artificial Intelligence (AI). Notably, ChatGPT1, one of the leading commercial models, has showcased its capabilities across different Natural Language Processing (NLP) tasks, such as information retrieval (Zhu et al., 2023), computer programming (Surameery & Shakor, 2023), grammar checking (Wu et al., 2023), and sentence translation (Jiao et al., 2023). Trained on extensive datasets, LLMs also demonstrate applicability beyond NLP tasks, extending to domains such as healthcare (Johnson et al., 2023), education (Baidoo-Anu & Ansah, 2023), legal service (Guha et al., 2024), and product design (Lanzi & Loiacono, 2023). Given LLMs\u2019 extensive capabilities, researchers have explored their human-like abilities (Huang et al., 2024b; 2023) and their performance on complex tasks (Wan et al., 2024; Huang et al., 2024a). Role-playing, the act of changing one\u2019s behavior to fulfill a specific role, has been employed as a scenario to evaluate LLMs (Shanahan et al., 2023; Wang et al., 2023a) since it is a complicated task requiring various abilities. However, the evaluation of LLMs\u2019 role-playing ability remains relatively unexplored. Previous studies (Shao et al., 2023; Wang et al., 2023b) mainly focus on instructing LLMs to impersonate celebrities or fictional characters whose data are likely to be included in the training corpus of the LLMs. As a result, the ability of LLMs to role-play as typical individuals is not well understood, limiting our evaluation of their role-playing potential. This oversight could restrict the scope of *Equal contribution. \u2020Corresponding author. 1https://chat.openai.com/ 1 arXiv:2404.13957v1 [cs.CL] 22 Apr 2024 \fPreprint CR ED LG PH PS IP EM FP IS IT 20 40 60 80 RPP RoleGPT Juliet (a) GPT-3.5-Turbo Performance CR ED LG PH PS IP EM FP IS IT 20 40 60 80 RPP RoleGPT Juliet GPTs (b) GPT-4-Turbo and GPTs Performance Figure 1: Success rates of role-playing LLMs in deceiving human evaluators. The human evaluators are instructed to identify human-generated responses. assessing LLMs\u2019 role-playing capabilities and overlooking situations where LLMs could act as digital clones of humans, non-player characters in video games and metaverse, or, more concerningly, be used maliciously to impersonate individuals, spreading false information or damaging reputations. Addressing this gap, our study directs LLMs to emulate real, ordinary individuals instead of famous figures, leveraging the Turing test. As initially proposed by Turing (1950), this test gauges whether a machine can demonstrate intelligence indistinguishable from that of a human. In our study, we create a role-playing LLM using the profile of an actual person and invite acquaintances of this person to discern between responses from the actual individual and the LLM. Utilizing real-person data makes it possible to apply the Turing test and makes it easier to recruit annotators, which is advantageous over using profiles of well-known figures due to the accessibility of their acquaintances. However, a limitation arises in multi-round dialogues, where human evaluators can easily differentiate between LLMs and actual people by posing questions LLMs cannot answer, such as queries about the current time. This issue can shift evaluators\u2019 focus from assessing the LLMs\u2019 ability to think and act like the intended emulation target. 
To address this problem, we introduce a novel framework, ECHO, designed to specifically evaluate LLMs\u2019 proficiency in replicating a human\u2019s thought process within a particular domain. We evaluate four different role-playing methods, RoleGPT (Wang et al., 2023b), Juliet (Jones & Bergen, 2023), Role-Play Prompting (RPP) (Kong et al., 2023), and OpenAI\u2019s online application, GPTs (OpenAI, 2023). For the first three methods, we compare performance differences when utilizing GPT-3.5-Turbo versus GPT-4-Turbo. We collect the personal data of ten unique participants for instructing each method to role-play these individuals. Subsequently, we pose ten types of questions from various aspects to both the target participant and the role-playing LLMs. Each participant then invites their acquaintances to identify which responses they believe are written by the actual individual. Results indicate that the most effective role-playing method, the GPTs, achieved a 48.3% success rate in deceiving acquaintances. Moreover, we explore whether LLMs can discern between human and machine-generated responses. We instruct GPT-4, GPT-4-Turbo, and GeminiPro to discern between texts. Results show that GPT-4 can identify differences but could not determine which texts were human-produced. The contribution of this paper can be summarized as: 1. We propose ECHO, the first framework to conduct Turing tests on role-playing LLMs, which can effectively compare different role-playing methods. 2 \fPreprint 2. We conduct extensive experiments on ten participants, including constructing roleplaying LLMs with their profiles and inviting their acquaintances to discern between responses produced by LLMs and the actual individual. 3. We delve into LLMs\u2019 potential as evaluators in identifying human versus machinegenerated texts, addressing concerns about biases that might influence their judgment. 2 Related Work 2.1 Role-Playing LLMs Recent advancements in AI have led to an increased interest in the role-playing capabilities of LLMs, a field exploring how LLMs adopt and sustain specific characters or personas within conversational contexts. Studies examine LLMs\u2019 inherent ability to role-play and evaluate their consistency in depicting assigned roles, offering insights into their adaptability and versatility in dynamic interactions (Shanahan et al., 2023). Specialized frameworks such as RoleLLM (Wang et al., 2023b) and CharacterLLM (Shao et al., 2023) aim to benchmark or enhance these capabilities, while research by Kong et al. (2023) focuses on improving LLMs\u2019 zero-shot reasoning in various personas. Additional investigations, including CharacterGLM (Zhou et al., 2023) and ChatHaruhi (Li et al., 2023), extend role-playing studies to cultural and entertainment contexts, demonstrating LLMs\u2019 ability to animate fictional characters and engage with Chinese cultural themes, thereby illustrating their creative potential across diverse scenarios. Furthermore, platforms like character.ai2 provide innovative environments where users can interact with AI-generated characters, each exhibiting unique personalities and histories. OpenAI\u2019s GPTs (OpenAI, 2023) enable users to customize and utilize tailored GPT models for specific applications such as role-playing. 2.2 Turing Tests for LLMs The Turing Test, a foundational concept in AI history, initially assessed AI capabilities through text-based interactions, determining whether a judge is conversing with a human or a machine (Turing, 1950). 
The development of LLMs has expanded the scope. Jannai et al. (2023) executes a large-scale, global online Turing Test, challenging participants to distinguish between an LLM and a human during a two-minute conversation, with LLMs passing approximately 40% of the time. Furthermore, the TURINGBENCH framework (Uchendu et al., 2021) provides a systematic platform for evaluating the indistinguishability of LLM responses from those of humans, reflecting both advancements and limitations of current models. Similarly, Jones & Bergen (2023) explores a modified approach where an interrogator interacts with a single respondent to assess their human or AI identity, with a GPT-4 prompt passing 41 games. Sejnowski (2023) suggests that reverse Turing tests involving LLMs can yield insights into human cognitive dynamics rather than just the artificial nature of LLMs. Elkins & Chun (2020) demonstrates GPT-3\u2019s ability to emulate well-known authors\u2019 writing styles and themes, underscoring its potential in creative domains such as journalism and novel writing. Despite these advances, challenges persist. For example, LLMs often reveal their non-human nature when directly queried, reflecting their honesty-oriented programming. Moreover, experiments frequently place LLMs in ambiguous roles rather than directly imitating real individuals. Our research addresses these issues by focusing on the capability of LLMs to accurately replicate specific personalities, thereby providing a more nuanced assessment of their mimicry skills. 3 ECHO Design and Implementation ECHO is a human evaluation framework based on the Turing Test designed to assess the role-playing abilities of various LLMs. It consists of three phases: construction of role2https://character.ai/ 3 \fPreprint Humans Personal Information Role-Play LLMs Acquaintances Philosophical Questions: Is it possible to have knowledge without evidence? Please explain your reasoning. (a) Constructing Role-Play LLMs (b) Question Answering (c) Human Evaluation Which one of the answer do you think is written by your friend? A1: I think it is impossible, 'knowledge' needs ... A2: idk maybe, but sounds like guessing to me A3: Knowledge without evidence may exist in ... A4:\u00a0... A3! A2? Figure 2: An illustration of the design of ECHO. playing LLMs, collection of responses from machines and humans, and execution of human evaluations. The framework is depicted in Fig. 2. 3.1 Constructing Role-Play LLMs The first challenge involves supplementing LLMs with sufficient personal data to accurately simulate certain individuals whose specific information is absent from the training corpus. Our objective is to enable LLMs to capture and reflect the individual\u2019s personality, experiences, and communication styles, thereby producing responses that authentically represent the individual\u2019s character and cognitive processes. To achieve this, we propose the following categories for collecting comprehensive background information: \u2022 Background and Interests: Education, Professional Background, Interests, and Hobbies. \u2022 Personal Identity: Personality Traits, Values, Beliefs, and Memorable Life Experiences. \u2022 Cultural Preferences: Favorite Books, Movies, and Music. \u2022 Cognition and Social Dynamics: Style in Problem-Solving, Communication, Social Interaction, Writing, and Speaking. 
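One way to organize this information before prompting a role-playing model is sketched below; the field names and the serialization helper are our own illustration, not part of ECHO's questionnaire or code:

```python
# Illustrative sketch only: a structured record for the collected profile data
# before it is fed to a role-playing LLM. Field names are ours, not ECHO's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ParticipantProfile:
    # Background and Interests
    education: str = ""
    professional_background: str = ""
    interests_and_hobbies: List[str] = field(default_factory=list)
    # Personal Identity
    personality_traits: List[str] = field(default_factory=list)
    values_and_beliefs: str = ""
    memorable_experiences: List[str] = field(default_factory=list)
    # Cultural Preferences
    favorite_books: List[str] = field(default_factory=list)
    favorite_movies: List[str] = field(default_factory=list)
    favorite_music: List[str] = field(default_factory=list)
    # Cognition and Social Dynamics
    problem_solving_style: str = ""
    communication_style: str = ""
    writing_and_speaking_style: str = ""

    def to_system_prompt(self) -> str:
        """Serialize the non-empty fields into a plain-text role-play prompt."""
        lines = [f"{name}: {value}" for name, value in vars(self).items() if value]
        return "You are role-playing the following person.\n" + "\n".join(lines)
```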
The four categories provide a comprehensive framework by including both stable and dynamic aspects of an individual\u2019s profile, from demographic details to psychological traits. Additionally, by covering a wide spectrum from personal experiences to social behaviors, these categories enable the model to engage effectively across diverse cultural and social environments. We designed a questionnaire to include these four distinct aspects, comprising a total of ten questions. Details are provided in \u00a7A of the appendix. Participants are required to answer all questions completely and substantiate their responses, ensuring comprehensive and credible data collection. To enhance data quality, responses that do not adhere to our guidelines are manually reviewed and may be excluded to maintain data integrity. Subsequently, the data are input into role-playing LLMs to simulate each participant\u2019s behavior. 3.2 Collecting Responses To prevent evaluators from posing questions that could directly reveal whether a response originates from a machine (e.g., inquiries about the current time), we gathered responses from both humans and LLMs in advance using a set standard of questions. Both participants and their corresponding role-playing LLMs provided answers to the same questions. The responses are anonymized for the human evaluation phase. 4 \fPreprint Question Types This study categorizes questions into two primary dimensions: general and specific. General questions address broader themes, while specific questions delve into individual attributes informed by personal background information. General questions are further categorized into five sub-classes: \u2022 Creativity Questions (CR): Questions that require the generation of original ideas or the envisioning of scenarios by modifying or expanding existing concepts. \u2022 Ethical Dilemmas Questions (ED): Questions that compel respondents to reflect on and articulate their moral judgments in scenarios characterized by moral ambiguity or conflict. \u2022 Logical Questions (LG): Questions designed to evaluate an individual\u2019s capacity for structured, coherent, and logical thinking. \u2022 Philosophical Questions (PH): Questions that probe into profound, often abstract notions concerning human existence, ethics, knowledge, and reality. \u2022 Problem-Solving Questions (PS): Questions that demand analytical thinking and the formulation of practical solutions to hypothetical or real-world problems. Similarly, specific questions consist of the following five sub-dimensions: \u2022 In-Depth Personal Questions (IP): Questions that probe into an individual\u2019s personal experiences and reflections to understand their character, motivations, and life trajectory. \u2022 Emotional Questions (EM): inquiries that examine how individuals experience, manage, and interpret their emotions across different scenarios. \u2022 Future Prediction Questions (FP): Questions that prompt individuals to express their future aspirations, predictions, or plans, both personal and professional. \u2022 Insightful Questions (IS): Questions that invite individuals to share their unique insights or understanding on a specific subject or experience. \u2022 Interest Questions (IT): Questions that investigate how personal interests, hobbies, or passions influence an individual\u2019s perspectives, experiences, or future goals. 
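For reference, the ten categories and the per-participant sampling of five general plus five specific questions described below can be written compactly as follows (a sketch with illustrative names, not the ECHO implementation):

```python
# Illustrative encoding of the ten question categories and of the
# "five general + five specific questions per participant" sampling step.
import random

GENERAL = {
    "CR": "Creativity",
    "ED": "Ethical Dilemmas",
    "LG": "Logical",
    "PH": "Philosophical",
    "PS": "Problem-Solving",
}
SPECIFIC = {
    "IP": "In-Depth Personal",
    "EM": "Emotional",
    "FP": "Future Prediction",
    "IS": "Insightful",
    "IT": "Interest",
}

def sample_questions(general_bank, specific_bank, rng=random):
    """Sample 5 general and 5 specific questions for one participant.

    Each bank maps a category code (e.g. "CR") to a list of candidate
    questions generated in advance (e.g. by GPT-4)."""
    general_pool = [(code, q) for code, qs in general_bank.items() for q in qs]
    specific_pool = [(code, q) for code, qs in specific_bank.items() for q in qs]
    return rng.sample(general_pool, 5) + rng.sample(specific_pool, 5)
```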
The sub-categories are developed based on two primary sources: (1) a survey conducted on social media to identify question types effective in differentiating between a natural person and a language model; (2) a review of existing literature that focuses on distinguishing real individuals from language models by posing general inquiries about daily activities and emotions (Jones & Bergen, 2023). This classification ensures a comprehensive assessment of individual capabilities and perspectives by including diverse question types, ranging from logical reasoning to emotional understanding. Question Generation For general inquiries that do not require knowledge of participants\u2019 backgrounds, we utilize GPT-4 to generate five questions per category. For inquiries specific to participants\u2019 backgrounds, GPT-4 is instructed to produce five tailored questions for each participant. Each participant receives a total of ten questions\u2014five specific and five general\u2014randomly selected from a predefined set to facilitate a comprehensive evaluation across various baselines. A challenge in our design is that GPT-4 generates overly specific questions tailored to individual backgrounds, resulting in complexities that both participants and evaluators find challenging to comprehend, thereby hindering the evaluation process. For example, questions on specialized topics like gut microbiota in human health often surpass participants\u2019 general knowledge. To mitigate this issue, we introduce a selective filtering process aimed at ensuring that questions correspond to the participants\u2019 general knowledge level yet remain relevant to their unique experiences and knowledge. This approach adjusts the questions to be understandable and representative of each participant\u2019s background, thus excluding excessively specific inquiries from the analysis. 3.3 Conducting Human Evaluation We conduct human evaluations by having acquaintances of each participant review anonymized responses to determine whether they are generated by humans or machines. 5 \fPreprint Each evaluator is presented with ten pairs of responses, each containing one response from the actual participant and one from a random role-playing LLM. Evaluators are instructed to assess the tone, thought process, and identification accuracy of the responses to identify human-generated responses. Additionally, we pre-processed responses to eliminate syntactical biases that could affect evaluations. This included normalizing capitalization, spacing between words, and correcting misspelled words. Such pre-processing ensures that evaluations are based on the authenticity and coherence of the content rather than superficial textual patterns. Consequently, this approach aimed to provide a fair assessment based on the intrinsic quality of the ideas and thoughts expressed in the responses. The effectiveness of LLMs in simulating humans is quantified by the success rate of deceiving evaluators. It is defined as the proportion of instances in which human evaluators select an LLM-generated response over that of an actual participant. It is noteworthy that the baseline for random guessing is 50%. A success rate substantially higher than this baseline, such as 90%, indicates that evaluators can effectively distinguish between human and LLM responses, suggesting that the LLM fails to convincingly simulate a human participant. 
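A minimal sketch (ours, not ECHO's code) of how this deception success rate is computed from the evaluators' choices; the data format is illustrative:

```python
# Deception success rate: fraction of response pairs in which an evaluator
# picked the LLM-written answer as the one written by their acquaintance.
# 50% corresponds to random guessing.
def deception_success_rate(evaluations):
    """`evaluations` is a list of dicts like
    {"picked": "A2", "llm_answer": "A2"} -- one entry per evaluated pair."""
    hits = sum(e["picked"] == e["llm_answer"] for e in evaluations)
    return hits / len(evaluations)

# Example: the evaluator is fooled in 3 of 10 pairs -> 30% success rate.
demo = ([{"picked": "A1", "llm_answer": "A2"}] * 7
        + [{"picked": "A2", "llm_answer": "A2"}] * 3)
print(f"{deception_success_rate(demo):.0%}")  # 30%
```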
Conversely, a success rate closer to 50% indicates a greater difficulty for evaluators in differentiating between the two, signifying a more human-like performance by the LLM. 4 Experiments Baseline Methods We evaluate four widely used role-playing methods: \u2022 RoleGPT (Wang et al., 2023b): This method improves role-playing in LLMs through a four-stage process: constructing role profiles for 100 roles, extracting knowledge through context-based instructions, imitating style with GPT role prompting, and tuning with role-conditioned instructions. \u2022 Role-Play Prompting (RPP) (Kong et al., 2023): This approach enhances zero-shot reasoning in LLMs by using role-play prompting to assume various personas. It involves sampling multiple role-feedback prompts and selecting the most effective one for reasoning tasks, serving as an implicit Chain-of-Thought facilitator. \u2022 Juliet (Jones & Bergen, 2023): This study assesses GPT-4\u2019s ability to pass the Turing Test in online interactions by testing 25 LLM witnesses, including GPT-3.5 and GPT-4, with human participants. We select one of their open-sourced prompt named Juliet. \u2022 GPTs (OpenAI, 2023): A new feature by OpenAI that enables the creation of custom ChatGPT applications for specific tasks using natural language. These applications are shareable via links or through the GPT store. We select one tailored for persona imitation for our study. We employ GPT-3.5-Turbo and GPT-4-Turbo as the foundation models for all methods except GPTs, resulting in seven baselines in total. Due to the unavailability of some baselines, we reproduce their approaches using LangChain3 for a comprehensive evaluation across models. Implementation details are provided in \u00a7B of the appendix. Human Participants We recruit ten participants from diverse backgrounds for our evaluation. Additionally, a minimum of seven acquaintances per participant are included to ensure that responses of all baselines are evaluated. Data collection and management are conducted using Google Forms.4 4.1 Results Table 1 presents the success rates of various role-playing baselines in deceiving human evaluators, detailing these rates across different question types. 3https://www.langchain.com/ 4https://www.google.com/forms/about/ 6 \fPreprint Table 1: Success rates of role-playing LLMs in deceiving human evaluators. The human evaluators are instructed to identify human-generated responses. The highest numbers are marked in bold, while the numbers closest to 50% are underlined. Success Rate (%) GPT-3.5-Turbo GPT-4-Turbo GPTs Overall RPP RoleGPT Juliet RPP RoleGPT Juliet Creativity 40.0 53.3 31.3 26.1 37.0 37.5 47.8 39.0 Ethical Dilemmas 43.5 30.0 44.4 38.9 27.3 44.4 47.8 39.5 Logical 23.5 50.0 36.4 42.1 47.6 47.1 41.7 41.2 Philosophical 26.7 38.9 43.5 44.0 28.0 40.9 34.8 36.7 Problem Solving 17.4 23.3 34.8 46.2 46.7 48.0 54.6 38.7 In-Depth Personal 42.1 45.2 40.0 35.0 83.3 41.7 56.0 49.0 Emotional 44.4 57.9 22.2 66.7 25.0 55.6 45.8 45.4 Future Prediction 38.9 59.1 37.5 60.0 50.0 50.0 50.0 49.4 Insightful 50.0 34.8 61.5 45.0 50.0 35.5 50.0 46.7 Interest 48.0 41.7 30.0 66.7 22.7 33.3 53.9 42.3 Overall 37.5 43.4 38.2 47.1 41.8 43.4 48.2 42.8 Across Baselines GPTs generally outperforms other baselines across various question types. It achieves not only the highest success rates but also rates closest to 50%, making it hard for human evaluators to distinguish between its outputs and human outputs. 
This effectiveness likely stems from GPTs\u2019 capability to incorporate enriched personal information into responses. This method proves more precise than traditional human imitation techniques, emphasizing the importance of specificity in role-playing scenarios. Furthermore, transitioning from GPT-3.5-Turbo to GPT-4-Turbo has markedly enhanced role-playing ability. GPT-4-Turbo demonstrates a superior ability to replicate individual writing and cognitive styles, particularly within the RPP and Juliet frameworks. Conversely, RoleGPT shows diminished performance following the upgrade, likely due to a tendency towards overly casual or dramatic outputs, which undermines the authenticity of its imitations. This finding suggests that GPT-4-Turbo\u2019s intricate understanding may lead to stylistic over-emphasis, affecting perceived authenticity. Across Question Types The analysis of success rates among different question types reveals the comparative strengths and weaknesses of GPT-3.5-Turbo and GPT-4-Turbo. As the foundational model, GPT-3.5-Turbo exhibits limitations. Juliet underperforms in emotional questions, while RPP and RoleGPT struggle with logical and problem-solving questions. This finding suggests a lack in nuanced emotional understanding and complex logical processing in GPT-3.5-Turbo. The transition to GPT-4-Turbo brings about significant improvements in specific areas. For instance, RoleGPT achieves an 80% success rate in In-Depth Personal questions\u2014the highest observed rate\u2014while RPP exceeds 60% in three specific question categories, demonstrating the targeted enhancements in these domains. However, this targeted improvement raises valid concerns about potential over-specialization. While it enhances performance in specific areas, it could compromise the models\u2019 ability to handle broader queries, a factor that needs to be carefully considered. Both Juliet and GPTs demonstrate relatively balanced performances across various question types, with GPTs notably outperforming Juliet. The trend towards better performance on specific rather than general questions aligns with the models\u2019 design objectives, indicating a higher efficacy in generating detailed, tailored responses over broad, abstract topics. General questions, especially those related to Philosophy and Problem-Solving questions, present challenges due to their abstract nature and the demand for definitive answers, pushing the limits of LLMs\u2019 capabilities in data-driven reasoning toward domains that require speculative or creative problem-solving. This finding results in a noticeable disparity between human and LLM-generated responses, as LLMs may lack the creative or interdisciplinary thinking required for such questions. 7 \fPreprint Table 2: Success rates of role-playing LLMs in deceiving evaluator LLMs. The evaluator LLMs are instructed to identify human-generated responses. Success Rate (%) GPT-3.5-Turbo GPT-4-Turbo GPTs Overall RPP RoleGPT Juliet RPP RoleGPT Juliet Control Model 86.0 78.0 67.0 95.0 31.0 5.0 78.0 62.9 GPT-4 85.3 92.3 88.3 63.7 93.0 91.3 95.7 91.4 GPT-4-Turbo 95.0 94.0 95.3 95.7 99.0 98.0 98.3 96.5 Gemini-1.0-Pro 52.7 52.7 62.7 56.3 60.7 58.3 54.0 56.8 Table 3: Success rates of role-playing LLMs in deceiving evaluator LLMs. The evaluator LLMs are instructed to identify non-human-generated responses. 
Success Rate (%) GPT-3.5-Turbo GPT-4-Turbo GPTs Overall RPP RoleGPT Juliet RPP RoleGPT Juliet Control Model 14.0 22.0 33.0 5.0 69.0 95.0 22.0 37.1 GPT-4 25.7 24.7 26.0 25.7 29.0 52.3 11.7 27.9 GPT-4-Turbo 61.7 62.7 53.3 34.3 60.0 58.0 62.3 56.5 Gemini-1.0-Pro 51.0 49.0 42.3 48.7 54.3 50.0 48.7 41.0 5 LLMs as Evaluators LLM-based evaluators have demonstrated their potential in identifying text quality (Desmond et al., 2024; Chan et al., 2023). Despite concerns regarding positional and length biases that may favor longer responses (Zheng et al., 2024) or influence judgments based on response order (Zhao et al., 2021), recent findings indicate these biases are minimal in advanced models such as GPT-4-Turbo (Chen et al., 2024). Our study further explores the capability of LLMs as evaluators to discern between human and machine-generated texts. 5.1 Methodology We utilize three language models\u2014GPT-4, GPT-4-Turbo, and Gemini-1.0-Pro\u2014all configured with a temperature setting of zero. Each model is tested using a dataset comprising each participant\u2019s background information and ten pairs of responses. Each pair corresponds to a question and consists of one human-generated answer and one randomly generated answer from a language model. We elaborate on the detailed process for this evaluation and the prompts for the evaluator LLMs in \u00a7C of the appendix. We create a Control Model that always selects the longer answer to investigate the presence of length biases in LLM evaluations. A close comparison of success rates between this control and the LLM evaluators would indicate a significant length bias in the models, marked by a preference for lengthier responses. To mitigate potential positional biases, we introduce a two-fold approach: (1) randomizing the order of answer presentation within each question-answer pair and (2) conducting multiple rounds of evaluation with the same question set to determine an average success rate. To further assess potential biases in LLMs towards specific instructions, we not only instructed the LLMs to select responses likely to be produced by humans but also required them to choose responses generated from language models. If the LLMs exhibit no bias, the accuracies across these two conditions should be approximately the same. 5.2 Results Analysis The results of instructing LLMs to identify human-generated and non-human-generated responses are shown in Table 2 and Table 3, respectively. As discussed before, success rates 8 \fPreprint that significantly deviate from 50%, which is expected in random guessing, indicate the LLM\u2019s capability to distinguish between human and machine-generated text. Across Models GPT-4 and GPT-4-Turbo effectively distinguish between LLM and human-generated texts, albeit choosing completely opposite answers. As illustrated in Table 2, both models show proficiency in this differentiation, with success rates for all role-playing LLMs exceeding 90%. In other words, GPT-4 and GPT-4-Turbo demonstrate a consistent inclination to identify LLM-generated responses as human-generated. In contrast, Gemini-1.0-Pro performs comparably to random guessing. This finding suggests that GPT models may prefer texts produced by similar models. The underlying cause is likely a model-specific bias towards its own text generation patterns. Instruction Bias Our analysis reveals a pronounced bias in GPT models, as evidenced by discrepancies between the results from Table 2 and Table 3. 
Note that unbiased models should exhibit comparable accuracy in these two settings. In both scenarios, Gemini-1.0Pro demonstrates accuracy akin to random guessing, suggesting it is free of bias toward the instruction. However, GPT models display significant variances in their capacity to differentiate human from machine-generated responses. Specifically, GPT-4 shows a more significant disparity (63.5%) compared to GPT-4-Turbo (40%). This finding suggests that GPT models are generally more adept at identifying machine-generated content. We believe that the concept of \u201chuman-generated\u201d responses is inherently more ambiguous and abstract for GPT models, whereas \u201cmachine-generated\u201d content is more clearly defined and understood. Length Bias Tables 2 and 3 present the success rates of the control model in the two settings to examine the length bias. By comparing them to the success rates of the evaluator LLMs, we find no significant correlation between any LLMs and the control model, suggesting that length bias minimally impacts model selections. This observation is consistent with the findings reported in Chen et al. (2024). 6 Conclusion Conclusion This paper introduces ECHO, a framework designed to assess the role-playing capabilities of LLMs in simulating ordinary individuals, utilizing the Turing test methodology. Our evaluation includes ten target participants and seven baseline models, yielding over 800 responses. Analysis of human evaluation data reveals that: (1) Among the four role-playing approaches, GPTs performs better in accurately role-playing target individuals. (2) GPT-4 exhibits enhanced role-playing capabilities compared to GPT-3.5. Moreover, this study investigates the potential of LLMs to function as unbiased evaluators, examining the influence of inherent biases on their accuracy. The results suggest that GPT models may prefer texts generated by similar models. Limitations Our study has several limitations. A primary limitation is that the background information categories may not adequately capture the complexities of a person\u2019s identity, experiences, and communication nuances. This inadequacy can result in responses from LLMs that lack authenticity. The second concern is that restricting evaluators to those familiar with the target individual may limit the size and diversity of the evaluation team, potentially compromising the objectivity and breadth of assessments. Including evaluators who are not previously acquainted with the individuals but are informed about their backgrounds could enhance our understanding of LLMs\u2019 imitative accuracy. The third threat concerns the difficulty of LLMs in capturing the unique behavioral quirks and subtle communication nuances that characterize human interaction. This challenge is particularly pronounced in short interactions, where LLMs fail to replicate the full complexity of human language, emotional depth, and cultural nuances. 9 \fPreprint Ethics Statement Data Protection Since this study employs LLMs to simulate real individuals, we adhere to rigorous ethical guidelines to protect participant privacy and maintain the integrity of AI research. We have ensured the privacy and anonymity of all participants by treating personal data and identifiable information, such as background files, with strict confidentiality. We constructed local role-playing LLMs without transferring any personal data to third parties. 
Furthermore, all data, including responses from human participants and simulations generated by the role-playing LLMs, will be deleted six months after our study\u2019s publication. Informed Consent Additionally, participants are fully informed with comprehensive information about the study\u2019s objectives and the specific use of their data in generating roles, answers, and evaluations. Informed consent was explicitly obtained, with provisions allowing participants to withdraw at any time without consequences."
},
{
"url": "http://arxiv.org/abs/2404.13798v1",
"title": "Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation",
"abstract": "Conditional independence (CI) constraints are critical for defining and\nevaluating fairness in machine learning, as well as for learning unconfounded\nor causal representations. Traditional methods for ensuring fairness either\nblindly learn invariant features with respect to a protected variable (e.g.,\nrace when classifying sex from face images) or enforce CI relative to the\nprotected attribute only on the model output (e.g., the sex label). Neither of\nthese methods are effective in enforcing CI in high-dimensional feature spaces.\nIn this paper, we focus on a nascent approach characterizing the CI constraint\nin terms of two Jensen-Shannon divergence terms, and we extend it to\nhigh-dimensional feature spaces using a novel dynamic sampling strategy. In\ndoing so, we introduce a new training paradigm that can be applied to any\nencoder architecture. We are able to enforce conditional independence of the\ndiffusion autoencoder latent representation with respect to any protected\nattribute under the equalized odds constraint and show that this approach\nenables causal image generation with controllable latent spaces. Our\nexperimental results demonstrate that our approach can achieve high accuracy on\ndownstream tasks while upholding equality of odds.",
"authors": "Jensen Hwa, Qingyu Zhao, Aditya Lahiri, Adnan Masood, Babak Salimi, Ehsan Adeli",
"published": "2024-04-21",
"updated": "2024-04-21",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation",
"main_content": "Introduction Fairness in machine learning and computer vision is an increasingly important topic with growing applications to everyday life. The literature includes several methods for learning fair and unconfounded models based on domain adversarial invariant learning [4, 13, 18, 33, 37], statistical methods [16, 27], and information theory [21]. In the context of predictive modeling, algorithmic fairness aims to learn models that are unbiased towards protected subgroups by ensuring that the output label \u02c6 y is invariant to a sensitive attribute s. The existing plethora of fairness definitions typically captures this invariance in terms of a conditional independence (CI) (see Mehrabi et al. Stochastic Encoder + Decoder Semantic Encoder race invariant Latent Representation Skin Type (s) Sex (y) Dynamic Sampler Causal Image Generation \ud835\udc5e(\ud835\udc60, \ud835\udc63\u2032, \ud835\udc66) \ud835\udc5d(\ud835\udc60, \ud835\udc63, \ud835\udc66) \ud835\udc5e(\ud835\udc63\u2032, \ud835\udc66) \ud835\udc5d(\ud835\udc63, \ud835\udc66) race variant Prediction Head Prediction Head Conditionally Independent Representation Learning (JS) v Figure 1. We propose a new way to ensure fairness in downstream tasks by enforcing conditional independence constraints on the latent representation. This is achieved by minimizing the JensenShannon divergence (JS) between distributions obtained using a novel dynamic sampling technique. In the setting shown here, we apply our technique to the diffusion autoencoder\u2019s semantic representation to disentangle the sensitive attribute of skin type (a proxy variable for race) and perform causal image generation. for a survey). For instance, the equalized odds criterion [10] requires prediction output \u02c6 y to be independent of the sensitive attribute s conditioned on the true class label y. We note that the conditional independence formulation s \u22a5 \u22a5\u02c6 y | y is widely recognized as a more precise indicator of fairness compared with the marginal independence formulation s \u22a5 \u22a5\u02c6 y, due to the fact that some correlations between sensitive attributes and the target variable may be benign. CI plays a crucial role in capturing the true causal relationships 1 arXiv:2404.13798v1 [cs.CV] 21 Apr 2024 \fbetween the sensitive attributes and the target variable, ensuring that fairness is based on genuine causes of disparities rather than arbitrary associations [14, 24]. However, most existing work in fair machine learning typically focuses on enforcing independence on lowdimensional features in cases where the conditioning set is either empty or merely a low-dimensional categorical variable [2, 3, 24, 25, 37]. This restricts the learning of fair representations to limited settings involving only binary attributes in tabular datasets [35]. These methods cannot directly translate to high-dimensional spaces, such as the latent representations of large generative models. Recently, Ahuja et al. developed a differentiable framework for enforcing CI in high-dimensional and continuous feature spaces by using a GAN-based approach in the context of data generation. To enforce s \u22a5 \u22a5\u02c6 y | y, they minimize the Jensen-Shannon (JS) divergence between the joint distribution p(s, \u02c6 y, y) and an auxiliary distribution q(s, y\u2032, y) while the joint marginal q(s, y) is distributed identically to p(s, y) and q(y\u2032 | y) is independent of s. 
Although this formulation enforces CI, it can only sample the joint probability distributions in the label space y and does not necessarily guarantee the learning of fair representations, i.e., only the last network layer before predicting y removes the dependence while the rest of the network (including latent representations) remain biased or confounded. Our approach introduces the concept of CI into the latent space by using a dynamic sampler to form the joint distributions with respect to the learned representations. This yields a computationally rigorous approach to fair and causal representation learning that exhibits a level of versatility not found in existing methods: We are able to control not just which features are encoded, but also where within the representation they appear. With this, we gain improved classification performance and, in the case of generative models such as the diffusion autoencoder (DiffAE), the ability to maintain fine-grained control over generated content. As a result, we can not only protect downstream tasks from bias against specific attributes, but also generate featureinvariant images corresponding to the enforced schema, allowing for a richer understanding of a model\u2019s representations. 2. Related Work Invariant Representation Learning In the wake of increasingly biased large-scale models [1, 10, 18, 31], the learning of fair and unconfounded representations has taken on outsized importance. Methods based on domain adversarial learning are increasingly popular tools of choice in reducing bias and confounding effects [1, 15, 33, 34, 37]. Among these, Johndrow et al. and Tan et al. proposed methods for invariant feature learning and Zhao et al. presented models that minimize statistical mean dependence encoder : image : predicted label CI f (OR) : invariant portion variant portion : sensitive attribute LATENT REPRESENTATION Figure 2. High-level view of our architecture. We introduce two variants of a conditional independence enforcer that can be added to any off-the-shelf encoder. using a correlation-based adversarial loss function. Recently, Pirhadi et al. introduced a data cleaning method that enforces the conditional independence constraint for tabular data using optimal transport. Other types of methods incorporating statistical operations have also been explored in the literature, such as using multivariate regression analysis [19] with general linear models [36]. Other approaches enforce fairness through post-processing steps on unfair trained classification models [6, 10, 36]. But since the training and fairness enforcement steps are conducted separately, these algorithms often lead to suboptimal fairness and accuracy trade-offs [32]. Recently, using traditional statistical methods, Lu et al. proposed a normalization layer that corrects the feature distributions with respect to labeled sensitive attributes. Their approach, known as metadata normalization, entailed a simple layer that could plug into any end-to-end model to protect the models against bias. Vento et al. then extended this work by turning the closed-form normalization operation into a network optimizable step. This is an active research field and many other approaches are being studied. Conditional Independence for Algorithmic Fairness Consider a classifier or regression function f(x) with output \u02c6 y and a protected attribute s, e.g., sex or race. 
Algorithmic fairness aims to learn prediction functions or transformations that make the final outcome invariant or insensitive to protected attributes. Several widely used associational and causal notions of fairness, including the equalized odds criterion, can be attained by means of independence [25]. Rather than attempting to transform already trained unfair models to fair ones in a post-hoc manner, it is critical to enforce fairness constraints from the beginning of the model 2 \ftraining pipeline [11]. Apart from preventing the propagation of bias to downstream modeling tasks, this approach of fairness from a data management standpoint also leads to more robust and significant fairness measures by ensuring that data sources, transformations, and other training assumptions are sound [24]. Representation Learning with Diffusion Models Diffusion-based models learn to generate images by a denoising process: Random Gaussian noise is added to input images, and the model learns how to reverse this process to hallucinate new images from random noise. This family of models has proven to offer remarkable quality in image generation, promising to replace generative adversarial networks (GANs) [8] as the dominant architectural paradigm. Chief among them is the diffusion autoencoder [23], which encodes an image into a two-part latent subcode, capturing a stochastic representation via the aforementioned denoising approach and conditioning this process upon a semantic representation learned by a CNN. Interpolation of the resulting subcode results in smooth and meaningful changes in the decoded image. The semantic separation capability of DiffAEs has enabled various applications, such as attribute manipulation and various low-shot or zero-shot downstream applications [7, 9, 26]. In this work, we further enable DiffAEs to learn unbiased, causal, and conditionally independent latent subcode representations. 3. Method Let {x, y, s} be a training sample in the dataset, where x is an input image, y is the target prediction label, and s is a protected sensitive attribute. We aim to learn an encoder network g\u03b8(x) = v resulting in a latent representation v, from which a prediction network f\u03d5(v) = \u02c6 y produces the final predicted label \u02c6 y. With a slight abuse of notation, we use lower-case letters to also denote random variables. Ultimately, we aim to train a model that achieves high accuracy in predicting label y while ensuring equalized odds: \\lab el { e q n : obj ec tive} \\b e gi n {s plit} \\ arg min _ { \\ t h e ta ,\\phi }~-\\frac {1}{n}\\sum _{i=1}^n y_i\\log (\\hat y_i)&+(1-y_i)\\log (1-\\hat y_i) \\\\ \\text {s.t.}\\quad s\\ind \\hat y~|~y. \\end {split} (1) We discuss two possible ways of enforcing the equalized odds fairness constraint: First, we enforce conditional independence with respect to the label y, i.e., s \u22a5 \u22a5\u02c6 y | y. As discussed in Ahuja et al., this can be seen as a means to enforce equalized odds through CI. However, we claim that enforcing the independence constraint with respect to only the label is limited in its ability to enforce the equalized odds fairness constraint effectively. We therefore additionally propose to enforce the conditional independence with respect to the more information-rich latent representation v, or s \u22a5 \u22a5v | y. We can then use this learned fair latent representation in a downstream prediction or generation task. 3.1. 
Enforcing CI with respect to Label When the label y is highly correlated with the protected attribute s in the training data, a basic aim for a fair machine learning model is to ensure conditional independence of the predicted \u02c6 y from s, i.e., s \u22a5 \u22a5\u02c6 y | y. Assuming that the joint distribution p(s, \u02c6 y, y) is well-defined with respect to Lebesgue measure \u00b5, previous work [3] demonstrates that the above conditional independence can be defined with respect to the Jensen-Shannon divergence (JS) between p(s, \u02c6 y, y) and an auxiliary distribution q(s, y\u2032, y), where the joint marginal q(s, y) is distributed identically to p(s, y), and q(y\u2032|y) is independent of s. In other words, when q(s, y\u2032, y) = p(s, y)q(y\u2032|y), the conditional independence constraint s \u22a5 \u22a5\u02c6 y | y is equivalent to enforcing \\text{JS}(p(\\hat{y}, y), q(y', y)) = \\text{JS}(p(s, \\hat{y}, y), q(s, y', y)). (2) Accordingly, we can rewrite (1) in terms of JS as \\begin{split} \\arg\\min_{\\theta,\\phi}~ & -\\frac{1}{n}\\sum_{i=1}^n y_i\\log(\\hat{y}_i) + (1-y_i)\\log(1-\\hat{y}_i) \\\\ \\text{s.t.} \\quad & \\text{JS}(p(\\hat{y}, y), q(y', y)) - \\text{JS}(p(s, \\hat{y}, y), q(s, y', y)) \\leq \\delta \\end{split} (3) for some \u03b4 > 0 sufficiently small. With the help of a conditional sampler q(y\u2032 | y), the JS constraint of (3) can be satisfied by a general GAN architecture with two discriminators (Fig. 2) [3], i.e., by minimizing L_{\\text{CI}} = (L_{\\text{D1}} - L_{\\text{D2}})^2 (4) where L_{\\text{D1}} = \\mathbb{E}[\\log(1 - D_1(y', y))] + \\mathbb{E}_{y,s}[\\log D_1(\\hat{y}, y)] and L_{\\text{D2}} = \\mathbb{E}[\\log(1 - D_2(y', s, y))] + \\mathbb{E}_{y,s,y'}[\\log D_2(\\hat{y}, s, y)]. The key advantage of using (4) to derive conditional independence is that q(y\u2032| y) does not have to be a perfect sampler that exactly matches p(\u02c6 y | y). The only sufficient condition is that q(y\u2032| y) shares overlapping support with the original distribution p(\u02c6 y | y). This gives us many flexible ways to construct q, such as using a uniform distribution. Specifically, in a supervised learning setting, where y is known at training time, we can set q using a uniform distribution over all raw model outputs that correspond to y. The encoder and prediction networks can then be learned by optimizing the loss function L_{\\text{Enc}} = \\lambda L_{\\text{CI}} + L_{\\text{task}}, (5) where \u03bb is a hyperparameter and L_{\\text{task}} is the loss function for the prediction task. The discriminators can be trained adversarially to minimize the composite loss function: L_{\\text{D}} = L_{\\text{D1}} + L_{\\text{D2}}. (6) 3.2. Enforcing CI with respect to Latent Representation Enforcing conditional independence only with respect to \u02c6 y does not necessarily prevent the model from learning biased information from s in its representations. In this paper, we propose to enforce conditional independence with respect to the latent representation v in a way that most directly impacts the learnable parameters of the encoder. We achieve this by replacing the predicted label \u02c6 y with the latent vector v within our conditional independence loss described above, i.e., s \u22a5 \u22a5v | y.
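To make this construction concrete, below is a minimal PyTorch-style sketch of the two-discriminator objective in (4)-(6), together with a batch-level resampler of the kind the latent-space variant described next uses in place of a fixed sampler. All module names, layer sizes, and the BCE form of the discriminator terms are our own illustrative choices, not the authors' released code:

```python
# Sketch (ours) of the two-discriminator CI objective.
# D1 sees (target, y); D2 additionally sees the protected attribute s.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, *parts):
        return self.net(torch.cat(parts, dim=-1))

bce = nn.BCEWithLogitsLoss()

def discriminator_losses(d1, d2, t_real, t_fake, s, y):
    """t_real is the model output (y_hat or latent v); t_fake is the auxiliary
    sample (y' or v') drawn from q(.|y), independent of s.
    s and y are float tensors of shape (batch, 1)."""
    l_d1 = (bce(d1(t_fake, y), torch.zeros_like(y))
            + bce(d1(t_real, y), torch.ones_like(y)))
    l_d2 = (bce(d2(t_fake, s, y), torch.zeros_like(y))
            + bce(d2(t_real, s, y), torch.ones_like(y)))
    return l_d1, l_d2          # discriminators minimize l_d1 + l_d2

def ci_loss(l_d1, l_d2):
    # Drive the two discriminator losses toward each other so the two
    # JS divergences in the constraint match.
    return (l_d1 - l_d2) ** 2

def dynamic_sample(v, y):
    """Imperfect conditional sampler q(v'|y): for each example, pick (with
    replacement) another latent from the same mini-batch with the same y."""
    y = y.view(-1)
    choice = torch.empty_like(y, dtype=torch.long)
    for label in y.unique():
        idx = (y == label).nonzero(as_tuple=True)[0]
        choice[idx] = idx[torch.randint(len(idx), (len(idx),))]
    return v[choice]
```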
Similar to (2), our Jensen-Shannon divergence constraint becomes \\text{JS}(p(v, y), q(v', y)) = \\text{JS}(p(s, v, y), q(s, v', y)). (7) As before, this requires a conditional sampler q(v\u2032|y). However, one problem that arises with this approach is that the conditional sampler is no longer well-defined at training time. Since the distribution and support of p(v|y) is learned by the model on the fly, we cannot construct q(v\u2032|y) a priori. To resolve this challenge, we implement the imperfect conditional sampler q via a novel dynamic sampling procedure. Specifically, to sample v\u2032 \u223cq(v\u2032|y), we sample with replacement from the latent vectors associated with the given y within the same training batch. Similar to the bootstrapping procedure, our dynamic sampling can approximate the empirical distribution function p(v|y), allowing the discriminators to keep up with and robustly train the encoder as the latent space is learned. Note that, other than the difference in the sampler, the objective function of (4) and the GAN architecture in Fig. 2 can be translated here by replacing {\u02c6 y, y\u2032} with {v, v\u2032}. 3.3. Disentangling DiffAE Latent Representation The above construction of q provides us with a powerful technique that can be applied to any encoder architecture. We demonstrate this by using the state-of-the-art diffusion autoencoder model [23], which encodes images using a denoising diffusion process [26] conditioned upon a semantically meaningful representation inferred by a CNN encoder. An image, therefore, is encoded by two representations: a semantic subcode learned by the CNN, along with a stochastic subcode that, given the semantic subcode, denoises into the original image. The model learns to compress the most common high-level features into the semantic subcode and delegate the remaining, highly variable features to the stochastic representation. When applied to human face images, this yields a clean separation between anatomical features such as sex, facial structure, and skin type in the semantic subcode, and more transient qualities like pose, hairstyle, and expression in the stochastic subcode. By enforcing conditional independence with respect to the semantic subcode of the diffusion autoencoder, we produce a latent representation that is invariant to a protected attribute of choice. The model adapts by encoding any facial features associated with the protected attribute in the stochastic subcode instead. When such facial features include the high-level attributes described earlier, this can unduly affect the natural dichotomy between the semantic and stochastic subcodes, restricting the expressive power of the semantic subcode and reducing the efficiency of the denoising process used for image generation. Empirically, this results in blurred and unrealistic generated images. To overcome this, we enforce conditional independence with respect to only a portion of the semantic subcode. To further encourage the model to use the remaining portion to encode the facial features related to the sensitive attribute, we add a prediction head on the remaining portion to estimate the sensitive attribute s. We optimize this new prediction head alongside the encoder; instead of (5), we have L_{\\text{Enc}} = \\lambda_1 L_{\\text{CI}} + \\lambda_2\\,\\text{BCE}(\\hat{s}, s) + L_{\\text{task}}.
(8) In this way, the semantic subcode is apportioned into two parts: one variant to the sensitive attribute, and the other invariant. Although we only demonstrate this capability on tasks involving a single sensitive attribute, this approach can be easily extended to disentangle multiple attributes within the semantic subcode. With this formulation for causal disentanglement of the latent space, we show that the specific subcode in which a given facial attribute is represented can be methodically adjusted (e.g., to change the race of the generated image), all while preserving the model\u2019s ability to accurately reconstruct other characteristics of the input image. 4. Experiments and Results We apply our architecture to two different settings, each with a unique dataset, model, and confounding variable. In each case, we analyze the effectiveness of applying CI in the latent space as opposed to the label space and also compare with several baselines. Experiments were performed using a P100 GPU, with the exception of the diffusion autoencoder experiments which required four A100 GPUs. Metrics We report prediction accuracies for each value of the protected attribute when s is discrete, along with the balanced accuracy (bAcc) to account for class imbalances. To evaluate invariance to the sensitive attribute, we use the squared distance correlation (dcor2) [28] to quantitatively 4 \fmeasure the correlation between the protected variable and the learned features. When the protected variable is discrete, we also use the equality of opportunity (EO) independence metric, which measures the average gap in true positive rates for different possible values of the protected variable. 4.1. Synthetic Experiments As an initial proof of concept, we train a convolutional neural network on synthetically generated image data. Each image in this dataset is of size 32\u00d732 and composed of four Gaussian kernels, as shown in Fig. 3. The diagonal kernels are of equal intensity \u03c3A, and the off-diagonal kernels are of equal intensity \u03c3B. For half of the images, we sample \u03c3A, \u03c3B \u223cU(1, 4) and assign the label Y = 0. For the other half, we sample \u03c3A, \u03c3B \u223cU(3, 7) and assign Y = 1. We then denote \u03c3B as the protected attribute and analyze the model\u2019s ability to predict an image\u2019s label based on the diagonal kernels controlled by \u03c3A while ignoring the offdiagonal kernels controlled by \u03c3B. We do this by enforcing the conditional independence condition separately with respect to the label space (\u03c3B \u22a5 \u22a5\u02c6 Y | Y ) and the latent space (\u03c3B \u22a5 \u22a5V | Y ). Whereas an unconstrained model may achieve maximum theoretical accuracy of 1\u22121 2 \u0000 1 3 \u00012 = 0.94, correctly predicting all cases except half of those where \u03c3A, \u03c3B \u2208(3, 4), a fair classifier using only the information represented by \u03c3A can attain a theoretical accuracy of 1 \u22121 2 \u0000 1 3 \u0001 = 0.83. Our encoder is a simple CNN comprised of a convolutional layer, ReLU activation, max pooling, convolution, ReLU, and two linear layers, applied in that order. The first linear layer has an output of length 10, which we define as the latent space v, while the second linear layer converts this latent space into the single output logit corresponding to y. In Table 1, we report metrics after training each model using a batch size of 512 and a learning rate of 1e\u22124. 
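For reference, the synthetic setup and the encoder just described can be sketched as follows; kernel placement, widths, channel counts, and the reading of the sigma values as peak intensities are our own illustrative assumptions, not details from the paper:

```python
# Sketch (ours) of the synthetic data and the small CNN described above.
import numpy as np
import torch
import torch.nn as nn

def gaussian_blob(cx, cy, peak, size=32, width=4.0):
    yy, xx = np.mgrid[0:size, 0:size]
    return peak * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * width ** 2))

def make_image(sigma_a, sigma_b, size=32):
    # Diagonal kernels share intensity sigma_a; off-diagonals share sigma_b.
    img = (gaussian_blob(8, 8, sigma_a, size) + gaussian_blob(24, 24, sigma_a, size)
           + gaussian_blob(24, 8, sigma_b, size) + gaussian_blob(8, 24, sigma_b, size))
    return img.astype(np.float32)

def make_dataset(n, rng):
    xs, ys, sb = [], [], []
    for i in range(n):
        label = i % 2
        low, high = (1, 4) if label == 0 else (3, 7)
        sigma_a, sigma_b = rng.uniform(low, high, size=2)
        xs.append(make_image(sigma_a, sigma_b))
        ys.append(label)
        sb.append(sigma_b)  # protected attribute
    return np.stack(xs), np.array(ys), np.array(sb)

class SmallCNN(nn.Module):
    """Conv -> ReLU -> MaxPool -> Conv -> ReLU -> Linear (latent v, length 10)
    -> Linear (single logit for y_hat). Channel counts are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.to_latent = nn.Linear(16 * 16 * 16, 10)
        self.head = nn.Linear(10, 1)

    def forward(self, x):
        v = self.to_latent(self.features(x).flatten(1))
        return self.head(v), v
```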
We select the \u03bb hyperparameter by 5-fold cross-validation and report metrics based on a separate test set. Fig. 5 demonstrates the tradeoff of fairness versus accuracy and dcor2 for both model variants. The latent space variant introduces some instability for larger \u03bb, but achieves increased accuracy alongside decreased dcor2. While some instability is present for large \u03bb, this largely does not appear until after dcor2 has converged. In Fig. 6, we show each model\u2019s unnormalized output for each integer combination of \u03c3A and \u03c3B. Enforcing CI in the latent space produces outputs that remain nearly unaffected by changes in the protected attribute \u03c3B, as evidenced by near-constant values within each row.
Figure 3. Synthetic data format and sample. The diagonal kernels are controlled by \u03c3A, while the off-diagonals are controlled by \u03c3B.
Figure 4. Synthetic data causal diagram. We apply the L_CI component to either the latent vector space (V) or the label space (Y).
Table 1. Synthetic data experiment results. Balanced accuracy (bAcc) closer to 0.83 is better, as is lower dcor2.
Model                     bAcc   dcor2
Vanilla CNN               0.94   0.432
Regularized v-space CNN   0.80   0.382
y-space CI-CNN            0.87   0.298
v-space CI-CNN            0.84   0.055
Figure 5. Fairness and accuracy metrics versus conditional independence strength \u03bb. Orange lines correspond to the y-space CI-CNN, and blue lines to the v-space CI-CNN.
Model B1: Vanilla CNN As a baseline, we train the standalone CNN without any conditional independence enforcing mechanism, using only a binary cross entropy loss function. This model relies heavily on the protected attribute \u03c3B, as shown by a high bAcc and dcor2.
Figure 6 (panels: Vanilla, Regularized v-space, y-space CI-CNN, v-space CI-CNN). Unnormalized model predictions on synthetic data for given \u03c3A, \u03c3B. Negative values imply a prediction of \u02c6 y = 0, while positive values correspond to \u02c6 y = 1.
Model B2: Regularized v-space CNN To evaluate the efficacy of our conditional independence enforcement, we establish a baseline by embedding the equalized odds condition directly into the loss function. Defining R_0 and R_1 to be the absolute difference between the true positive rate and false positive rate for Y = 0 and Y = 1 respectively, the loss function becomes L = \\lambda(R_0 + R_1) + \\text{BCE}(\\hat{y}, y). (9) We tune the \u03bb parameter over the validation set to target the theoretical maximum accuracy under conditional independence. This regularizer appears to have an overbearing effect on the model, and despite the reduced dcor2, it struggles to learn a meaningful representation.
Model 1: y-space CI-CNN. We use the output of the final linear layer as input to our conditional independence enforcing loss component and apply binary cross entropy loss:

$L_{\text{Enc}} = \lambda L_{\text{CI}} + \text{BCE}(\hat{y}, y),$ (10)

where, as in (5), $\lambda$ controls the strength of conditional independence enforcement and $L_{\text{CI}}$ is the loss from the dual discriminators. We find that the model learns a representation somewhat uncorrelated with $\sigma_B$, but only in an unstable manner and without much improvement upon the regularized baseline.

Model 2: v-space CI-CNN. In our second iteration, we insert the conditional independence component closer to the latent space of the model, before the final linear layer, and employ the dynamic sampling procedure to simulate the conditional latent space distribution. The resulting model exhibits a much smaller confounding effect, achieving an accuracy closest to the theoretical maximum under CI and the lowest correlation between the protected and target variables. This demonstrates the effectiveness of our dynamic resampling procedure, which makes practical the enforcement of conditional independence with respect to the latent space and is one of the key contributions of this paper.

4.2. Face Image Experiments

In selecting an experimental setting in which to apply our architecture to the diffusion autoencoder, we searched for datasets (1) labeled with a relevant secondary attribute that we could treat as a confounding variable, and (2) compatible with other public datasets that could be used to pretrain the DiffAE before finetuning. One of the few datasets meeting these criteria was a set of 1,270 face images provided by the Gender Shades project [5], representing subjects from 3 African countries and 3 European countries. Each image is labeled with both skin type, using the Fitzpatrick classification system, and sex, inheriting a binary male/female grouping as a simplified proxy for gender. We aligned and cropped the images to the format expected by the diffusion autoencoder model, and then employed our architecture to train the diffusion autoencoder to predict sex while being conditionally independent of skin type. We initialized a pretrained diffusion model based on the 128 × 128 Flickr-Faces-HQ Dataset, which consists of 70,000 highly varied face images. We treat the model's semantic subcode as the latent space and add a single linear layer prediction head to convert this 512-length vector space to a single logit representing the sex label space.

Models B1-3: We implement three baseline models. BR-Net [1] uses adversarial training to learn image features that have statistical "mean independence" with the protected attribute. The adversarial objective of BR-Net is based on Pearson's correlation between the true and the predicted value of the protected attribute. Second, Kim et al. uses mutual information minimization to ensure that the learned image features cannot predict the protected attribute. Finally, a multi-task network [17] is implemented to use the same encoder to predict both the target value and the protected attribute simultaneously.
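The objectives in Eq. (10) above and Eq. (11) below share the same shape: a conditional independence penalty weighted by $\lambda$ plus the task (and, for the DiffAE, reconstruction) terms. Below is a minimal sketch of that wiring; it treats the dual-discriminator loss $L_{\text{CI}}$ as an already-defined callable, since its construction appears earlier in the paper and not in this excerpt.

```python
import torch.nn.functional as F

def encoder_objective(v, logits, y, s, ci_loss_fn, lam, diffae_loss=None, rho=1.0):
    """Sketch of Eqs. (10) and (11); ci_loss_fn stands in for the paper's
    dual-discriminator CI loss (applied to v, or to the logits for the y-space variant).
    """
    task = F.binary_cross_entropy_with_logits(logits, y.float())
    l_ci = ci_loss_fn(v, y, s)
    if diffae_loss is None:
        return lam * l_ci + task                          # Eq. (10): CI-CNN
    return lam * l_ci + (diffae_loss + rho * task)        # Eq. (11): CI-DiffAE
```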
Table 2. Classification of sex from Gender Shades dataset facial images, with mean and standard deviation across five runs. Columns I-VI report accuracy (%) for each Fitzpatrick skin type.
Model | bAcc (%) | I | II | III | IV | V | VI | EO (%) | dcor2
Kim et al. [13] | 88.8 ± 1.3 | 87.8 | 93.0 | 93.6 | 91.7 | 87.4 | 86.2 | 8.4 ± 3.6 | 0.022 ± 0.012
Multi-task [17] | 93.9 ± 0.4 | 92.7 | 95.6 | 93.6 | 100 | 95.8 | 87.3 | 7.3 ± 1.6 | 0.190 ± 0.012
BR-Net [1] | 94.8 ± 0.4 | 92.7 | 97.4 | 93.6 | 100 | 95.8 | 88.5 | 5.3 ± 1.2 | 0.170 ± 0.007
Vanilla Diffusion Autoencoder | 93.0 ± 0.6 | 85.4 | 95.6 | 100 | 91.7 | 87.3 | 90.1 | 11.5 ± 1.1 | 0.326 ± 0.000
y-space CI-DiffAE | 94.8 ± 2.1 | 80.5 | 93.7 | 93.6 | 87.5 | 81.1 | 86.2 | 9.9 ± 6.0 | 0.120 ± 0.034
v-space CI-DiffAE | 96.6 ± 1.8 | 97.5 | 100 | 97.9 | 91.7 | 93.7 | 87.3 | 5.0 ± 2.5 | 0.076 ± 0.025

Figure 7. Causal graph corresponding to the face dataset. By way of the latent space V, we train a model to predict sex (Y) from face image (X) while being invariant to skin color (S). Here, countless other factors (I) such as genetics and lighting also influence each image, and hidden confounders (dashed lines) cause correlation among inputs.

Model B4: Vanilla Diffusion Autoencoder. For a closer comparison to our architecture, we train the diffusion autoencoder using its default loss function, denoted in [23] as $L_{\text{simple}}$. In a separate optimization step, we train the prediction head using binary cross entropy loss (BCE).

Model 1: y-space CI-DiffAE. We unify the training of the diffusion autoencoder and the prediction head under a single loss function and enforce conditional independence with respect to the output of the prediction head. In the context of (5), the encoder objective function becomes

$L_{\text{Enc}} = \lambda L_{\text{CI}} + \left( L_{\text{DiffAE}} + \rho\,\text{BCE}(y, \hat{y}) \right),$ (11)

for hyperparameter $\rho$.

Model 2: v-space CI-DiffAE. In an analogous manner to the v-space CI-CNN, we enforce conditional independence directly on the semantic subcode, keeping other portions of the model the same.

Results in Table 2 indicate that the v-space constraint yields an all-around improvement in accuracy and fairness compared to the vanilla model, as opposed to the y-space constraint which achieves lower dcor2 with considerable impact on accuracy. This improvement is statistically significant as measured by a McNemar test ($\chi^2$ = 8.257, p-value = 0.0040). Overall, these results demonstrate that the v-space CI constraint produces results well on-par with existing work. Moreover, this technique has the added capability of causal image generation, described below.

4.2.1 Visual Results

Due to the strong association between skin type and race, skin type impacts a large number of facial features. Our architecture removes much of this association, which creates a race-invariant representation, but as a result, also constricts the model's ability to encode anatomical facial features in the semantic subcode. Therefore, nearly all information becomes encoded by the stochastic subcode, and generated images are less realistic. As discussed previously, we remedy this effect by enforcing conditional independence on only a portion of the semantic subcode and simultaneously training a sensitive attribute prediction head on the remaining portion. We find that this partial CI-DiffAE architecture allows the semantic subcode as a whole to continue encoding the facial features necessary to produce a clean image.
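A minimal PyTorch sketch of the partial-CI head arrangement described above: the 512-dimensional semantic subcode is split into a sensitive-variant part and an invariant part, with a skin-type head on the variant slice. The split size and the exact wiring of the sex head are illustrative assumptions; only the 512-length subcode, the single-logit sex head, and the existence of a sensitive-attribute head come from the paper.

```python
import torch.nn as nn

class PartialCIHeads(nn.Module):
    """Prediction heads over a split semantic subcode (sketch; split size is assumed)."""

    def __init__(self, dim=512, variant_dim=64, n_skin_types=6):
        super().__init__()
        self.variant_dim = variant_dim
        # Sensitive-attribute (skin type) head on the variant slice of the subcode.
        self.skin_head = nn.Linear(variant_dim, n_skin_types)
        # Target (sex) head; the paper describes a single linear layer producing one logit.
        self.sex_head = nn.Linear(dim, 1)

    def forward(self, z_sem):
        z_variant = z_sem[:, : self.variant_dim]     # conditional independence NOT enforced here
        z_invariant = z_sem[:, self.variant_dim :]   # CI (invariance to skin type) enforced here
        return {
            "sex_logit": self.sex_head(z_sem),
            "skin_logits": self.skin_head(z_variant),
            "z_invariant": z_invariant,
        }
```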
To interpret the effects on the semantic subcode, we generate images after adjusting the race-variant portion of the semantic subcode to values corresponding to each skin type (Fig. 8). We observe that the v-space model is effectively (and causally) able to disentangle race-related features within the semantic subcode, resulting in a smooth and meaningful transition with changes limited to those relating to race. Furthermore, by sampling the race-invariant portion of the semantic subcode, we retain the diffusion autoencoder's ability to generate novel images, but with added control over skin type (Fig. 9). These images demonstrate how our technique creates the race-invariant portion of the latent representation by removing the direct causal effect of skin shade, analogous to deleting the solid arrow between S and X in Fig. 7. At the same time, due to the fine-grained nature of the equalized odds criterion we enforce, the representation preserves the indirect effect of skin shade that is mediated by sex, shown in Fig. 7 as the dashed arrow between Y and S. Therefore, the model continues to capture differences in gender expression across race, most notably in the headwraps often worn by women in Sub-Saharan African cultures. By disentangling race from the latent representation, we gain a richer insight into the causal processes that underlie facial images, crucial for effective image generation.

Figure 8. Selected image reconstructions (y-space vs. v-space partial CI-DiffAE). For each variant of our CI-DiffAE, we reconstruct two images from the dataset while adjusting the race-variant portion of the semantic subcode: the leftmost image in each group corresponds to skin type 1, and the rightmost to skin type 6. The latent (v-space) CI constraint can effectively disentangle skin shade from other facial features.

Figure 9. Race-invariant image generation using our v-space partial CI-DiffAE architecture. Each column contains hallucinated images of the same skin type.

5. Conclusion

We introduced a framework to ensure fair and unconfounded representation learning during training and demonstrated both its versatility when applied to complex models and its effectiveness when compared to alternative methods. We iterated upon the theoretical idea of expressing the conditional independence constraint as an equality of two Jensen-Shannon divergences and extended this to high-dimensional latent space via a dynamic sampling technique that can be easily implemented for any encoder. Our work exposes a new approach to generally enforce a conditional independence constraint on a model, which can then be used in downstream tasks such as causal image generation and fair predictive models. To our knowledge, this is the only model-agnostic training approach to be shown effective on enforcing specific features to be encoded in given dimensions of the latent space. We are optimistic that further experimentation will reveal applications to other tasks concerning fairness and disentanglement. Moreover, as a direction for future work, the use of conditionally invariant embeddings may prove useful in extending traditional causal methods like mediational analysis to complex, high-dimensional settings.

Acknowledgement

This research was partially supported by UST, Stanford Institute for Human-Centered AI (HAI) GCP cloud credits, and National Institutes of Health (NIH) grants U54HG012510 and AG084471. J.H. was also supported by a research fund from Panasonic, and E.A.
by the Stanford School of Medicine Jaswa Innovator Award."
},
{
"url": "http://arxiv.org/abs/2404.14544v1",
"title": "WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction",
"abstract": "Medical errors in clinical text pose significant risks to patient safety. The\nMEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors\nacross three subtasks: identifying the presence of an error, extracting the\nerroneous sentence, and generating a corrected sentence. In this paper, we\npresent our approach that achieved top performance in all three subtasks. For\nthe MS dataset, which contains subtle errors, we developed a retrieval-based\nsystem leveraging external medical question-answering datasets. For the UW\ndataset, reflecting more realistic clinical notes, we created a pipeline of\nmodules to detect, localize, and correct errors. Both approaches utilized the\nDSPy framework for optimizing prompts and few-shot examples in large language\nmodel (LLM) based programs. Our results demonstrate the effectiveness of LLM\nbased programs for medical error correction. However, our approach has\nlimitations in addressing the full diversity of potential errors in medical\ndocumentation. We discuss the implications of our work and highlight future\nresearch directions to advance the robustness and applicability of medical\nerror detection and correction systems.",
"authors": "Augustin Toma, Ronald Xie, Steven Palayew, Patrick R. Lawler, Bo Wang",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction",
"main_content": "Introduction Medical errors pose a significant threat to patient safety and can have severe consequences, including increased morbidity, mortality, and healthcare costs. Detecting and correcting these errors in clinical text is crucial for ensuring accurate medical documentation and facilitating effective communication among healthcare professionals. One of the fastest-growing use cases for artificial intelligence (AI) in healthcare is clinical note generation, often from transcriptions of physician-patient dialogues. However, assessing the quality and accuracy of these notes is challenging, and automated detection and correction of errors could have a significant impact on patient care. The reliability of large language models (LLMs) in critical applications, such as healthcare, is a major concern due to the potential for hallucinations (generating false or nonsensical information) and inconsistencies. Robust solutions to the question of error detection and correction are essential for addressing these concerns and enabling the safe and effective use of LLMs in medical contexts. The MEDIQA-CORR 2024 (Ben Abacha et al., 2024a) shared task focuses on identifying and correcting medical errors in clinical notes. Each text is either correct or contains a single error. The task involves three subtasks: (1) detecting the presence of an error, (2) extracting the erroneous sentence, and (3) generating a corrected sentence for flagged texts. In this paper, we present our approach, which achieved the top performance across all three subtasks in the MEDIQA-CORR 2024 competition. We develop a series of LLM-based programs using DSPy, a framework for optimizing prompts and few-shot examples. We provide a detailed description of our methodology and results, followed by a discussion of the implications of our work and future directions in the field of medical error detection and correction. 2 Related Work The use of large language models (LLMs) in medicine has attracted considerable attention in recent years. The release of LLMs such as GPT-4 has led to intensive research in the medical community (Nori et al., 2023), particularly in clinical note generation. The MEDIQA-Chat 2023 (Ben Abacha et al., 2023) competition showcased the performance of automated note generation solutions (Giorgi et al., 2023), and further work has demonstrated that LLMs can sometimes outperform humans on clinical text summarization tasks 1 arXiv:2404.14544v1 [cs.CL] 22 Apr 2024 \f(Van Veen et al., 2024). However, there has been limited research focusing on granular audits of these clinical notes with respect to accuracy and error correction. The MEDIQA-CORR 2024 shared task addresses this gap by providing a platform for researchers to develop and evaluate novel approaches to error detection and correction in clinical text, ultimately contributing to the development of more reliable AI systems in healthcare. 3 Task Description The MEDIQA-CORR 2024 shared task provides two distinct datasets: MS and UW (Ben Abacha et al., 2024b). The MS dataset consists of a Training Set containing 2,189 clinical texts and a Validation Set (#1) containing 574 clinical texts. The UW dataset, on the other hand, consists solely of a Validation Set (#2) containing 160 clinical texts. The test set for the shared task includes clinical texts from both the MS and UW collections. The evaluation metrics for the MEDIQA-CORR 2024 shared task vary across the three subtasks: \u2022 Subtask 1 (Error Flag Prediction): Evaluated using Accuracy. 
\u2022 Subtask 2 (Error Sentence Detection): Evaluated using Accuracy. \u2022 Subtask 3 (Sentence Correction): Evaluated using ROUGE (Lin, 2004), BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020), Aggregate-Score (mean of ROUGE-1F, BERTScore, BLEURT-20), and Composite Scores. The Composite Score for each text in Subtask 3 is calculated as follows: 1. Assign 1 point if both the system correction and the reference correction are \"NA\" 2. Assign 0 points if only one of the system correction or the reference correction is \"NA\" 3. Calculate the score based on metrics (ROUGE, BERTScore, BLEURT and the AggregateScore) within the range of [0, 1] if both the system correction and reference correction are non-\"NA\" sentences. 4 Approach 4.1 Overview Upon reviewing the MS and UW datasets, it became apparent that these two datasets presented distinct challenges. The errors in the MS dataset were often extremely subtle, to the point that many errors did not actually seem like errors, and in fact, clinicians on our team often couldn\u2019t identify the presence of an error within the text. However, when reviewing corrected text from the training set, it became clear that corrections were often \u2019optimal\u2019 completions. For example, consider the following error and its correction: Error sentence: After reviewing imaging, the causal pathogen was determined to be Haemophilus influenzae. (Ben Abacha et al., 2024b) Corrected sentence: After reviewing imaging, the causal pathogen was determined to be Streptococcus pneumoniae. (Ben Abacha et al., 2024b) These types of errors are subtle and seem akin to multiple-choice questions, where often multiple answers could independently be seen as correct completions, but only in the context of one another would you deem one answer wrong. On the other hand, the UW dataset appeared to reflect realistic clinical notes, and the errors were more apparent. For example, consider the following error and its correction: Error sentence: Hypokalemia based on laboratory findings patient has hypervalinemia. (Ben Abacha et al., 2024b) Corrected sentence: Hypokalemia based on laboratory findings patient has hypokalemia. (Ben Abacha et al., 2024b) In this case, the error involves a nonsensical term (hypervalinemia, a rare metabolic condition) when the context makes it clear that the patient has hypokalemia (low potassium levels). These are errors that a clinician can identify from the text alone. The distinct characteristics of the MS and UW datasets prompted us to develop a two-pronged approach to the MEDIQA-CORR 2024 shared task. For the MS dataset, we employed a retrieval-based system to identify similar questions from external medical question-answering datasets and leverage the knowledge contained in these datasets to detect 2 \fand correct errors. For the UW dataset, we created a series of modules to detect, localize, and correct errors in clinical text snippets. Both approaches were built on DSPy (Khattab et al., 2023), a novel framework for systematically optimizing prompts and few-shot examples in LLM based programs. 4.2 Approach for MS Dataset Our approach to the MS dataset involves a multistep process that leverages retrieval-based methods and the DSPy framework, as illustrated in Figures 1, 2, and 3. In all of our experiments, we utilized GPT-4-0125-preview as the underlying large language model, using default generation parameters (temperature of 1.0, top_p of 1) with the exception of a max tokens value of 4096. 
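Since the module definitions themselves are not reproduced in the paper, the following is only a rough, hedged sketch of what an LLM-based program for the three chained steps (detect an error, localize the erroneous sentence, generate a correction) could look like in DSPy. The signature field names and control flow are illustrative assumptions, and the DSPy API surface may differ across versions.

```python
import dspy

class DetectError(dspy.Signature):
    """Decide whether the clinical text contains a medical error."""
    clinical_text = dspy.InputField()
    error_flag = dspy.OutputField(desc="1 if an error is present, else 0")

class LocalizeError(dspy.Signature):
    """Identify the erroneous sentence."""
    clinical_text = dspy.InputField()
    error_sentence = dspy.OutputField(desc="text of the erroneous sentence")

class CorrectError(dspy.Signature):
    """Rewrite the erroneous sentence."""
    error_sentence = dspy.InputField()
    corrected_sentence = dspy.OutputField()

class ErrorPipeline(dspy.Module):
    def __init__(self):
        super().__init__()
        self.detect = dspy.ChainOfThought(DetectError)
        self.localize = dspy.ChainOfThought(LocalizeError)
        self.correct = dspy.ChainOfThought(CorrectError)

    def forward(self, clinical_text):
        flag = self.detect(clinical_text=clinical_text).error_flag
        if flag.strip() == "0":
            return dspy.Prediction(error_flag="0", error_sentence="NA", corrected_sentence="NA")
        sentence = self.localize(clinical_text=clinical_text).error_sentence
        fix = self.correct(error_sentence=sentence).corrected_sentence
        return dspy.Prediction(error_flag="1", error_sentence=sentence, corrected_sentence=fix)

# Prompts and few-shot examples for such modules would then be optimized with a
# teleprompter (e.g., BootstrapFewShotWithRandomSearch or MIPRO from dspy.teleprompt),
# compiled against a task-specific metric (accuracy or ROUGE-L) on a validation split.
```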
4.2.1 Retrieval of Similar Questions First, we employ a retrieval-based approach to identify similar questions from the MedQA dataset (Jin et al., 2020). MedQA is a medical questionanswering dataset that contains multiple-choice questions, each with a set of answer options and a correct answer. By leveraging the knowledge contained in this external dataset, we aim to detect and correct errors in the MS dataset. We use TFIDF (Sparck Jones, 1972) to calculate the similarity between the given question in the MS dataset and the questions in MedQA, retrieving the most similar questions along with their answer options and correct answers for further analysis. 4.2.2 Identifying Answer Choices within Query Text To identify the implicit answer choice within the query text, we employ a two-step process using DSPy programs. First, we send both the query text and the identified similar multiple-choice question to a DSPy module that utilizes chain of thought (Wei et al., 2023) and the BootstrapFewShotWithRandomSearch teleprompter (Khattab et al., 2023). This teleprompter generates 20 few-shot examples by sampling from the training set and testing the module\u2019s performance on the validation set. The module aims to extract the answer choice that appears to be present in the query text. The output from this module is then passed to a second DSPy module, which also leverages the BootstrapFewShotWithRandomSearch teleprompter. This module creates multiple fewshot examples that compare the extracted answer against the true answer from the multiple-choice Figure 1: Predicting the presence of an error through a comparison to the retrieved question Figure 2: Identifying the error sentence question, as shown in Figure 1. We simultaneously bootstrap these two steps, optimizing the entire pipeline based on the accuracy of the overall error flag prediction. The result of this bootstrapping process is a compiled program with optimized multi-step chain of thought prompts based on the module\u2019s performance on error detection accuracy. This approach allows us to effectively identify the presence of errors in the query text by leveraging the knowledge from external medical question-answering datasets. 4.2.3 Localizing Errors within Query Text After detecting an error in the query text, we use a DSPy module to identify the specific line containing the error, as illustrated in Figure 2. This module takes the extracted answer choice and the preprocessed query text as inputs and then an LLM call is done to determine which line most closely matches the erroneous answer choice. Our experiments showed that GPT-4\u2019s performance was high enough that we did not need to compile the program or bootstrap few-shot prompts via a DSPy teleprompter. The module outputs the line number where the error is located, which is crucial for the subsequent error correction step, as it allows for targeted correction of the relevant text. 4.2.4 Error Correction with DSPy After identifying the error location within the query text, we use a final DSPy module to generate a corrected version of the text, as illustrated in Figure 3. This module takes three inputs: the error line, the extracted answer choice, and the correct answer 3 \fFigure 3: Generating the corrected sentence derived from the most similar retrieved multiplechoice question. The error correction module utilizes a chain of thought prompt along with 20 few-shot examples generated by the BootstrapFewShotWithRandomSearch teleprompter. 
This teleprompter samples examples from the training set and generates intermediate labels, such as rationales for the chain of thought, to provide additional context and guidance for the language model during the error correction process. The teleprompter optimizes the selection of few-shot prompts based on their performance on the validation set, using the ROUGE-L score as the metric. The selected few-shot examples, accompanied by the generated intermediate labels, demonstrate how to modify the error line based on the extracted answer choice and the correct answer, serving as a reference for the model to learn from and adapt to the specific error correction task. The module outputs the corrected version of the query text, with the error line revised based on the correct answer derived from the most similar multiple-choice question. This corrected text represents the final output of our retrieval-based approach for the MS dataset, addressing the subtle errors present in the clinical text. 4.3 Approach for UW Dataset Our approach for the UW dataset involves optimizing a series of DSPy modules to accomplish all three subtasks sequentially, as illustrated in Figure 4. In all of our experiments, we utilized GPT4-0125-preview as the underlying large language model, using default generation parameters (temperature of 1.0, top_p of 1) with the exception of a max tokens value of 4096. 4.3.1 Error Detection with DSPy For the UW dataset, we first employ a DSPy program to identify whether an error exists in the given clinical text snippet. This program is optimized using the Multi-prompt Instruction Proposal Optimizer (MIPRO) teleprompter, which generates Figure 4: Overview of the UW dataset pipeline, consisting of three main stages: error detection, error localization, and error correction. Each stage is implemented using a DSPy module optimized with the MIPRO teleprompter (Khattab et al., 2023) The pipeline also includes a quality control step based on the ROUGE-L score between the original erroneous text and the corrected version. and optimizes both the base prompts and few-shot examples. MIPRO optimizes the prompts and fewshot examples to maximize performance on the validation set, which we created by dividing the UW training collection (160 examples) into 80 training examples, 40 validation examples, and 40 test examples. The optimizer uses error flag accuracy as the metric to optimize and generates 20 examples. We also incorporate chain of thought reasoning into the DSPy module. 4.3.2 Error Localization If an error is detected in the clinical text snippet, we use another DSPy module to identify the specific line containing the error. This module is also optimized using MIPRO, which generates 20 bootstrap examples that include chain of thought rationales. Using a separate DSPy module for error localization allows us to precisely identify the source of the error and facilitate targeted corrections. The exact match of the error line is used as the metric for optimization, and this module is trained only on a subset of the training samples that contain errors. 4.3.3 Error Correction After identifying the error line, we use a third DSPy module to generate a corrected version of the erroneous text. This module is also optimized using MIPRO, following the same process as the previous modules. The error correction module takes the erroneous text as input and generates a corrected version based on the optimized prompts and weights. 
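Both the optimization metric for this correction module and the quality-control step described below rely on ROUGE-L. A small sketch of such a metric and fallback follows, assuming the rouge-score package; the 0.7 threshold comes from the paper, while the helper names are illustrative.

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l_f1(reference: str, candidate: str) -> float:
    """ROUGE-L F-measure between a reference sentence and a candidate sentence."""
    return _scorer.score(reference, candidate)["rougeL"].fmeasure

def apply_quality_control(original_text: str, corrected_text: str, threshold: float = 0.7) -> str:
    """Reject corrections that drift too far from the original text (cf. Section 4.3.4)."""
    if rouge_l_f1(original_text, corrected_text) < threshold:
        return original_text  # fall back to the uncorrected text
    return corrected_text
```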
MIPRO uses the ROUGE-L score against the known correct sentence as the metric to optimize, and this module is trained only on a subset of the training samples that contain errors.

4.3.4 Quality Control with ROUGE-L
To ensure the quality of the generated corrections, we calculate the ROUGE-L score between the original erroneous text and the corrected version. If the ROUGE-L score is below a threshold of 0.7, which we set as an arbitrary estimate for quality, we reject the correction and use the original erroneous text instead. This fallback mechanism is based on the observation that the ROUGE-L score of the erroneous text tends to be quite high since the error is only a small portion of the sentence. However, this fallback is more of a contest-metric-focused feature rather than something that significantly improves performance.

5 Results and Discussion
5.1 Overall Performance in the MEDIQA-CORR 2024 Shared Task
Our approach achieved top performance in the MEDIQA-CORR 2024 shared task across all three subtasks. Tables 1, 2, and 3 present the performance of the top 10 teams in each subtask.

5.2 Performance on Subtask 1: Error Prediction
In the official contest results for binary error prediction, our approach achieved an accuracy of 86.5%, ranking first among all participating teams. Table 1 shows the top 10 teams' performance on Task 1.

Table 1: Top 10 teams' performance on Task 1 (Error Flags Accuracy)
Rank | Team | Error Flags Accuracy
1 | WangLab | 86.5%
2 | MediFact | 73.7%
3 | knowlab_AIMed | 69.4%
4 | EM_Mixers | 68.0%
5 | IKIM | 67.8%
6 | IryoNLP | 67.1%
7 | Edinburgh Clinical NLP | 66.9%
8 | hyeonhwang | 63.5%
9 | PromptMind | 62.2%
10 | CLD-MEC | 56.6%

5.3 Performance on Subtask 2: Error Sentence Detection
For error sentence detection, we obtained an accuracy of 83.6%, ranking first among all teams. Table 2 presents the top 10 teams' performance. These results demonstrate the effectiveness of our few-shot learning and CoT-based approach in detecting the presence of errors and localizing the specific sentences containing the errors.

Table 2: Top 10 teams' performance on Task 2 (Error Sentence Detection Accuracy)
Rank | Team | Error Sentence Detection Accuracy
1 | WangLab | 83.6%
2 | EM_Mixers | 64.0%
3 | knowlab_AIMed | 61.9%
4 | hyeonhwang | 61.5%
5 | Edinburgh Clinical NLP | 61.1%
6 | IryoNLP | 61.0%
7 | PromptMind | 60.9%
8 | MediFact | 60.0%
9 | IKIM | 59.0%
10 | HSE NLP | 52.0%

5.4 Performance on Subtask 3: Sentence Correction
For subtask C (Sentence Correction), the official contest results show that our approach achieved an Aggregate-Score of 0.789, which is the mean of ROUGE-1-F (0.776), BERTScore (0.809), and BLEURT (0.783). This was the highest score among the participating teams for the sentence correction task. Table 3 displays the top 10 teams' performance on Task 3. The official contest results highlight the competitive performance of our approach across all three subtasks of the MEDIQA-CORR 2024 shared task, demonstrating its effectiveness in detecting, localizing, and correcting medical errors in clinical text for both the MS and UW datasets.

5.5 Implications and Limitations of the Approach
Our work contributes to the ongoing efforts in improving the accuracy and reliability of medical information in clinical text. The automated detection and correction of certain types of errors could ensure the quality and consistency of medical documentation, ultimately supporting patient safety and quality of care.
The development and integration of more advanced systems could help alleviate the burden of manual error checking for the specific error types addressed, allowing healthcare providers to allocate more time and resources to delivering high-quality patient care. However, it is important to acknowledge the limitations of our approach in the context of the diverse nature of errors in medical documentation. While our system demonstrates strong performance on the MS and UW datasets, it focuses on a specific subset of errors and has not been shown to be effec5 \fRank Team AggregateScore R1F BERTSCORE BLEURT AggregateCR 1 WangLab 0.789 0.776 0.809 0.783 0.775 2 PromptMind 0.787 0.807 0.806 0.747 0.574 3 HSE NLP 0.781 0.779 0.806 0.756 0.512 4 hyeonhwang 0.734 0.729 0.767 0.705 0.571 5 Maven 0.733 0.703 0.744 0.752 0.524 6 Edinburgh Clinical NLP 0.711 0.678 0.744 0.711 0.563 7 knowlab_AIMed 0.658 0.643 0.677 0.654 0.573 8 EM_Mixers 0.587 0.571 0.595 0.596 0.548 9 IryoNLP 0.581 0.561 0.592 0.591 0.528 10 IKIM 0.559 0.523 0.564 0.588 0.550 Table 3: Top 10 teams\u2019 performance on Task 3 (Aggregate Score and its components) tive in addressing the wide diversity of errors that can occur in medical documentation. For instance, our approach does not currently address errors that are propagated through multiple notes when a physician references prior documents containing inaccuracies, such as incorrect medical history. Such errors can be particularly challenging to identify and correct, as they may require a comprehensive understanding of the patient\u2019s medical history, the context of the referenced documents, and the resolution of conflicting statements across documents. Our system has not been designed or evaluated for handling these types of errors. Moreover, our approach does not cover errors that originate from sources beyond the scope of our training data, such as poor transcriptions, entries in the wrong medical record, or errors in decision making. These types of errors may necessitate different strategies and techniques for detection and correction, and our current approach has not been developed to handle them. Additionally, the reliance on external datasets for the retrieval-based approach in the MS dataset limits the generalizability of our method to other medical domains or datasets. In fact, we believe that an approach used in the MS dataset might actually create further errors if used on real clinical text, as real clinical practice does not always reflect optimal or most likely completions. The effectiveness of our approach in detecting and correcting errors may vary depending on the specific characteristics and error types present in different medical contexts, and further evaluation would be necessary to assess its performance in diverse settings. 5.5.1 Impact of Different LLMs and Compilation After the competition ended, we performed additional experiments to compare the performance of our approach when using GPT-4 and GPT-3.5 as the underlying language models for the DSPy modules, as well as the impact of using compiled and uncompiled DSPy programs. Table 4 presents the results of the ablation study for error flag accuracy (Task 1), error sentence detection accuracy (Task 2), and various metrics for Task 3. The results show that using GPT-4 as the underlying LLM consistently yields better performance compared to GPT-3.5 across all tasks. For Task 1, the compiled GPT-4 model achieves the highest accuracy of 97.3% (0.1%), while for Task 2, it achieves an accuracy of 97.0% (0.1%). 
The compiled DSPy programs outperform their uncompiled counterparts for both GPT-3.5 and GPT-4. In Task 3, the compiled GPT-4 model consistently outperforms the other models across all metrics, with the highest AggregateC score of 0.878 (0.002). Moreover, the results demonstrate that using compiled DSPy programs consistently outperforms the uncompiled approach across all tasks and datasets, emphasizing the significance of systematic optimization techniques in enhancing the performance of our error detection and correction system. It is important to note that we did not isolate the impact of retrieval in our post-competition experiments, as it was a fundamental component of all the modules in our approach. Removing the retrieval component would require the development of a new solution. However, the strong performance of our uncompiled GPT-3.5 solution suggests that a significant portion of the performance could be attributed to the retrieval process itself. Future work should 6 \fError Flags Accuracy (Task 1) GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled Error Flags Accuracy 94.0% (0.4%) 81.2% (0.7%) 97.3% (0.1%) 88.9% (0.5%) Error Sentence Detection Accuracy (Task 2) GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled Error Sentence Detection Accuracy 92.8% (0.5%) 78.5% (0.8%) 97.0% (0.1%) 88.0% (0.8%) Task 3 Metrics Metric GPT-3.5 Compiled GPT-3.5 Uncompiled GPT-4 Compiled GPT-4 Uncompiled aggregate_subset_check 0.853 (0.001) 0.809 (0.011) 0.824 (0.003) 0.827 (0.003) R1F_subset_check 0.827 (0.003) 0.778 (0.017) 0.789 (0.003) 0.792 (0.003) BERTSCORE_subset_check 0.874 (0.001) 0.827 (0.013) 0.856 (0.003) 0.857 (0.002) BLEURT_subset_check 0.859 (0.000) 0.824 (0.006) 0.827 (0.002) 0.832 (0.003) AggregateC 0.864 (0.004) 0.736 (0.010) 0.878 (0.002) 0.792 (0.005) Table 4: Ablation studies for error flag accuracy (Task 1), error sentence detection accuracy (Task 2), and Task 3 metrics. Numbers in parentheses represent standard deviations. explore the impact of different retrieval strategies on the performance of error detection and correction in clinical text. 5.6 Future Research Directions Although our approach has demonstrated competitive performance in the MEDIQA-CORR 2024 shared task, there are several potential avenues for future research that could further improve the effectiveness and applicability of our system. One area for future investigation is the finetuning of open access models specifically for clinical notes (Toma et al., 2023). While fine-tuning may lead to higher performance, we focused on working with DSPy in the current study and did not have the computational resources to maintain the necessary throughput and latency during initial experimentation. Future studies could examine the trade-offs between fine-tuning and using off-theshelf models with prompt optimization techniques, taking into account factors such as performance, efficiency, and scalability. Another direction for future research is the expansion of the benchmark dataset to include a broader range of errors, such as those spanning multiple documents or involving suboptimal clinical decisions. Broadening the scope of the dataset would enhance the robustness of error detection and correction systems and extend their applicability to more complex clinical scenarios. Integrating domain-specific knowledge, such as medical ontologies or expert-curated rules, into our approach could improve the system\u2019s ability to handle complex medical cases and make more informed decisions. 
This would be particularly relevant if the errors include suboptimal clinical decisions, as the system could provide more comprehensive support to healthcare professionals. Lastly, developing more comprehensive and robust methods for measuring and correcting errors is an area with significant potential. This could involve creating standardized evaluation metrics and datasets that better capture the intricacies of medical errors and developing more advanced error correction techniques that can handle a wider range of error types and contexts. 6 Conclusion The approach presented in this paper, which combines retrieval-based methods, few-shot learning, and systematic prompt optimization, demonstrates the potential of AI-assisted tools for detecting and correcting medical errors in clinical text. The strong performance achieved across all three subtasks of the MEDIQA-CORR 2024 shared task highlights the effectiveness of our methods in addressing the specific challenges posed by different datasets and error types. However, further research is necessary to extend the applicability of our approach to a wider range of medical contexts, incorporate domain-specific knowledge, and integrate with existing clinical systems. As the field of AI-assisted medical error detection and correction continues to evolve, collaboration between AI researchers and healthcare professionals will be crucial to develop solutions that effectively augment and support clinical decision-making processes, ultimately contributing to improved patient safety and healthcare quality. 7"
},
{
"url": "http://arxiv.org/abs/2404.13099v1",
"title": "Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks",
"abstract": "The rapid progress in the field of natural language processing (NLP) systems\nand the expansion of large language models (LLMs) have opened up numerous\nopportunities in the field of education and instructional methods. These\nadvancements offer the potential for tailored learning experiences and\nimmediate feedback, all delivered through accessible and cost-effective\nservices. One notable application area for this technological advancement is in\nthe realm of solving mathematical problems. Mathematical problem-solving not\nonly requires the ability to decipher complex problem statements but also the\nskill to perform precise arithmetic calculations at each step of the\nproblem-solving process. However, the evaluation of the arithmetic capabilities\nof large language models remains an area that has received relatively little\nattention. In response, we introduce an extensive mathematics dataset called\n\"MathQuest\" sourced from the 11th and 12th standard Mathematics NCERT\ntextbooks. This dataset encompasses mathematical challenges of varying\ncomplexity and covers a wide range of mathematical concepts. Utilizing this\ndataset, we conduct fine-tuning experiments with three prominent LLMs: LLaMA-2,\nWizardMath, and MAmmoTH. These fine-tuned models serve as benchmarks for\nevaluating their performance on our dataset. Our experiments reveal that among\nthe three models, MAmmoTH-13B emerges as the most proficient, achieving the\nhighest level of competence in solving the presented mathematical problems.\nConsequently, MAmmoTH-13B establishes itself as a robust and dependable\nbenchmark for addressing NCERT mathematics problems.",
"authors": "Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah",
"published": "2024-04-19",
"updated": "2024-04-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks",
"main_content": "Introduction Mathematical problem-solving represents a multifaceted cognitive skill, encompassing the comprehension of problem statements, identification of pertinent concepts and formulas, application of suitable strategies and algorithms, precise calculations, and the verification of solution validity and reasonableness. Traditionally, mathematical problem-solving has been imparted and assessed through conventional means such as textbooks, worksheets, and examinations, often affording limited feedback and learner guidance. Furthermore, these methods may not fully capture the diversity and intricacy of real-world mathematical challenges encountered by students. In the era of rapid advancements in artificial intelligence and natural language processing (NLP), large language models (LLMs) have emerged as formidable tools for generating natural language text across a spectrum of domains and tasks [12]. LLMs, grounded in the transformer architecture [32], have the capacity to glean long-range dependencies and contextual representations from vast corpora of text data. These LLMs have showcased impressive proficiency in mathematical reasoning and problem-solving by leveraging their inherent understanding of arithmetic operations, algebraic principles, and symbolic manipulation. Nevertheless, existing LLMs grapple with substantial hurdles in tackling math word problems, particularly those necessitating intricate reasoning, multi-step arithmetic calculations, or domain-specific knowledge [13, 20, 37]. Math-401 + Augmentation WizardMath Llama2 MAmmoTH Supervised Fine-Tuning I n f e r e n c e GSM-8K DeepMind MathQuest NumGLUE Simleq Inference Results Testing the fine-tuned models on these datasets to compare the inference results. Figure 1: This figure shows the fine-tuning flow, the LLMs we use for fine-tuning, and the datasets we use for inference. The advent of large language models (LLMs) has proven to be a boon in the field of education, as evidenced by recent studies [25, 29, 39]. These versatile models have ushered in a new era of learning possibilities, catering to individual student needs by considering their preferences, objectives, interests, and aptitudes. For instance, LLMs offer a tailored learning experience, providing personalized feedback, guidance, explanations, and recommendations [16]. Educators, too, find these models invaluable, as they simplify the creation of engaging learning materials such as quizzes, summaries, questions, and exercises [27]. Notably, LLMs can even generate multiple-choice questions based on provided text passages. Additionally, these models excel in enhancing language proficiency, aiding learners in vocabulary, grammar, pronunciation, and fluency [16]. Their versatility extends to assisting students and researchers in exploring new topics and extracting information from diverse sources. They effortlessly generate summaries [38], identify keywords, generate citations [17, 3, 4], and provide relevant links in response to queries. This paper endeavors to tackle the challenges posed by mathematical problem-solving within the context of LLMs. To this end, we introduce MathQuest, a comprehensive mathematics dataset meticulously curated from the 11th and 12th standard Mathematics NCERT textbooks1. This dataset spans various levels of mathematical complexity and encompasses a wide array of mathematical concepts. We introduce this dataset because existing open-source datasets primarily consist of relatively straightforward mathematical problems. 
In contrast, standard mathematical problems can be significantly more complex. To equip Large Language Models (LLMs) with the ability to solve these intricate problems, we conduct fine-tuning on this dataset. Furthermore, we propose a novel approach for fine-tuning three preeminent LLMs: MAmmoTH [41], LLaMA-2 [31], and WizardMath [23] 1https://ncert.nic.in/ 2 \fusing our MathQuest dataset. Our evaluation encompasses not only the performance of these finetuned models on our dataset but also their proficiency on other openly accessible mathematical reasoning datasets. Our findings indicate that MAmmoTH-13B outshines its counterparts, emerging as the most adept and proficient in solving the mathematical challenges presented. Thus, MAmmoTH13B establishes itself as a dependable and robust baseline for addressing NCERT mathematics problems. 2 Related Work In this section, we delve into the existing literature, unveiling a diverse array of approaches utilizing Large Language Models (LLMs) for tackling mathematical problems. Recent research has highlighted the potential of Large Language Models (LLMs) in education [2, 1]. They offer promise in automating question generation and supporting direct interactions within the learning environment [18]. Furthermore, investigations have explored few-shot prompting techniques over LLMs for addressing mathematical word problems [35, 42, 11]. The \"chain-ofthought\" prompting approach [35] leverages explicit intermediate reasoning steps to bolster the LLM\u2019s reasoning abilities. To mitigate arithmetic errors commonly observed in LLMs [21, 14], earlier studies [7] have explored the use of external calculators to execute operations generated by LLMs. Problem Problem If the lines $2x + y 3 = 0 , 5x + ky 3 = 0$ and $3x y 2 = 0$ are concurrent, find the value of k. If the lines $2x + y 3 = 0 , 5x + ky 3 = 0$ and $3x y 2 = 0$ are concurrent, find the value of k. Solution Solution For lines to be concurrent, they must intersect at a common point. We begin by determining the intersection point of lines (1) and (3). Using the lines $2x + y 3 = 0$ (referred to as (1)) and $3x y 2 = 0$ (referred to as (3)), and solving them simultaneously, we obtain the coordinates (1, 1) for their intersection. This means that for the lines to be concurrent, the point (1, 1) must also satisfy the second line, $5x + ky 3 = 0$ (referred to as (2)). Substituting x = 1 and y = 1 into this equation, we obtain $5(1) + k(1) 3 = 0$, which yields the result k = -2. For lines to be concurrent, they must intersect at a common point. We begin by determining the intersection point of lines (1) and (3). Using the lines $2x + y 3 = 0$ (referred to as (1)) and $3x y 2 = 0$ (referred to as (3)), and solving them simultaneously, we obtain the coordinates (1, 1) for their intersection. This means that for the lines to be concurrent, the point (1, 1) must also satisfy the second line, $5x + ky 3 = 0$ (referred to as (2)). Substituting x = 1 and y = 1 into this equation, we obtain $5(1) + k(1) 3 = 0$, which yields the result k = -2. Figure 2: Our Dataset MathQuest Sample Furthermore, [36] presents a novel method tailored for addressing elementary arithmetic and logical problems. This method concatenates the generated answer with the original problem statement, tasking the model with predicting the initial conditions to verify the accuracy of the answer. 
Notably, a subset of these approaches [10, 5] can function effectively with zero-shot prompts, offering a versatile approach to mathematical problem-solving. A specialized method, MathPrompter [15], targets the enhancement of arithmetic operations and reasoning capabilities of LLMs, particularly designed to facilitate mathematical problem-solving tasks. Various approaches exist for enhancing mathematical problem-solving with Large Language Models (LLMs). Wang et al.\u2019s self-consistency [34], built on the CoT framework, assesses multiple potential reasoning paths and selects answers via majority vote. [22] extend self-consistency by teaching a verifier to validate each step, while [24] use recent LLMs like GPT-3.5 to generate an output, provide feedback, and prompt the model for improvements. [33] evaluate pretrained language models on basic arithmetic expressions, including addition (+) and subtraction (\u2212), and [28] expand the assessment to include multiplication (\u2217) operations within the language models\u2019 scope. 3 Dataset For our research experiments, we employed the Math-401 dataset [40], which encompasses 401 samples of mathematical problems. This dataset encompasses a diverse range of mathematical operations, including addition (+), subtraction (\u2212), multiplication (\u2217), division (/), exponentiation, trigonometric functions (sin, cos, tan), logarithmic functions (log, ln), and incorporates integers, 3 \fdecimals, and irrational numbers (\u03c0, e). Recognizing the limited sample size of this dataset for effective learning by large language models, we expanded it through augmentation, resulting in a dataset size of 302, 000 samples. To construct our augmented dataset, we employed the SymPy Python library. This library allowed us to generate arithmetic mathematical equations along with their corresponding ground truth values. These equations covered basic arithmetic operators such as addition (+), subtraction (-), multiplication (*), and division (/). Furthermore, the dataset includes extensive arithmetic expressions with brackets, mimicking the complexity often encountered in real-world math word problems. Table 1 provides a comprehensive breakdown of the question types utilized in the creation of our augmented dataset. Furthermore, we evaluated our model\u2019s performance on four additional datasets: GSM-8K [8], DeepMind [30], NumGLUE [26], and SimulEq [19]. Type Range Decimal Places (1 4) Variables Count Small Integer [-20, 20] \u00d7 (x, y) 65,000 Small Decimal [-20, 20] \u2713 (x, y) 35,000 Small Decimal + Integer [-20, 20] \u2713 (x, y) 39,000 Large Integer [-1000, 1000] \u00d7 (x, y) 39,000 Large Decimal [-1000, 1000] \u2713 (x, y) 25,000 Large Decimal + Integer [-1000, 1000] \u2713 (x, y) 25,000 3 Terms [-100, 100] \u2713 (x, y, z) 25,000 4 Terms [-100, 100] \u2713 (w, x, y, z) 49,000 Total 302,000 Table 1: The distribution of types of question in our augmented Math-401 dataset 3.1 Our Dataset: MathQuest We have meticulously curated our own dataset, referred to as MathQuest, sourcing problems from high school mathematics NCERT books. MathQuest is a rich resource, encompassing word problems of varying complexities and spanning diverse mathematical concepts. Our dataset comprises a total of 14 overarching mathematical domains, including sets, trigonometry, binomial theorem, and more. The distribution of samples across these concepts is visually represented in Figure.3. Our dataset contains total of 223 samples. 
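Returning to the Math-401 augmentation described above, here is a minimal SymPy-based sketch of how bracketed arithmetic expressions and their ground-truth values could be generated. The expression shapes, value ranges, and sampling choices are illustrative assumptions rather than the authors' exact script.

```python
import random
import sympy

def random_operand(low=-100, high=100, max_decimals=2, use_decimal=True):
    while True:
        if use_decimal:
            v = round(random.uniform(low, high), random.randint(1, max_decimals))
        else:
            v = random.randint(low, high)
        if v != 0:          # avoid zero operands so division stays well-defined
            return v

def make_equation(n_terms=3, low=-100, high=100):
    """Build a bracketed arithmetic expression and evaluate it exactly with SymPy."""
    ops = ["+", "-", "*", "/"]
    terms = [str(random_operand(low, high)) for _ in range(n_terms)]
    expr = terms[0]
    for t in terms[1:]:
        op = random.choice(ops)
        # Randomly bracket the running expression to mimic nested sub-expressions.
        expr = f"({expr}) {op} {t}" if random.random() < 0.5 else f"{expr} {op} {t}"
    value = sympy.sympify(expr)            # exact symbolic evaluation
    return expr, float(sympy.N(value))     # expression text and numeric ground truth

if __name__ == "__main__":
    for _ in range(3):
        question, answer = make_equation()
        print(f"{question} = {answer}")
```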
Notably, as depicted in the charts, the category of "Sequence and Series" boasts the highest number of problems within our dataset. To provide a glimpse of our dataset's structure, we present a sample from MathQuest in Figure 2.

Figure 3: Distribution of the count of samples for each concept.

4 Methodology
This research aims to enhance the mathematical problem-solving capabilities of large language models. Initially, we observed that existing open-source models such as LLaMA-2 [31] and Vicuna [6] struggled with elementary mathematical tasks like simple addition and subtraction. This observation served as the catalyst for our research, motivating us to improve LLMs' proficiency in comprehending and accurately solving mathematical problems. To achieve this, we adopted an instructive approach reminiscent of teaching mathematics to students. We commenced by imparting a clear understanding of fundamental operators such as +, -, *, and /, gradually progressing to more advanced operators and expressions. Similarly, we endeavored to acquaint LLMs with the meanings of mathematical operators and expressions. To facilitate this process, we leveraged the Math-401 dataset [40], a valuable resource comprising 401 samples of basic mathematical questions and their corresponding answers. Given the dataset's limited size, we augmented it to introduce greater diversity and complexity, ensuring that the model could grasp and master advanced mathematical concepts during training.

For the fine-tuning process, we employed three prominent large language models: LLaMA-2 [31], WizardMath [23], and MAmmoTH [41]. LLaMA-2 [31] represents an upgraded version of LLaMA, refined through training on an enriched mixture of publicly available data. The enhancements encompass a 40% increase in the pre-training corpus size, a doubling of the model's context length, and the incorporation of grouped-query attention. WizardMath [23] introduces an innovative approach known as Reinforcement Learning from Evol-Instruct Feedback (RLEIF). This method combines Evol-Instruct and reinforced process supervision techniques to evolve the GSM8k and MATH datasets. Subsequently, it fine-tunes the pre-trained LLaMA-2 model using the evolved data and reward models, resulting in the development of the WizardMath model. Lastly, the MAmmoTH [41] models are trained using the MathInstruct dataset, meticulously curated for instruction tuning. MathInstruct is constructed from a compilation of 13 mathematical datasets, including six newly curated ones with rationales. It encompasses a hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, ensuring comprehensive coverage of diverse mathematical domains. The entire fine-tuning process is outlined in Figure 1.
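A hedged sketch of the fine-tuning flow summarized in Figure 1, using the Hugging Face transformers/peft stack: the paper reports (in the Experiments section) QLoRA with 4-bit quantization, a learning rate of 3e-4, and 10 epochs, while the remaining settings here (base checkpoint, LoRA rank, target modules, batch size) are illustrative assumptions.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"    # or a WizardMath / MAmmoTH checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA-style 4-bit quantization
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(                   # rank/alpha/targets are illustrative, not from the paper
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="math401-qlora",
    learning_rate=3e-4,                     # reported by the paper
    num_train_epochs=10,                    # reported by the paper
    per_device_train_batch_size=4,          # assumption
    optim="adamw_torch",                    # AdamW, as in the paper
)

# train_dataset / eval_dataset would hold the tokenized augmented Math-401 splits.
trainer = Trainer(model=model, args=training_args, train_dataset=None, eval_dataset=None)
# trainer.train()
```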
Model # of Params LLaMA-2 7B LLaMA-2 13B WizardMath 7B WizardMath 13B MAmmoTH 7B MAmmoTH 13B Accuracy GSM-8K DeepMind NumGLUE SimulEq Math-401* MathQuest 16.0 46.0 37.0 11.0 10.0 10.4 22.0 50.0 42.0 15.0 10.0 14.1 61.0 51.0 54.0 27.0 6.0 14.6 65.0 55.0 70.0 36.0 8.0 14.3 43.0 49.0 54.0 23.0 11.0 12.2 44.0 48.0 56.0 26.0 14.0 18.1 Table 2: Exact Match Accuracy results on the set of 100 samples of 5 datasets and our dataset MathQuest Before fine-tuning on Math-401 dataset. (*) refers to the set of Math-401 we augmented for fine-tuning. 5 \f5 Experiments In this section, we delve into the details of our conducted experiments, outlining the experimental setup and the utilized hyper-parameters. Our research objective revolves around the creation of a high school-level mathematical dataset, encompassing questions of varying complexities and diverse concepts, followed by the establishment of robust baselines for solving mathematical problems. To achieve this, we conducted experiments involving three prominent large language models: LLaMA2 [31], WizardMath [41]. We performed these experiments on both the 7B and 13B variants of these large language models (LLMs). Our experiments were executed in two stages. In the first stage, we directly loaded the original model weights and carried out inference on our designated test set. In the second stage, we undertook the fine-tuning of these models using the Math-401 [40] dataset as a crucial step in the process. The Math-401 [40] dataset initially comprised 401 elementary mathematical equations paired with their corresponding results. To enhance its comprehensiveness and diversity, we performed data augmentation by introducing more intricate equations involving operators such as addition (+), subtraction (\u2212), multiplication (\u2217), division (/), as well as parentheses (()). This augmentation process aimed to create a more generalized and versatile dataset. Subsequently, we proceeded to fine-tune the Large Language Models (LLMs) using this augmented Math-401 [40] dataset. Model # of Params LLaMA-2 7B LLaMA-2 13B WizardMath 7B WizardMath 13B MAmmoTH 7B MAmmoTH 13B Accuracy GSM-8K DeepMind NumGLUE SimulEq Math-401* MathQuest 30.0 46.0 45.0 15.0 17.0 10.6 42.0 51.0 54.0 16.0 24.0 20.3 64.0 55.0 52.0 29.0 15.0 16.01 68.0 56.0 70.0 38.0 10.0 20.1 56.0 50.0 62.0 24.0 16.0 18.5 67.0 51.0 64.0 34.0 18.0 24.0 Table 3: Exact Match Accuracy Results on the set of 100 samples of 5 datasets and our dataset MathQuest After fine-tuning on Math-401 dataset. (*) refers to the set of Math-401 we augmented for fine-tuning. The dataset was split into training (241,600 samples), validation (30,200 samples), and test (30,200 samples) subsets. We used the AdamW optimizer, a well-recognized technique, to enhance model performance. This optimization step was crucial for achieving the results in our study. For fine-tuning, we employed QLora [9], an efficient approach that maximizes memory efficiency and minimize computation cost using 4-bit quantization in a pretrained language model, resulting in Low Rank Adapters (LoRA). Each model underwent 10 epochs of fine-tuning with a learning rate of 3 \u00d7 10\u22124. Post fine-tuning, we assessed the models using the same test set employed for pre-fine-tuning inference. The results, summarized in Table. 3, serve to highlight the enhancements achieved in mathematical problem-solving capabilities before and after fine-tuning. 5.1 Evaluation Metric We compared all model variants to evaluate the quality of the generated solutions. 
5.1 Evaluation Metric

We compared all model variants to evaluate the quality of the generated solutions. To measure performance, we assessed the accuracy of matching the generated answers to the reference solutions for five open-source datasets: GSM-8K, DeepMind, SimulEq, NumGLUE, and Math-401. These datasets provide ground-truth answers, which allows exact match accuracy to be computed directly.
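The exact-match criterion is not spelled out in code in the paper. A minimal sketch of such an evaluation loop is given below, assuming the last number in each generation is taken as the model's answer; the extraction regex and the numeric tolerance are illustrative assumptions rather than the authors' parser.

```python
import re
from typing import List, Optional

def extract_final_number(generation: str) -> Optional[float]:
    """Take the last number in a generation as the model's answer (heuristic)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", generation)
    return float(matches[-1]) if matches else None

def exact_match_accuracy(generations: List[str], references: List[float],
                         tol: float = 1e-2) -> float:
    """Fraction of generations whose extracted answer matches the reference."""
    correct = 0
    for gen, ref in zip(generations, references):
        pred = extract_final_number(gen)
        if pred is not None and abs(pred - ref) <= tol:
            correct += 1
    return correct / len(references)

# Hypothetical usage on a 100-sample evaluation subset:
# acc = exact_match_accuracy(model_outputs, [float(s["answer"]) for s in test_samples])
```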
6 Results & Discussion

In this section, we present the outcomes of our experiments in mathematical problem-solving. Our study covers evaluations on our proprietary dataset, MathQuest, as well as five other publicly available datasets, and establishes baseline performance metrics for the task on MathQuest. To gauge the effectiveness of large language models (LLMs) across diverse datasets, we use exact match accuracy as the benchmark metric. We organize our results into two setups, before and after fine-tuning the models, with the primary aim of evaluating the models' learning capabilities.

Table 2 presents the exact match accuracy of the three models, in their 7B and 13B variants, before fine-tuning, on the five public datasets and our dataset MathQuest. As Table 2 shows, the performance of all models is notably lower on the SimulEq dataset as well as on our augmented dataset, Math-401. This discrepancy can be attributed to the presence of intricate problems within these datasets, which often require additional knowledge, such as questions like "Number of red color cards in a deck of 52 cards." Table 3 provides a detailed overview of the accuracy results following the fine-tuning process. In summary, the accuracy of all models improved significantly after fine-tuning on our diverse and complex question-answer dataset. Notably, models with 13B parameters exhibited higher accuracy than those with 7B parameters. The key takeaway from Tables 2 and 3 is that the best-performing model on our MathQuest dataset is MAmmoTH-13B, which reaches the highest accuracy among all models after fine-tuning, at 24.0%. Additionally, both MAmmoTH 7B and 13B produced answers with precision up to two decimal places. It is also evident from Table 3 that our dataset, MathQuest, poses a greater challenge due to its complexity and diversity, resulting in lower accuracy compared to the other datasets.

7 Conclusion

In summary, our approach helps large language models (LLMs) acquire the reasoning skills needed for precise mathematical problem-solving. We introduce tailored question-answer pairs in our MathQuest dataset, encompassing single or multiple mathematical operators and expressions; these supporting simple and complex problems guide the model toward incremental problem-solving. Our primary aim is to provide illustrative examples that improve solution accuracy and clarity. Our results demonstrate significant enhancements in both solution precision and comprehensibility, promising valuable support for educators and students seeking effective mathematical problem-solving capabilities. While our research establishes a robust foundation for advancing mathematical problem-solving with generative LLMs, further refinements and optimizations are essential to extend its applicability across a broader range of scenarios. Ultimately, our work contributes to advancing conceptual understanding and numerical problem-solving in high school-level mathematical question answering, offering valuable assistance to students and professionals grappling with complex questions through LLMs.

8 Limitations

While our proposed solution can successfully solve basic mathematical problems, it occasionally encounters challenges with complex problems that require retaining variable values for use in subsequent equations. Another limitation is that our approach only partially enhances the reasoning abilities of LLMs for mathematical problem-solving; it still falls short on complex expressions that include nested brackets within equations. One likely reason is the limited size of the training dataset, which we plan to expand in future research. We also intend to incorporate recent prompting techniques to further enhance LLMs' reasoning abilities for these types of problems.

9 Acknowledgement

Dr. Rajiv Ratn Shah is partly supported by the Infosys Center for AI, the Center of Design and New Media, and the Center of Excellence in Healthcare at Indraprastha Institute of Information Technology, Delhi. We gratefully thank Dr. Astha Verma and Mr. Naman Lal for their guidance and continuous support during our research. Their knowledge and insightful feedback significantly influenced the direction and quality of our research. We appreciate their time, dedication, and willingness to share their knowledge, all of which contributed considerably to the completion of this work. Their encouragement and constructive discussions were a continual source of motivation for us, and we consider ourselves fortunate to have benefited from their wisdom and leadership."
}
]
}