| { |
| "url": "http://arxiv.org/abs/2404.16645v1", |
| "title": "Tele-FLM Technical Report", |
| "abstract": "Large language models (LLMs) have showcased profound capabilities in language\nunderstanding and generation, facilitating a wide array of applications.\nHowever, there is a notable paucity of detailed, open-sourced methodologies on\nefficiently scaling LLMs beyond 50 billion parameters with minimum\ntrial-and-error cost and computational resources. In this report, we introduce\nTele-FLM (aka FLM-2), a 52B open-sourced multilingual large language model that\nfeatures a stable, efficient pre-training paradigm and enhanced factual\njudgment capabilities. Tele-FLM demonstrates superior multilingual language\nmodeling abilities, measured by BPB on textual corpus. Besides, in both English\nand Chinese foundation model evaluation, it is comparable to strong\nopen-sourced models that involve larger pre-training FLOPs, such as Llama2-70B\nand DeepSeek-67B. In addition to the model weights, we share the core designs,\nengineering practices, and training details, which we expect to benefit both\nthe academic and industrial communities.", |
| "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Tele-FLM Technical Report", |
| "main_content": "Introduction Large Language Models (LLMs) have been considered a remarkable approach for unsupervised learning, utilizing extensive data to achieve significant advancements. Large models based on decoder-only Transformers [64; 43] have demonstrated strong abilities in language understanding, generation, and in-context learning [10]. Through downstream supervised fine-tuning (SFT) and task-specific alignments (e.g., Reinforcement Learning from Human Feedback, RLHF) [41], LLMs have led to significant progress in the development of dialogue assistant applications with their human-level multi-turn interaction capabilities [40]. Furthermore, LLMs have demonstrated complex cognitive abilities as reflected by code interpretation and completion [37], mathematical problem-solving [35], logical reasoning [69], and agent-like actions [9]. Recently, LLMs have also shown potential to facilitate a unified sequence-to-sequence modeling paradigm for multimodal learning by treating image, video, and audio signals all as token sequences [57; 30]. This positions LLMs as pivotal for progress towards Artificial General Intelligence (AGI) [11]. Inspired by the superior performances of proprietary applications [40; 6], a plethora of open-sourced LLMs have been made publicly available for both the English [60; 61; 42; 27; 58] and Chinese [71; 5; 7; 33] communities. The open-sourced models typically vary in size from 7B to 70B parameters, with performance improving with model size and training FLOPs, as described by scaling laws [29; 23]. Open LLMs can be classified into foundation language models, SFT models, and RLHF models. Despite the growing prevalence and impressive evaluation performances, the high computational cost remains the major challenge in LLM development. In this study, we focus on alleviating the excessive computation by establishing a model-producing pipeline that streamlines the hyperparameter searching process, minimizes trial-and-error, and reduces restarts in training. For instance, the Llama technical report [60] reports using around 2,048 A100 GPUs for 5 months, while a single Llama-65B training trial spanned only 21 days, constituting only 14% of the total GPU time. This indicates that open-source endeavors of pre-training LLMs may undergo redundant trial-and-error cycles that consume enormous computational resources. In contrast, in this work, we reduce the total time cost due to restarts and trial-and-error to negligible levels. We believe that sharing our detailed techniques, engineering practices, and training dynamics [20], especially for LLMs exceeding the 50B scale, could benefit the community as well as contribute to green AI. In this report, we introduce Tele-FLM (aka FLM-2), an open multilingual LLM with 52 billion parameters, which is pre-trained from scratch on a 2.0 trillion token corpus comprising texts from English, Chinese, and various other languages. Tele-FLM inherits and extends the low-carbon techniques and fact-enhancing pre-training objectives from the FLM family [33]. The training of Tele-FLM encountered no instability issues other than hardware failures throughout the completed 2T tokens, and remains ongoing on additional data.
In addition to the model checkpoints, we release the details of data composition, model architecture, hyperparameter searching, and the full pre-training dynamics. We evaluate Tele-FLM across multiple English and Chinese benchmarks. Regarding English language modeling, Tele-FLM has better Bits-Per-Byte (BPB) than Llama2-70B [61], demonstrating strong compression capabilities. The model also achieves lower BPB than Llama3-70B [2] and Qwen1.5-72B [5] on Chinese corpora, showcasing its multilingual nature. With fewer English training tokens and a smaller model size, Tele-FLM matches Llama-65B and is comparable to Llama2-70B in English foundation model evaluation. As for Chinese foundation model evaluation, Tele-FLM matches the overall performance of larger multilingual models trained with a similar amount of data (e.g., DeepSeek-67B [7]). On certain tasks, it surpasses larger models trained with significantly more data (e.g., Qwen1.5-72B). The remainder of this report is structured as follows: Section 2 delves into the specifics of pre-training data processing. Section 3 details our model architecture, tokenizer, infrastructure, training techniques, and hyperparameters. In Section 4, we illustrate the pre-training dynamics and conduct BPB-based evaluation and analysis. Benchmark evaluations in both English and Chinese are provided in Section 5. Section 6 discusses some common issues and lessons learned. Section 7 reviews related literature. We conclude our work and look to the future in Section 8. 2 Pre-training Data Our training dataset comprises a variety of domains, as detailed in Table 1. We build a custom pipeline on a Spark cluster for massive data processing and apply custom functions to each subset. The pipeline includes text extraction from HTML/WARC, cleaning and paragraph-level deduplication with heuristic rules, model-based quality filtering, and document-level deduplication with the MinHash [8] algorithm. We obtain 2T tokens after all the procedures, and the distribution ratio between English and Chinese data is roughly 2:1. We incorporate more English data because of its higher quality, especially regarding the WebText domain. Additionally, in line with the methodology of GPT-4, we collected some instruct data and incorporated it into our pre-training data after removing the test sets of common datasets with a strict n-gram-based method. We deliberately avoid \u201ctraining on the test set\u201d or any other benchmark-oriented trick.
Table 1: Pre-training data. For each subset of our 2T pre-training tokens, we detail the language, the sampling proportion, the number of epochs completed during training, and the disk size.
Domain | Language | Sampling Prop. | Epochs | Disk Size
WebText | en, zh | 75.21% | 1.0 | 5.9 TB
Code | code, zh | 9.81% | 1.0 | 528.1 GB
Book | en, zh | 7.17% | 0.8 | 647.6 GB
WorldKnowledge | multi., en, zh | 2.87% | 2.5 | 67.5 GB
QA | en, zh | 2.12% | 1.0 | 159.2 GB
AcademicPaper | en | 0.99% | 1.0 | 54.4 GB
Profession-Law | zh | 1.04% | 1.0 | 84.2 GB
Profession-Math | math | 0.62% | 2.0 | 6.1 GB
Profession-Patent | zh | 0.14% | 1.0 | 10.4 GB
Profession-Medical | zh | 0.02% | 1.0 | 1.2 GB
ClassicalChinese | zh | 0.02% | 2.5 | 0.5 GB
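The document-level deduplication in the pipeline above relies on MinHash [8]. A minimal sketch of such near-duplicate filtering, assuming the datasketch library, character 5-gram shingles, and a 0.8 Jaccard threshold (all of which are our choices, not details given in the report):

```python
from datasketch import MinHash, MinHashLSH

def doc_minhash(text, num_perm=128):
    # Hash character 5-gram shingles into a MinHash signature.
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(max(len(text) - 4, 1))}:
        m.update(shingle.encode('utf-8'))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)

def is_new_document(doc_id, text):
    # Keep a document only if no near-duplicate was indexed before.
    m = doc_minhash(text)
    if lsh.query(m):
        return False
    lsh.insert(doc_id, m)
    return True
```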
WebText. CommonCrawl (https://commoncrawl.org/) is often considered to be a repository containing diverse human experience and rich knowledge (especially long-tail knowledge). However, the high-quality sources in CommonCrawl are primarily concentrated in the English segment, with the Chinese content exhibiting relatively lower information density and quality. We use the latest CommonCrawl dumps from RedPajama [15] and incorporate WudaoCorpora [77] and similar Chinese-specific datasets to form a large web-text dataset. We apply custom heuristic rules and a FastText [28] classifier to filter out low-quality content, cross-deduplicate for each language, and up-sample/down-sample each subset with regard to data quality. The ratio of English to Chinese is approximately 2:1. Code. We incorporate multiple GitHub-like code datasets and post-process them to filter out low-quality and duplicated content. Simultaneously, we carefully assemble and curate a well-formed markdown dataset comprising Chinese technical articles. Book. We collect books from various sources in both English and Chinese, such as RedPajama [15] and Gutenberg (https://www.gutenberg.org/), among others. We develop a series of cleaning steps to remove redundant formatting, garbled text, formula errors, duplicated paragraphs, and other unwanted content from the books. After interleaved deduplication at the document level, we finally obtain a high-quality book dataset. The ratio of English to Chinese is nearly 1:1. WorldKnowledge. To enrich the model\u2019s knowledge base and common sense, we add Wikipedia dumps (https://dumps.wikimedia.org/) from 2024 to our training set, covering 22 languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, ja, nl, pl, pt, ro, ru, sl, sr, sv, uk, zh. We first process these dumps via the Online Language Modelling Dataset Pipeline [59] to clean up the format; then a meticulous multilingual cleaning function is applied to remove references and subsequent content, which tend to be irrelevant to the main text. QA. We use the StackExchange dataset provided by RedPajama-Data [15]. Furthermore, similar Chinese datasets are collected and incorporated into the training after filtering out those QA pairs with low information content. The ratio of English to Chinese in this subset is roughly 1:2. AcademicPaper. We use the arXiv dataset collected and processed by RedPajama-Data. This dataset is processed following a Llama-like procedure, which mainly focuses on clearing useless or redundant formats for better language modeling. Profession. To enhance the model\u2019s capacity in various professional fields, we include some specific domains in our dataset, covering medicine, law, patents, and math. Some subsets are from open-source data, such as Wanjuan-Patent [21] and MathGLM [74]. We post-process each subset independently to address formatting issues, private information disclosure, and other problems. ClassicalChinese. In order to improve the model\u2019s understanding of traditional Chinese culture and its capability in classical Chinese, we carefully collect classic Chinese ancient books and poetry. These materials are more credible than those found in web texts; therefore, we assign them a larger weight during sampling.
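The model-based quality filtering used for several subsets above relies on a FastText classifier. A minimal sketch of such filtering, assuming a pre-trained classifier file and label names of our own choosing (the report does not release its classifier):

```python
import fasttext

# Hypothetical classifier path and label; both are illustrative.
model = fasttext.load_model('quality_classifier.bin')

def keep_document(text, threshold=0.9):
    # fastText expects a single line of input text.
    labels, probs = model.predict(text.replace('\n', ' '))
    return labels[0] == '__label__high_quality' and probs[0] >= threshold
```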
3 Pre-training Details 3.1 Model Architecture We adapt the architecture of FLM-101B [33] as a backbone with several modifications. FLM-101B follows the standard GPT-style decoder-only transformer architecture [43] with pre-normalization and adds a LayerNorm to the last layer\u2019s output. Meanwhile, we apply scalar multipliers to: (1) the output of the word embedding layer and (2) the final output hidden states before softmax. We leave these multipliers tunable in pre-training to control the numerical flow. For example, the output multiplier may benefit training by modulating the entropy of the vocabulary distribution. Building on FLM-101B, we further optimize the model structure for Tele-FLM. Specifically, we use RMSNorm [80] for normalization and SwiGLU [50] for the activation function. We roll back to Rotary Positional Embedding (RoPE) [53] without Extrapolatable Position Embedding (xPos) [55], untie the embedding layer from the language modeling head, and disable linear bias in the attention and all MLP modules. A miniature version, Tele-FLM\u00b5P, is used for hyperparameter search. Table 2 details the architecture of both Tele-FLM and Tele-FLM\u00b5P.
Table 2: Detailed model architecture. The model configuration of Tele-FLM\u00b5P is a reduced version of Tele-FLM with a smaller hidden size.
Models | Layer Num | Attention Heads | Hidden Size | FFN Hidden Size | Vocab Size | Context Length | Params Size (M)
Tele-FLM | 64 | 64 | 8,192 | 21,824 | 80,000 | 4,096 | 52,850
Tele-FLM\u00b5P | 64 | 4 | 512 | 1,344 | 80,000 | 4,096 | 283
3.2 Tokenizer The key to training a text tokenizer is to strike a good trade-off between compression ratio and vocabulary size. English-focused tokenizers like those of GPT-4 or the previous Llama series often underperform in compressing Chinese text. In order to guarantee Tele-FLM\u2019s text compression ratio on Chinese while maintaining performance in a multilingual setting, we train a tokenizer that aligns closely with the pre-training data distribution. We sample 12 million diverse text samples from our pre-training dataset as the tokenizer\u2019s training dataset, including multilingual texts with a primary focus on Chinese and English, code snippets, classical Chinese literature, and mathematical content. We train the tokenizer with the Byte-level BPE (BBPE) algorithm [65]. Table 3 details the tokenizers of Tele-FLM, GPT-4, and the Llama family.
Table 3: Tokenizer compression ratio. The compression ratio is defined as the ratio of token length to the original UTF-8 text length. Smaller values indicate better compression. We report the compression ratios of GPT-4, Llama1/2, Llama3, and Tele-FLM on various domains in our training set, as well as the weighted average.
Tokenizer | Vocab Size | English | Chinese | Classical Chinese | Code | Multilingual | Mathematical | Weighted Avg.
GPT-4 | 100k | 0.221 | 0.420 | 0.478 | 0.267 | 0.303 | 0.508 | 0.291
Llama1/2 | 32k | 0.262 | 0.515 | 0.558 | 0.367 | 0.314 | 0.974 | 0.356
Llama3 | 128k | 0.220 | 0.294 | 0.353 | 0.267 | 0.274 | 0.508 | 0.251
Tele-FLM | 80k | 0.248 | 0.235 | 0.307 | 0.363 | 0.340 | 0.965 | 0.261
The tokenizer of Tele-FLM outperforms GPT-4 and the Llama series in both Chinese and Classical Chinese and is comparable with their performance in English, code, and multilingual content. In math, our tokenizer aligns with Llama2 while slightly trailing GPT-4. Overall, the Tele-FLM tokenizer showcases a superior compression ratio for Chinese text and satisfactory performance in English. While slightly behind Llama3, Tele-FLM outperforms other approaches on average compression ratio by a large margin.
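The metric in Table 3 is straightforward to reproduce. A minimal sketch, assuming a Hugging Face tokenizer (the checkpoint name below is an assumption, not necessarily the released identifier):

```python
from transformers import AutoTokenizer

def compression_ratio(tokenizer, text):
    # Tokens per UTF-8 byte; smaller is better, as in Table 3.
    n_tokens = len(tokenizer.encode(text, add_special_tokens=False))
    return n_tokens / len(text.encode('utf-8'))

tok = AutoTokenizer.from_pretrained('CofeAI/Tele-FLM')  # assumed repo name
print(compression_ratio(tok, 'An example sentence for measuring compression.'))
```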
3.3 Cluster Hardware Tele-FLM is trained on a cluster of 112 A800 SXM4 GPU servers, each with 8 NVLink A800 GPUs and 2TB of RAM. The nodes have heterogeneous CPU architectures: 96 nodes with Intel 8358 (128\u00d7 2.60GHz) CPUs and 16 nodes with AMD 7643 (96\u00d7 2.30GHz) CPUs. All nodes are interconnected via InfiniBand (IB). The training process lasts around two months, including downtime due to unexpected factors. As a comparison of infrastructures, Llama3 [2] is pre-trained on at least 49,152 Nvidia H100 GPUs (in contrast to our 896\u00d7 A800), and Meta also claims to have the equivalent of 600k H100 GPUs for future computing power (https://www.instagram.com/reel/C2QARHJR1sZ/?hl=en). With this significant gap in total resources, computational efficiency and success rate are critical for average entities. 3.4 Parallelism Tele-FLM utilizes 3D parallel training, combining the prevailing methodologies: data parallelism, tensor parallelism, and pipeline parallelism. Data parallelism [63] is a well-established distributed training method, in which the samples in a batch are partitioned and distributed across multiple devices and processed simultaneously. No inter-device communication is involved in the forward and backward computation, while the gradients are aggregated at the end of each step. Tensor parallelism [51] splits specific neural network tensors across multiple devices and computes via inter-device communication. In Tele-FLM training, tensor parallelism is mainly applied to the attention and feed-forward modules. Excessive use of tensor parallelism may escalate GPU communication overheads and reduce the training speed. To alleviate this, we integrate pipeline parallelism [39], which partitions the model at the layer level. 3D parallelism incorporates these parallel approaches, prioritizing allocation of tensor parallelism groups, which have higher communication overheads, to the same node, thereby maximizing intra-node communication and minimizing inter-node communication. The parallel training setup for Tele-FLM combines a tensor parallel size of 4, a pipeline parallel size of 2, and a data parallel size of 112. Additionally, we partition inputs to the Transformer\u2019s LayerNorm and Dropout layers along the sequence length dimension with sequence parallelism [31], yielding further GPU computational and memory savings. Furthermore, we utilize the Distributed Optimizer module from Megatron-LM (https://github.com/NVIDIA/Megatron-LM) [46]. This optimizer further reduces GPU memory consumption by partitioning optimizer states, which have larger memory footprints, across the data parallel dimension.
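As a quick sanity check on the layout above, the three parallel sizes multiply to the full cluster. A minimal sketch (the function is ours, purely illustrative):

```python
def parallel_layout(world_size=896, tp=4, pp=2):
    # 3D parallelism: total GPUs = TP x PP x DP. TP groups carry the
    # highest communication volume, so TP=4 is kept within one
    # 8-GPU A800 node to maximize intra-node (NVLink) traffic.
    assert world_size % (tp * pp) == 0
    dp = world_size // (tp * pp)
    return {'tensor': tp, 'pipeline': pp, 'data': dp}

print(parallel_layout())  # {'tensor': 4, 'pipeline': 2, 'data': 112}
```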
3.5 Hyperparameter Search Effective hyperparameter tuning may accelerate loss reduction and ensure convergence, making it crucial for model training. However, the high cost of training large models often renders exhaustive grid searches impractical. Hence, we employ \u00b5P [73] for optimal hyperparameter search. The Tensor Programs theories [72; 36] reveal universal relations in the training dynamics across a series of models whose widths approach infinity. For certain hyperparameter classes, this leads to a parameterized mapping of their optimal values between small and large widths. Generally, under \u00b5P transfer, wider models will consistently achieve lower loss than narrower ones when trained on identical data [73]. Consequently, if a narrow model converges, its wider counterparts will always converge. Based on this approach, we construct a small model, namely Tele-FLM\u00b5P, for grid search purposes. As demonstrated in Table 2, this small model\u2019s architecture differs from Tele-FLM only in width. With a fixed layer number of 64 and attention head dimension of 128, we reduce the hidden size to 512. This modification results in 4 attention heads and a feed-forward hidden size of 1,344. Due to its smaller size, Tele-FLM\u00b5P allows for significantly more experimental runs within fixed time and resource constraints. We search 7 hyperparameters: the Learning Rate for vector-like and matrix-like weights, the Minimum Learning Rate at the end of the schedule, the initialization Standard Deviation for vector-like and matrix-like weights, the scaling factor for the embedding layer (namely Input Mult), and the scaling factor for the output hidden state in the final layer (namely Output Mult). For the definitions of vector/matrix-like weights and the \u00b5P transferring formula we apply, please refer to [75] and [73]. We use a truncated normal distribution for model initialization. Figure 1 illustrates the loss and gradient norm dynamics of 9 hyperparameter combinations for the grid search, which are selected based on our prior knowledge of model configurations.
Figure 1: Experimental curves of hyperparameter search based on \u00b5P: (a) loss curves and (b) gradient norm curves for the grid search.
We choose the hyperparameters represented by the red line for final training after assessing the rate of loss decrease, trend stability, and gradient norm stability. Using \u00b5P, we derive the optimal hyperparameter configuration for the final 52B model based on this searched result, which is detailed in Table 4. A more fine-grained search can be conducted with expanded time and budgets.
Table 4: Tele-FLM training hyperparameters.
Searched Hyperparameters | Value | Non-Searched Hyperparameters | Value
Learning Rate | 1.5e-4 | LR Schedule Type | cosine
Matrix Learning Rate | 1.5e-4 | LR Schedule (tokens) | 2.5T
Minimum Learning Rate | 1.5e-5 | Warmup Step | 2,000
Standard Deviation | 4e-3 | Clip Grad | 1.0
Matrix Standard Deviation | 4.242e-3 | Weight Decay | 0.0
Input Mult | 1.0 | Batch Size (tokens) | 5,505,024
Output Mult | 3.125e-2 | RoPE Theta | 10,000
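The width-based transfer can be sketched schematically. The rules below follow the generic \u00b5P recipe for Adam-style optimizers; the exact mapping the authors apply comes from [73; 75], and the function here is illustrative only:

```python
def mup_transfer(base, base_width=512, target_width=8192):
    # m is the width multiplier between Tele-FLM muP and the 52B model.
    m = target_width / base_width
    return {
        # Matrix-like (hidden) weights: Adam learning rate scales as 1/m.
        'matrix_lr': base['matrix_lr'] / m,
        # Vector-like weights keep the base learning rate.
        'vector_lr': base['vector_lr'],
        # Matrix-like init std scales as 1/sqrt(m) (variance ~ 1/fan_in).
        'matrix_std': base['matrix_std'] / m ** 0.5,
        'vector_std': base['vector_std'],
        # The output multiplier scales as 1/m under muP.
        'output_mult': base['output_mult'] / m,
        'input_mult': base['input_mult'],
    }
```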
4 Loss Dynamics and BPB Evaluation We present the curves for training and validation loss and gradient norm on our pre-training data distribution in Figure 2.
Figure 2: Pre-training curves for Tele-FLM w.r.t. the amount of data in billions of tokens: (a) training loss, (b) validation loss, and (c) training gradient norm.
Figure 2a shows that the training process of Tele-FLM succeeds with a single, stable run without any divergence. This result is predictable given our \u00b5P hyperparameter search mentioned above. Figure 2b indicates that the loss curve generalizes well to validation data without saturation or overfitting. Figure 2c presents the gradient norm. We observe that the reduction in language modeling loss translates well into improvements on downstream tasks. Language modeling is compression [16]. Evaluation metrics related to language perplexity (PPL) are well known to be closely connected to compression ratio. Moreover, these metrics usually exhibit more stable scaling behavior, making them an authentic foundation of downstream task performance (which is usually measured by more complex and nonlinear metrics [48]). For PPL-related evaluation, we use Bits-Per-Byte (BPB) [38; 18] as our metric, which considers both per-token loss and the influence of domains and tokenizers. Specifically, on a test corpus in a certain domain, if the total loss is close, a model that tokenizes with a better compression ratio is preferred by the BPB metric. For the English language, we break down the BPB evaluation into 6 different domains, represented by validation datasets from WebText (text from CommonCrawl and C4, which approximately represent the same source of broad web data), Github, Wikipedia, Books, ArXiv, and StackExchange, respectively. We compare with different versions of Llama, including Llama-65B, Llama2-70B, Llama3-8B, and Llama3-70B [2], to analyze how well Tele-FLM compresses English data.
Figure 3: BPB curves of Tele-FLM on representative English (en), Chinese (zh), multi-language, and code validation datasets, compared with the Llama series.
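BPB is easy to compute from summed token-level losses. A minimal sketch (our own formulation of the standard definition; the report does not provide an implementation):

```python
import math

def bits_per_byte(total_nll_nats, total_utf8_bytes):
    # Convert a summed negative log-likelihood (in nats) over a corpus
    # into bits per UTF-8 byte. Dividing by bytes rather than tokens
    # makes models with different tokenizers directly comparable.
    return total_nll_nats / (total_utf8_bytes * math.log(2))

# Example: a mean loss of 1.6 nats/token over 1,000 tokens spanning
# 4,200 UTF-8 bytes gives roughly 0.55 bits per byte.
print(bits_per_byte(1.6 * 1000, 4200))
```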
Table 5: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on English datasets. BPB is computed for 6 dataset categories, with weighted sum results based on the Llama [60] and Tele-FLM training data configurations. L-Prop. (Llama [60] proportions): 82% : 4.5% : 4.5% : 4.5% : 2.5% : 2.0%. F-Prop. (Tele-FLM proportions): 75.17% : 13.48% : 3.56% : 5.26% : 1.46% : 1.07%.
Loss:
Model | WebText | Github | Wikipedia | Book | ArXiv | StackExchange | Weighted Sum (L-Prop.) | Weighted Sum (F-Prop.)
Llama-65B | 1.650 | 0.543 | 1.297 | 1.791 | 1.205 | 1.293 | 1.572 | 1.485
Llama2-70B | 1.588 | 0.471 | 1.198 | 1.695 | 1.103 | 1.220 | 1.506 | 1.418
Llama3-70B | 1.729 | 0.597 | 1.300 | 1.886 | 1.042 | 1.388 | 1.642 | 1.556
Qwen1.5-72B | 1.996 | 0.592 | 1.433 | 2.107 | 1.111 | 1.393 | 1.878 | 1.773
Tele-FLM (52B) | 1.598 | 0.314 | 1.163 | 1.843 | 1.153 | 1.193 | 1.512 | 1.411
BPB:
Model | WebText | Github | Wikipedia | Book | ArXiv | StackExchange | Weighted Sum (L-Prop.) | Weighted Sum (F-Prop.)
Llama-65B | 0.615 | 0.286 | 0.595 | 0.710 | 0.590 | 0.570 | 0.602 | 0.574
Llama2-70B | 0.592 | 0.249 | 0.544 | 0.672 | 0.540 | 0.538 | 0.576 | 0.547
Llama3-70B | 0.542 | 0.229 | 0.513 | 0.633 | 0.479 | 0.497 | 0.528 | 0.502
Qwen1.5-72B | 0.642 | 0.234 | 0.601 | 0.717 | 0.521 | 0.515 | 0.620 | 0.586
Tele-FLM (52B) | 0.562 | 0.164 | 0.570 | 0.700 | 0.567 | 0.531 | 0.550 | 0.516
Table 6: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on Chinese datasets. BPB is computed for 7 dataset categories, with direct average and weighted sum results based on the Tele-FLM training data distribution (proportions: 76.60% : 1.91% : 11.61% : 1.44% : 4.50% : 0.07% : 3.87%).
Loss:
Model | WebText | Code | Book | WorldKnowledge | QA | ClassicalChinese | Professional | Direct Average | Weighted Sum
Llama-65B | 1.773 | 1.236 | 2.029 | 1.586 | 2.076 | 2.819 | 1.215 | 1.819 | 1.782
Llama2-70B | 1.419 | 1.019 | 1.542 | 1.189 | 1.681 | 2.233 | 0.896 | 1.426 | 1.414
Llama3-70B | 2.152 | 1.264 | 2.210 | 1.722 | 2.568 | 2.844 | 1.109 | 1.981 | 2.114
Qwen1.5-72B | 2.260 | 1.405 | 2.520 | 1.751 | 2.888 | 2.748 | 0.908 | 2.069 | 2.243
Tele-FLM (52B) | 1.923 | 1.096 | 2.135 | 1.612 | 2.530 | 2.144 | 0.846 | 1.755 | 1.913
BPB:
Model | WebText | Code | Book | WorldKnowledge | QA | ClassicalChinese | Professional | Direct Average | Weighted Sum
Llama-65B | 1.325 | 0.744 | 1.503 | 1.161 | 1.528 | 2.280 | 0.919 | 1.351 | 1.326
Llama2-70B | 1.060 | 0.614 | 1.142 | 0.869 | 1.237 | 1.811 | 0.678 | 1.059 | 1.052
Llama3-70B | 0.913 | 0.498 | 0.943 | 0.752 | 1.063 | 1.458 | 0.485 | 0.873 | 0.897
Qwen1.5-72B | 0.759 | 0.537 | 0.871 | 0.663 | 0.951 | 1.237 | 0.329 | 0.764 | 0.759
Tele-FLM (52B) | 0.643 | 0.478 | 0.741 | 0.619 | 0.831 | 0.949 | 0.290 | 0.650 | 0.646
Figure 3 illustrates the BPB trends w.r.t. the amount of our pre-training data (in trillions of tokens). As training progresses, Tele-FLM surpasses Llama2-70B on WebText, Github, and StackExchange, outperforming Llama-65B and Llama3-8B on almost all datasets, demonstrating strong foundation abilities in English. Numerical results are presented in Table 5. Regarding the weighted sum of BPB, Tele-FLM outperforms Llama-65B, Llama2-70B, Qwen1.5-72B, and Llama3-8B on both the Tele-FLM and Llama [60] weighting proportions. Note that Llama3-8B is trained on more than 15T tokens; these results may indicate that scaling up the model size is still important, despite the rapid growth of the total amount of training data. Similarly to English, we compute BPB across 7 domains with the corresponding Chinese validation data, namely WebText, Code, Book, WorldKnowledge, QA, ClassicalChinese, and Professional. Results are visualized in Figure 3 (with the \u201czh\u201d suffix). Specific scores are provided in Table 6. On all these validation corpora, Tele-FLM demonstrates lower BPB than Qwen1.5-72B and the latest Llama3-70B model. Thus, we conclude that our foundation model achieves strong compression performance for Chinese without sacrificing its English language modeling abilities, and vice versa. 5 Benchmark Evaluations 5.1 English: Open LLM, HumanEval, and BBH Benchmarks. We evaluate Tele-FLM on three public and widely used English benchmarks: the Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), HumanEval [12], and BIG-Bench Hard [52].
\u2022 The Open LLM Leaderboard is hosted on Huggingface and includes 6 key tasks measuring a model\u2019s performance in a variety of areas, such as commonsense inference, knowledge capacity, truthfulness, and math. We report our model\u2019s results with the official evaluation tools (Language Model Evaluation Harness [19]). For the baseline models, we take the results directly from the Open LLM Leaderboard. \u2022 HumanEval, introduced by OpenAI, evaluates the code generation ability of language models by measuring the functional correctness of docstring-prompted output. We choose the pass@5 metric as a trade-off between representing model capability and evaluation speed. \u2022 BIG-Bench Hard is derived from the BIG-Bench benchmark, a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. BIG-Bench Hard, containing 23 challenging tasks, is specifically chosen to represent areas where language models did not surpass average human-rater performance, according to prior evaluations [56].
Table 7: Performance of Tele-FLM and baselines on English benchmarks.
Model | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (zero-shot) | WinoGrande (5-shot) | GSM8K (5-shot) | HumanEval (zero-shot) | BBH (3-shot)
Llama2-70B | 63.39 | 67.32 | 87.33 | 69.83 | 44.92 | 83.74 | 54.06 | 46.95 | 52.94
Llama2-13B | 50.29 | 59.39 | 82.13 | 55.77 | 37.38 | 76.64 | 22.82 | 28.66 | 39.52
Llama-65B | 56.98 | 63.48 | 86.09 | 63.93 | 43.43 | 82.56 | 37.23 | 33.54 | 45.54
Llama-13B | 46.20 | 56.23 | 80.93 | 47.67 | 39.48 | 76.24 | 7.58 | 23.78 | 37.72
Tele-FLM (52B) | 56.60 | 59.47 | 82.25 | 64.00 | 43.09 | 79.40 | 45.19 | 34.76 | 44.60
Results. Table 7 compares Tele-FLM to the Llama series. With 52B parameters and around 1.3T English pre-training tokens, Tele-FLM matches the overall performance of Llama-65B, which is trained on approximately 1.4T tokens. Regarding the nature of the different subtasks, Tele-FLM shows advantages over Llama-65B on GSM8K [14] and HumanEval, which focus on reasoning capabilities, but performs slightly worse on some tasks that rely more heavily on knowledge. This disadvantage can potentially be mitigated as more pre-training data is consumed. Besides, Tele-FLM achieves > 90% of the performance of Llama2-70B, which is larger in size and trained on a 2T token corpus. 5.2 Chinese: OpenCompass Benchmarks. To measure the Chinese language and knowledge capabilities of our model, we conduct an evaluation using the OpenCompass toolkit (https://opencompass.org.cn/home). Specifically, we choose the following tasks to evaluate the model\u2019s performance in multiple aspects: C-Eval [26] and CMMLU [32] (multi-subject knowledge), C3 [54] (reading comprehension), CHID [82] (Chinese culture and language understanding), and CSL [34] (keyword recognition). Results. Table 8 shows evaluation results on the Chinese benchmarks.
Table 8: Performance of Tele-FLM and baselines on Chinese benchmarks. The results of Qwen1.5-72B and our Tele-FLM are locally computed with the OpenCompass toolkit, while other results are taken from the OpenCompass leaderboard.
Model | Average | C-Eval | CMMLU | C3 | CHID | CSL
GPT-4 | 76.64 | 69.90 | 71.00 | 95.10 | 82.20 | 65.00
GPT-3.5 | 61.86 | 52.50 | 53.90 | 85.60 | 60.40 | 56.90
Qwen1.5-72B | 80.45 | 83.72 | 83.09 | 81.86 | 91.09 | 62.50
Qwen-72B | 83.00 | 83.30 | 83.60 | 95.80 | 91.10 | 61.20
DeepSeek-67B | 73.46 | 66.90 | 70.40 | 77.80 | 89.10 | 63.10
Tele-FLM (52B) | 71.13 | 65.48 | 66.98 | 66.25 | 92.57 | 64.38
On average, Tele-FLM achieves significantly higher scores than GPT-3.5 and is comparable to GPT-4 and DeepSeek-67B [7], reaching 84% of Qwen1.5-72B\u2019s performance [5]. Note that Qwen1.5-72B is larger in size and trained with up to 3T tokens. On CHID and CSL, Tele-FLM shows leading performance among all the models compared. Interestingly, CHID is very specific to Chinese culture, while CSL comes from the scientific domain. This indicates Tele-FLM\u2019s potential to both quickly adapt to a specific language and benefit from general knowledge presented in different languages.
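The English results above are produced with the Language Model Evaluation Harness. A hedged sketch of a typical invocation via its Python API (the model path is a placeholder, and the call follows recent lm-eval releases, which may differ from the version used for this report):

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model='hf',
    model_args='pretrained=/path/to/tele-flm-52b',  # placeholder path
    tasks=['arc_challenge'],
    num_fewshot=25,  # the 25-shot ARC setting from Table 7
    batch_size=8,
)
print(results['results'])
```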
5.3 Evolution of Performance during Training We automatically track the evaluation scores on sampled validation data for 8 of the evaluation benchmarks, as depicted in Figure 4. We observe that for all the tasks, the evaluation score improves as the pre-training and validation loss/BPB decreases. For knowledge-oriented English benchmarks, including ARC [13], HellaSwag [78], Winogrande [3], and MMLU [22], the performance increases smoothly with more data, which is intuitive given the task nature. For reasoning-oriented tasks, including GSM8K and BBH, we observe a sharper increase, which indicates that these tasks have more complex metrics and could possibly demonstrate emergent abilities. CMMLU is a knowledge-oriented Chinese benchmark. The sharper increase in CMMLU indicates that our Chinese training data is far from saturating, and further improvement can be expected with the ongoing training process.
Figure 4: Evolution of performance evaluated by the Language Model Evaluation Harness during training, covering ARC, HellaSwag, GSM8K, BBH, TruthfulQA, Winogrande, MMLU, and CMMLU. Note that we sampled 20% of examples for HellaSwag and 30% of examples for MMLU considering the time cost.
6 Lessons Learned Lesson on Pre-training Data. We have the following observations from Tele-FLM\u2019s pre-training process. First, as is widely known, both the quality and quantity of the data are critical for pre-training; however, when there has to be a trade-off between quality and quantity, data quality might be prioritized. For our project, an English-Chinese data ratio of 2:1 works better than 1:1, likely because the average quality of the Chinese web data we have is relatively low. Second, changing the data distribution midway sometimes leads to changes in gradient norm curves and potential divergence, while maintaining a fixed distribution is more stable. Another advantage of maintaining a fixed data distribution is that it allows for safer early stopping of the \u00b5P experiments. To conclude, the data processing should be as complete as possible before the pre-training starts. Lesson on Hyperparameter Search. We observe that \u00b5P-based methods [73; 75] are effective and efficient in searching for the best hyperparameters and predicting the behaviors of the final large models. Specifically, prior experience and open-sourced learning rates are good starting points for hyperparameter search. Nevertheless, initialization standard deviation and output multipliers have more significant influences than commonly known.
Lesson on Loss Dynamics. First, the slope of the loss curve typically flattens after 500B tokens. Therefore, training should be restarted promptly if early loss values are unsatisfactory. Second, random loss spikes are common and acceptable if the gradient norm curve looks normal. We observe that our model recovers from all the spikes in the pre-training process, unlike early open-sourced endeavors [81; 4; 79]. We speculate that modern Llama-like structures, especially those with non-bias designs and truncated normal initialization, combined with effective hyperparameter search, provide decent robustness against loss spikes. Another type of spike corresponds to consistent loss increases, which can be identified early with \u00b5P and avoided before the training begins. Lesson on Gradient Norm. The early gradient norm curves are not strong indicators of training stability. In hyperparameter search, we observe divergence following various gradient curve patterns, yet with higher divergence probabilities associated with continuously increasing gradient trends. 7 Related Work The idea of large foundation models originates from unsupervised pre-training with Transformer-based [64] architectures. Well-known examples of early foundation models include BERT [17], GPT-2 [43], and T5 [45]. GPT-3 [10] increases the model size to 175B and observes decent few-shot and zero-shot reasoning capabilities, which encourages a series of efforts to scale up foundation models [81; 47; 4; 79]. Research on scaling laws [29; 23; 24; 75] sheds light on the predictable trends of model performance as the parameter number increases. On the other hand, other works explore emergent abilities [68; 67; 48] and their relationships to evaluation metrics and task nature. The Llama series [60; 61; 2] is well known for its contributions to open-sourced large language models and is widely regarded as a strong baseline for foundation model evaluation. Falcon [42] explores data processing of publicly available pre-training corpora. Mistral [27] and Gemma [58] release 7B-scale models that are trained with more data and incorporate advanced designs. For the Chinese community, Qwen [5], Baichuan [71], Yi [76], and DeepSeek [7] represent efforts in multilingual foundation model pre-training and open-sourcing. FLM-101B [33] studies methodologies for training large foundation models under limited budgets. InstructGPT [41] establishes the paradigm of aligning large foundation models with human preferences. Widely used approaches include supervised fine-tuning (SFT) [66; 70] and Reinforcement Learning from Human Feedback (RLHF) [49], among others [44]. Aligning techniques turn foundation models into dialogue agents, which form the core of AI assistants in commercial use. Closed-source dialogue agents are represented by GPT-4 [40], Claude [6], Grok [1], and Gemini [57]. Open-sourced chat models include Zephyr [62] and ChatGLM [25], among the large number of human-aligned versions of the open foundation models mentioned above. 8 Conclusions and Future Work In this report, we introduce Tele-FLM, an open multilingual foundation model. With 52B parameters and 2T training tokens, Tele-FLM matches the performance of larger models trained with more data, in both multilingual language modeling capabilities and benchmark evaluations. The pre-training procedure of Tele-FLM features a high success rate and a low carbon footprint.
We open-source the model weights as well as technical details and training dynamics. We hope this work will catalyze the growth of open-sourced LLM communities and reduce the trial-and-error cycles required to train LLMs with more than 50B parameters. Note that although efforts are made to filter out harmful content in the training data, such outputs could still potentially be elicited from the released model and do not represent the opinions of the authors or entities involved. For future work, we plan to continue enhancing the capabilities of Tele-FLM to facilitate broader applications, as well as to develop efficient training techniques to explore the largely uncharted space of larger-scale dense models. Acknowledgments This work is supported by the National Science and Technology Major Project (No. 2022ZD0116300) and the National Science Foundation of China (No. 62106249). We would like to thank Boya Wu, Li Du, Quanyue Ma, Hanyu Zhao, Shiyu Wu, and Kaipeng Jia for their help with data; Hailong Qian, Jinglong Li, Taojia Liu, Junjie Wang, Yuanlin Cai, Jiahao Guo, Quan Zhao, Xuwei Yang, Hanxiao Qu, Yan Tian, and Kailong Xie for their help with computational resources; and all other colleagues for their strong support of this project.",
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.13099v1", |
| "title": "Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks", |
| "abstract": "The rapid progress in the field of natural language processing (NLP) systems\nand the expansion of large language models (LLMs) have opened up numerous\nopportunities in the field of education and instructional methods. These\nadvancements offer the potential for tailored learning experiences and\nimmediate feedback, all delivered through accessible and cost-effective\nservices. One notable application area for this technological advancement is in\nthe realm of solving mathematical problems. Mathematical problem-solving not\nonly requires the ability to decipher complex problem statements but also the\nskill to perform precise arithmetic calculations at each step of the\nproblem-solving process. However, the evaluation of the arithmetic capabilities\nof large language models remains an area that has received relatively little\nattention. In response, we introduce an extensive mathematics dataset called\n\"MathQuest\" sourced from the 11th and 12th standard Mathematics NCERT\ntextbooks. This dataset encompasses mathematical challenges of varying\ncomplexity and covers a wide range of mathematical concepts. Utilizing this\ndataset, we conduct fine-tuning experiments with three prominent LLMs: LLaMA-2,\nWizardMath, and MAmmoTH. These fine-tuned models serve as benchmarks for\nevaluating their performance on our dataset. Our experiments reveal that among\nthe three models, MAmmoTH-13B emerges as the most proficient, achieving the\nhighest level of competence in solving the presented mathematical problems.\nConsequently, MAmmoTH-13B establishes itself as a robust and dependable\nbenchmark for addressing NCERT mathematics problems.", |
| "authors": "Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks", |
| "main_content": "Introduction Mathematical problem-solving represents a multifaceted cognitive skill, encompassing the comprehension of problem statements, identification of pertinent concepts and formulas, application of suitable strategies and algorithms, precise calculations, and the verification of solution validity and reasonableness. Traditionally, mathematical problem-solving has been imparted and assessed through conventional means such as textbooks, worksheets, and examinations, often affording limited feedback and learner guidance. Furthermore, these methods may not fully capture the diversity and intricacy of real-world mathematical challenges encountered by students. In the era of rapid advancements in artificial intelligence and natural language processing (NLP), large language models (LLMs) have emerged as formidable tools for generating natural language text across a spectrum of domains and tasks [12]. LLMs, grounded in the transformer architecture [32], have the capacity to glean long-range dependencies and contextual representations from vast corpora of text data. These LLMs have showcased impressive proficiency in mathematical reasoning and problem-solving by leveraging their inherent understanding of arithmetic operations, algebraic principles, and symbolic manipulation. Nevertheless, existing LLMs grapple with substantial hurdles in tackling math word problems, particularly those necessitating intricate reasoning, multi-step arithmetic calculations, or domain-specific knowledge [13, 20, 37].
Figure 1: Overview of the pipeline: Math-401 plus augmentation is used for supervised fine-tuning of LLaMA-2, WizardMath, and MAmmoTH, and the fine-tuned models are then tested by inference on GSM-8K, DeepMind, MathQuest, NumGLUE, and SimulEq to compare the inference results.
The advent of large language models (LLMs) has proven to be a boon in the field of education, as evidenced by recent studies [25, 29, 39]. These versatile models have ushered in a new era of learning possibilities, catering to individual student needs by considering their preferences, objectives, interests, and aptitudes. For instance, LLMs offer a tailored learning experience, providing personalized feedback, guidance, explanations, and recommendations [16]. Educators, too, find these models invaluable, as they simplify the creation of engaging learning materials such as quizzes, summaries, questions, and exercises [27]. Notably, LLMs can even generate multiple-choice questions based on provided text passages. Additionally, these models excel in enhancing language proficiency, aiding learners in vocabulary, grammar, pronunciation, and fluency [16]. Their versatility extends to assisting students and researchers in exploring new topics and extracting information from diverse sources. They effortlessly generate summaries [38], identify keywords, generate citations [17, 3, 4], and provide relevant links in response to queries. This paper endeavors to tackle the challenges posed by mathematical problem-solving within the context of LLMs. To this end, we introduce MathQuest, a comprehensive mathematics dataset meticulously curated from the 11th and 12th standard Mathematics NCERT textbooks (https://ncert.nic.in/). This dataset spans various levels of mathematical complexity and covers a wide range of mathematical concepts. We introduce this dataset because existing open-source datasets primarily consist of relatively straightforward mathematical problems.
In contrast, standard mathematical problems can be significantly more complex. To equip Large Language Models (LLMs) with the ability to solve these intricate problems, we conduct fine-tuning on this dataset. Furthermore, we propose a novel approach for fine-tuning three preeminent LLMs, MAmmoTH [41], LLaMA-2 [31], and WizardMath [23], using our MathQuest dataset. Our evaluation encompasses not only the performance of these fine-tuned models on our dataset but also their proficiency on other openly accessible mathematical reasoning datasets. Our findings indicate that MAmmoTH-13B outshines its counterparts, emerging as the most adept and proficient in solving the mathematical challenges presented. Thus, MAmmoTH-13B establishes itself as a dependable and robust baseline for addressing NCERT mathematics problems. 2 Related Work In this section, we delve into the existing literature, unveiling a diverse array of approaches utilizing Large Language Models (LLMs) for tackling mathematical problems. Recent research has highlighted the potential of Large Language Models (LLMs) in education [2, 1]. They offer promise in automating question generation and supporting direct interactions within the learning environment [18]. Furthermore, investigations have explored few-shot prompting techniques over LLMs for addressing mathematical word problems [35, 42, 11]. The \"chain-of-thought\" prompting approach [35] leverages explicit intermediate reasoning steps to bolster the LLM\u2019s reasoning abilities. To mitigate arithmetic errors commonly observed in LLMs [21, 14], earlier studies [7] have explored the use of external calculators to execute operations generated by LLMs.
Figure 2: Our Dataset MathQuest Sample. Problem: If the lines $2x + y - 3 = 0$, $5x + ky - 3 = 0$, and $3x - y - 2 = 0$ are concurrent, find the value of k. Solution: For lines to be concurrent, they must intersect at a common point. We begin by determining the intersection point of lines (1) and (3). Using the lines $2x + y - 3 = 0$ (referred to as (1)) and $3x - y - 2 = 0$ (referred to as (3)), and solving them simultaneously, we obtain the coordinates (1, 1) for their intersection. For the lines to be concurrent, the point (1, 1) must also satisfy the second line, $5x + ky - 3 = 0$ (referred to as (2)). Substituting x = 1 and y = 1 into this equation, we obtain $5(1) + k(1) - 3 = 0$, which yields the result k = -2.
Furthermore, [36] presents a novel method tailored for addressing elementary arithmetic and logical problems. This method concatenates the generated answer with the original problem statement, tasking the model with predicting the initial conditions to verify the accuracy of the answer.
Notably, a subset of these approaches [10, 5] can function effectively with zero-shot prompts, offering a versatile approach to mathematical problem-solving. A specialized method, MathPrompter [15], targets the enhancement of the arithmetic operations and reasoning capabilities of LLMs, and is particularly designed to facilitate mathematical problem-solving tasks. Various approaches exist for enhancing mathematical problem-solving with Large Language Models (LLMs). Wang et al.\u2019s self-consistency [34], built on the CoT framework, assesses multiple potential reasoning paths and selects answers via majority vote. [22] extend self-consistency by teaching a verifier to validate each step, while [24] use recent LLMs like GPT-3.5 to generate an output, provide feedback, and prompt the model for improvements. [33] evaluate pretrained language models on basic arithmetic expressions, including addition (+) and subtraction (\u2212), and [28] expand the assessment to include multiplication (\u2217) operations within the language models\u2019 scope. 3 Dataset For our research experiments, we employed the Math-401 dataset [40], which comprises 401 samples of mathematical problems. This dataset encompasses a diverse range of mathematical operations, including addition (+), subtraction (\u2212), multiplication (\u2217), division (/), exponentiation, trigonometric functions (sin, cos, tan), and logarithmic functions (log, ln), and incorporates integers, decimals, and irrational numbers (\u03c0, e). Recognizing the limited sample size of this dataset for effective learning by large language models, we expanded it through augmentation, resulting in a dataset size of 302,000 samples. To construct our augmented dataset, we employed the SymPy Python library. This library allowed us to generate arithmetic equations along with their corresponding ground-truth values. These equations cover basic arithmetic operators such as addition (+), subtraction (-), multiplication (*), and division (/). Furthermore, the dataset includes extensive arithmetic expressions with brackets, mimicking the complexity often encountered in real-world math word problems. Table 1 provides a comprehensive breakdown of the question types utilized in the creation of our augmented dataset. Furthermore, we evaluated our model\u2019s performance on four additional datasets: GSM-8K [8], DeepMind [30], NumGLUE [26], and SimulEq [19].
Table 1: The distribution of question types in our augmented Math-401 dataset.
Type | Range | Decimal Places (1-4) | Variables | Count
Small Integer | [-20, 20] | \u00d7 | (x, y) | 65,000
Small Decimal | [-20, 20] | \u2713 | (x, y) | 35,000
Small Decimal + Integer | [-20, 20] | \u2713 | (x, y) | 39,000
Large Integer | [-1000, 1000] | \u00d7 | (x, y) | 39,000
Large Decimal | [-1000, 1000] | \u2713 | (x, y) | 25,000
Large Decimal + Integer | [-1000, 1000] | \u2713 | (x, y) | 25,000
3 Terms | [-100, 100] | \u2713 | (x, y, z) | 25,000
4 Terms | [-100, 100] | \u2713 | (w, x, y, z) | 49,000
Total | | | | 302,000
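The SymPy-based generation described above can be sketched as follows. This is a minimal illustration under our own assumptions; the term counts, bracketing probability, and rejection of degenerate divisions are our choices, since the paper does not publish its generation script:

```python
import random
from sympy import sympify

def random_expression(n_terms=2, lo=-20, hi=20, decimals=False):
    def term():
        if decimals:
            return str(round(random.uniform(lo, hi), random.randint(1, 4)))
        return str(random.randint(lo, hi))
    expr = term()
    for _ in range(n_terms - 1):
        # Occasionally bracket the running expression to mimic the
        # bracketed expressions described above.
        if random.random() < 0.3:
            expr = '(' + expr + ')'
        op = random.choice(['+', '-', '*', '/'])
        expr = expr + ' ' + op + ' ' + term()
    return expr

def make_sample():
    expr = random_expression(n_terms=random.choice([2, 3, 4]))
    value = sympify(expr)        # exact symbolic evaluation gives the ground truth
    if not value.is_finite:      # reject division-by-zero artifacts (zoo/nan)
        return make_sample()
    return {'question': expr + ' = ?', 'answer': str(value.evalf(6))}
```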
3.1 Our Dataset: MathQuest We have meticulously curated our own dataset, referred to as MathQuest, sourcing problems from high school mathematics NCERT books. MathQuest is a rich resource, encompassing word problems of varying complexities and spanning diverse mathematical concepts. Our dataset comprises a total of 14 overarching mathematical domains, including sets, trigonometry, binomial theorem, and more. The distribution of samples across these concepts is visually represented in Figure 3, and our dataset contains a total of 223 samples.
Figure 3: Distribution of the number of samples for each concept: Sets, Trigonometric Functions, Straight Lines, Statistics, Sequence and Series, Relations and Functions, Probability, Permutation Combination, Linear Inequalities, Limits and Derivatives, Intro to 3-D Geometry, Conic Sections, Complex Numbers and Quadratic Equations, and Binomial Theorem.
Notably, as depicted in Figure 3, the category of \"Sequence and Series\" boasts the highest number of problems within our dataset. To provide a glimpse of our dataset\u2019s structure, we present a sample from MathQuest in Figure 2. 4 Methodology This research aims to enhance the mathematical problem-solving capabilities of large language models. Initially, we observed that existing open-source models such as LLaMA-2 [31] and Vicuna [6] struggled with elementary mathematical tasks like simple addition and subtraction. This observation served as the catalyst for our research, motivating us to improve LLMs\u2019 proficiency in comprehending and accurately solving mathematical problems. To achieve this, we adopted an instructive approach reminiscent of teaching mathematics to students. We commenced by imparting a clear understanding of fundamental operators such as +, \u2212, \u2217, /, gradually progressing to more advanced operators and expressions. Similarly, we endeavored to acquaint LLMs with the meanings of mathematical operators and expressions. To facilitate this process, we leveraged the Math-401 dataset [40], a valuable resource comprising 401 data samples consisting of basic mathematical questions and their corresponding answers. Given the dataset\u2019s limited size, we augmented it to introduce greater diversity and complexity, ensuring that the model could grasp and master advanced mathematical concepts during training. For the fine-tuning process, we employed three prominent large language models: LLaMA-2 [31], WizardMath [23], and MAmmoTH [41]. LLaMA-2 [31] represents an upgraded version of LLaMA, refined through training on an enriched mixture of publicly available data. The enhancements encompass a 40% increase in the pre-training corpus size, a doubling of the model\u2019s context length, and the incorporation of grouped-query attention. WizardMath [23] introduces an innovative approach known as Reinforcement Learning from Evol-Instruct Feedback (RLEIF). This method combines Evol-Instruct and reinforced process supervision techniques to evolve the GSM8k and MATH datasets. Subsequently, it fine-tunes the pre-trained LLaMA-2 model using the evolved data and reward models, resulting in the development of the WizardMath model. Lastly, the MAmmoTH [41] models are trained using the MathInstruct dataset, meticulously curated for instruction tuning. MathInstruct is constructed from a compilation of 13 mathematical datasets, including six with newly curated rationales. It encompasses a hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, ensuring comprehensive coverage of diverse mathematical domains. The entire fine-tuning process is outlined in Figure 1.
Table 2: Exact-match accuracy on sets of 100 samples from five public datasets and our MathQuest dataset, before fine-tuning on Math-401. (*) refers to the version of Math-401 we augmented for fine-tuning.
Model | # of Params | GSM-8K | DeepMind | NumGLUE | SimulEq | Math-401* | MathQuest
LLaMA-2 | 7B | 16.0 | 46.0 | 37.0 | 11.0 | 10.0 | 10.4
LLaMA-2 | 13B | 22.0 | 50.0 | 42.0 | 15.0 | 10.0 | 14.1
WizardMath | 7B | 61.0 | 51.0 | 54.0 | 27.0 | 6.0 | 14.6
WizardMath | 13B | 65.0 | 55.0 | 70.0 | 36.0 | 8.0 | 14.3
MAmmoTH | 7B | 43.0 | 49.0 | 54.0 | 23.0 | 11.0 | 12.2
MAmmoTH | 13B | 44.0 | 48.0 | 56.0 | 26.0 | 14.0 | 18.1
5 Experiments In this section, we delve into the details of our conducted experiments, outlining the experimental setup and the utilized hyperparameters. Our research objective revolves around the creation of a high-school-level mathematical dataset, encompassing questions of varying complexities and diverse concepts, followed by the establishment of robust baselines for solving mathematical problems. To achieve this, we conducted experiments involving three prominent large language models: LLaMA-2 [31], WizardMath [23], and MAmmoTH [41]. We performed these experiments on both the 7B and 13B variants of these large language models (LLMs). Our experiments were executed in two stages. In the first stage, we directly loaded the original model weights and carried out inference on our designated test set. In the second stage, we undertook the fine-tuning of these models using the Math-401 [40] dataset as a crucial step in the process. The Math-401 [40] dataset initially comprised 401 elementary mathematical equations paired with their corresponding results. To enhance its comprehensiveness and diversity, we performed data augmentation by introducing more intricate equations involving operators such as addition (+), subtraction (\u2212), multiplication (\u2217), and division (/), as well as parentheses. This augmentation process aimed to create a more generalized and versatile dataset. Subsequently, we proceeded to fine-tune the Large Language Models (LLMs) using this augmented Math-401 [40] dataset.
Table 3: Exact-match accuracy on sets of 100 samples from five public datasets and our MathQuest dataset, after fine-tuning on Math-401. (*) refers to the version of Math-401 we augmented for fine-tuning.
Model | # of Params | GSM-8K | DeepMind | NumGLUE | SimulEq | Math-401* | MathQuest
LLaMA-2 | 7B | 30.0 | 46.0 | 45.0 | 15.0 | 17.0 | 10.6
LLaMA-2 | 13B | 42.0 | 51.0 | 54.0 | 16.0 | 24.0 | 20.3
WizardMath | 7B | 64.0 | 55.0 | 52.0 | 29.0 | 15.0 | 16.01
WizardMath | 13B | 68.0 | 56.0 | 70.0 | 38.0 | 10.0 | 20.1
MAmmoTH | 7B | 56.0 | 50.0 | 62.0 | 24.0 | 16.0 | 18.5
MAmmoTH | 13B | 67.0 | 51.0 | 64.0 | 34.0 | 18.0 | 24.0
The dataset was split into training (241,600 samples), validation (30,200 samples), and test (30,200 samples) subsets. We used the AdamW optimizer, a well-recognized technique, to enhance model performance. This optimization step was crucial for achieving the results in our study. For fine-tuning, we employed QLoRA [9], an efficient approach that maximizes memory efficiency and minimizes computation cost by applying 4-bit quantization to a pretrained language model and training Low-Rank Adapters (LoRA) on top. Each model underwent 10 epochs of fine-tuning with a learning rate of 3 \u00d7 10^-4.
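The QLoRA setup above can be sketched with the Hugging Face stack. This is a hedged sketch: the adapter rank, alpha, and target modules below are common defaults of ours, not values reported in the paper; only the AdamW optimizer, the 10 epochs, and the 3 \u00d7 10^-4 learning rate come from the text:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = 'meta-llama/Llama-2-7b-hf'  # one of the three base models

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map='auto'
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative adapter settings
    target_modules=['q_proj', 'v_proj'],
    task_type='CAUSAL_LM',
)
model = get_peft_model(model, lora)          # only the low-rank adapters train
# Training then proceeds with AdamW at lr=3e-4 for 10 epochs, as stated above.
```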
5.1 Evaluation Metric

We compared all model variants to evaluate the quality of the generated solutions. To measure performance, we assessed the accuracy in matching the generated answers to the actual solutions for five open-source datasets: GSM-8K, DeepMind, SimulEq, NumGLUE, and Math-401. These datasets provide ground-truth answers for exact match accuracy calculation.

6 Results & Discussion

In this section, we present the outcomes of our experiments in the domain of mathematical problem-solving. Our study encompasses evaluations conducted on our proprietary dataset, MathQuest, as well as five other publicly available datasets. This paper establishes baseline performance metrics for the task using our MathQuest dataset. To gauge the effectiveness of Large Language Models (LLMs) across diverse datasets, we utilize exact match accuracy as a benchmark metric. We organize our results into two distinct setups, before fine-tuning and after fine-tuning the models, with the primary aim of evaluating the model's learning capabilities.

Table 2 presents the exact match accuracy of three models across two variants, 7B and 13B, before fine-tuning, on five datasets and our dataset MathQuest. To summarize these findings, referring to Table 2, the performance of all the models is notably lower on the SimulEq dataset, as well as on our augmented dataset, Math-401. This discrepancy can be attributed to the presence of intricate problems within these datasets, which often require additional knowledge, such as questions like "Number of red color cards in a deck of 52 cards." Consequently, Table 3 provides a detailed overview of the accuracy results following the fine-tuning process. In summary, the accuracy of all models showed significant improvement after undergoing fine-tuning on our diverse and complex question-answer dataset. Notably, models with 13B parameters exhibited higher accuracy compared to those with 7B parameters.

The key takeaways from Table 2 and Table 3 reveal that the best-performing model on our dataset MathQuest is MAmmoTH-13B, exhibiting the highest accuracy among all models after fine-tuning, at 24.0%. Additionally, it is noteworthy that both MAmmoTH 7B and 13B generated outputs with precision up to two decimal places, reflecting their numerical accuracy. From Table 3, it is evident that our dataset, MathQuest, poses a greater challenge due to its complexity and diversity, resulting in lower accuracy compared to other datasets.

7 Conclusion

In summary, our approach enhances Large Language Models (LLMs) in acquiring vital reasoning skills for precise mathematical problem-solving. We introduce tailored question-answer pairs in our MathQuest dataset, encompassing single or multiple mathematical operators and expressions. These supportive simple and complex problems guide the model toward incremental problem-solving. Our primary aim is to provide illustrative examples that improve solution accuracy and clarity. Our results demonstrate significant enhancements in both solution precision and comprehensibility, promising valuable support for educators and students seeking effective mathematical problem-solving capabilities. While our research establishes a robust foundation for advancing mathematical problem-solving with generative LLMs, further refinements and optimizations are essential to extend its applicability across a broader range of scenarios.
Ultimately, our work contributes to advancing conceptual understanding and numerical problem-solving in high school-level mathematical question-answering, offering valuable assistance to students and professionals grappling with complex questions through LLMs.

8 Limitations

While our proposed solution can successfully solve basic mathematical problems, it occasionally encounters challenges when dealing with complex mathematical problems that involve retaining variable values for use in subsequent equations. Another limitation of our proposed work is that it only partially enhances the reasoning abilities of LLMs for solving mathematical problems: the models still fall short in dealing with complex expressions that include nested brackets within equations. One reason could be the limited size of the training dataset; we will try to increase our training data in future research. We intend to address this limitation in our future work, wherein we plan to incorporate recent prompting techniques and further enhance LLMs' reasoning abilities for these types of problems.

9 Acknowledgement

Dr. Rajiv Ratn Shah is partly supported by the Infosys Center for AI, the Center of Design and New Media, and the Center of Excellence in Healthcare at Indraprastha Institute of Information Technology, Delhi. We gratefully thank Dr. Astha Verma and Mr. Naman Lal for their guidance and continuous support during our research. Their knowledge and insightful feedback significantly influenced the direction and quality of our research. We appreciate their time, devotion, and willingness to share information, which all contributed considerably to the accomplishment of this work. Their encouragement and constructive talks were a continual source of motivation for us, and we consider ourselves fortunate to have benefited from their wisdom and leadership." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13556v1", |
| "title": "ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval", |
| "abstract": "Conversational search requires accurate interpretation of user intent from\ncomplex multi-turn contexts. This paper presents ChatRetriever, which inherits\nthe strong generalization capability of large language models to robustly\nrepresent complex conversational sessions for dense retrieval. To achieve this,\nwe propose a simple and effective dual-learning approach that adapts LLM for\nretrieval via contrastive learning while enhancing the complex session\nunderstanding through masked instruction tuning on high-quality conversational\ninstruction tuning data. Extensive experiments on five conversational search\nbenchmarks demonstrate that ChatRetriever substantially outperforms existing\nconversational dense retrievers, achieving state-of-the-art performance on par\nwith LLM-based rewriting approaches. Furthermore, ChatRetriever exhibits\nsuperior robustness in handling diverse conversational contexts. Our work\nhighlights the potential of adapting LLMs for retrieval with complex inputs\nlike conversational search sessions and proposes an effective approach to\nadvance this research direction.", |
| "authors": "Kelong Mao, Chenlong Deng, Haonan Chen, Fengran Mo, Zheng Liu, Tetsuya Sakai, Zhicheng Dou", |
| "published": "2024-04-21", |
| "updated": "2024-04-21", |
| "primary_cat": "cs.IR", |
| "cats": [ |
| "cs.IR", |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval", |
| "main_content": "Introduction Conversational search is rapidly gaining prominence and reshaping how users interact with search engines to foster a more natural informationseeking experience. At the heart of a conversational search system lie two key components: retrieval and generation (Gao et al., 2022; Zhu et al., 2023). The retrieval process is tasked with sourcing relevant passages, which the generation component then uses to craft the final response. Conversational retrieval plays a crucial role in ensuring the accuracy and reliability of the system responses by providing relevant passages (Liu et al., 2023). Compared to traditional ad-hoc web search, conversational retrieval requires an accurate under*Corresponding author. !!: Can the bottom of the ocean freeze? # !: Ocean water freezes just like freshwater, \u2026, because of the salt\u2026 !\": How does it freeze? How does the bottom of ocean water freeze? ChatRetriever LLM \u201cReformulate the current query into a context-free rewrite\u201d Conversational Search Session LLM Prompting LLM-based Rewriter LLM Conv. Retrieval Adaption CSIT on high-quality conversational instruction tuning data Figure 1: Illustration of adapting LLM for query rewriting and conversational dense retrieval. standing of the user\u2019s real search intent within longer, noisier, and more complex conversational contexts. A \u201cshortcut\u201d approach is to transform the conversational session into a standalone query rewrite, enabling the usage of ad-hoc retrievers for conversational retrieval. However, the additionally introduced rewriting process is hard to directly optimize towards better retrieval, and it also introduces extra search latency from the rewriting step (Yu et al., 2021). In contrast, the end-to-end conversational dense retrieval appears to be more promising, as it directly encodes the original conversational search session and passages into dense representations without additional input processing and can enjoy the efficiency benefit from advanced approximate nearest neighbor search algorithms (e.g. Faiss (Johnson et al., 2021)). Nonetheless, the effectiveness of existing conversational dense retrievers largely trails behind state-of-the-art conversational query rewriting approaches, which leverage large language models (LLMs). Owing to their strong text understanding and generation capabilities, LLM-based rewriters (Mao et al., 2023b; Ye et al., 2023) have demonstrated exceptional effectiveness, even outperforming human rewrites. Given that LLMs are inherently generative models, they can naturally serve as a high-quality conversational rewriter just through prompting (Figure 1). The question that remains is: whether the potent capabilities of LLMs can be harnessed to substantially enhance the performance of conversational dense retrievers. Several studies have explored tuning LLMs for arXiv:2404.13556v1 [cs.IR] 21 Apr 2024 \fdense retrieval but with a primary focus on ad-hoc search (Asai et al., 2023; Su et al., 2023; Ma et al., 2023; Wang et al., 2024; Muennighoff et al., 2024). While in conversational search, the multi-turn sessions exhibit greater diversity, complex expressions, and longer-tail intents compared to singleturn ad-hoc queries, posing severe challenges to the session representation learning. Additionally, these approaches often rely on manually designed and fixed instruction templates, which can considerably limit their ability to generalize and handle intricate conversational scenarios. 
In this work, we propose adapting the LLM itself to serve as a powerful conversational dense retriever. To achieve this, we select high-quality conversational instruction tuning data (Ding et al., 2023) as our training data and propose a simple dual-learning approach called Contrastive Session-Masked Instruction Tuning (CSIT) for the model training. Specifically, we adopt the classical contrastive ranking loss function (Izacard et al., 2022) to fine-tune the LLM from a generative model to a retrieval (or representational) model on the multi-turn instruction (i.e., session)-response pairs, using the special tokens at the end of the input text to represent the entire text. Meanwhile, we mix the basic contrastive learning with a session-masked instruction tuning objective, where we mask all tokens except the special tokens of the session when computing the language modeling loss of the response tokens. The incorporation of this generative instruction tuning loss forces a strong enhancement in the learning of the complex session representation, since the response tokens have to be generated solely based on the special tokens representing the session. Furthermore, it also helps retain the strong generalization capability of the LLM for retrieval. Our resulting model, which we call ChatRetriever, can inherit the strong generalization capability of LLMs to robustly represent complex conversational sessions for dense retrieval.

We conducted extensive experiments across five conversational search benchmarks, where ChatRetriever substantially outperforms existing conversational dense retrievers. Notably, it achieves absolute NDCG@3 improvements of 6.8% and 12.2% on CAsT-20 and CAsT-21, respectively, matching the performance of the leading LLM-based conversational query rewriting methods. Beyond standard evaluations using fixed conversational trajectories, we also developed two robustness evaluation methods to assess the resilience of conversational retrieval approaches by altering the historical context. ChatRetriever demonstrates markedly more stable performance in our robustness test, showcasing its superior robustness in comparison to baselines when faced with varied contexts.

Our contributions can be summarized as:
(1) We introduce ChatRetriever, the first LLM-adapted conversational dense retriever, which substantially outperforms existing conversational dense retrievers and achieves performance comparable to LLM-based rewriting approaches.
(2) We propose Contrastive Session-Masked Instruction Tuning for such a retrieval-oriented adaptation of LLMs, which helps achieve better complex session representation and generalization.
(3) We design two robustness evaluation methods for conversational retrieval by systematically varying the conversation contexts. Results highlight ChatRetriever's superior generalization capability in handling diverse conversational search scenarios.

2 Related Work

Conversational search has seen the development of two primary approaches: conversational query rewriting (CQR) and conversational dense retrieval (CDR). The former approach transforms the conversational search problem into a traditional ad-hoc search problem by reformulating the conversational context into a standalone query. Techniques in this area range from selecting useful tokens from the context (Voskarides et al., 2020; Lin et al., 2021b) to training generative rewriters based on session-rewrite pairs (Yu et al., 2020; Wu et al., 2022; Mao et al., 2023a; Mo et al., 2023a).
Inspired by the strong language generation capability of LLMs, some studies (Mao et al., 2023b; Ye et al., 2023; Yoon et al., 2024) propose to leverage LLMs as query rewriters and achieve impressive performance. Conversational dense retrieval (CDR), on the other hand, directly encodes the entire conversational session for end-to-end dense retrieval (Yu et al., 2021). Efforts in this direction have focused on improving session representation from various perspectives, such as context denoising (Mao et al., 2022a; Mo et al., 2023b; Mao et al., 2023c), data augmentation using other corpora and LLMs (Lin et al., 2021a; Mao et al., 2022b; Dai et al., 2022; Jin et al., 2023; Chen et al., 2024; Mo et al., 2024b), and hard negative mining (Kim and Kim, 2022; Mo et al., 2024a).

LLM-based and instruction-aware retrieval. Existing research has demonstrated that, similar to the scaling laws (Kaplan et al., 2020) observed in LLMs, increasing the scale of models, data, and computing resources can also enhance the performance of retrieval models (Ni et al., 2022). To incorporate the ability to follow instructions into retrievers, some studies (Su et al., 2023; Asai et al., 2023) propose the creation of fixed instruction templates for various retrieval tasks and use these instruction-enhanced datasets to train the retrievers. Moreover, there have been efforts to adapt LLMs for retrieval purposes by training on improved search data (Ma et al., 2023; Wang et al., 2024) or developing new search-oriented training objectives (Li et al., 2023). However, these approaches often rely on manually designed and fixed instruction templates, which can limit the generalization capabilities of the retrievers across diverse instructions. Additionally, they are typically designed for single-turn ad-hoc search, lacking the capability to comprehend long and complex search sessions. In contrast to LLMs, which can smoothly understand a wide range of complex user inputs, existing LLM-based retrievers still exhibit a large gap in their generalization capabilities, particularly in the context of conversational search.

3 Methodology

We describe our simple and effective dual-learning approach, Contrastive Session-Masked Instruction Tuning (CSIT), which is designed to adapt an LLM into a generalized and robust conversational dense retriever. An overview is shown in Figure 2.

Contrastive instruction tuning. Recent works have demonstrated the effectiveness of simply using the contrastive ranking loss to adapt an LLM into a retriever (Asai et al., 2023; Su et al., 2023; Ma et al., 2023; Wang et al., 2024; Muennighoff et al., 2024). However, their generalization capability can be limited as they overfit the narrow distribution of ad-hoc queries and fixed instruction templates they were trained on. We fine-tune the LLM on diverse conversational instruction tuning data for a more general conversational retrieval adaptation. Specifically, given a training sample $(x, y^+)$ from a conversational instruction tuning dataset, where $x$ comprises all historical turns and the current instruction (we call $x$ a session) and $y^+$ is the response, we fine-tune the LLM with the contrastive ranking loss:

$$\mathcal{L}_C = -\log \frac{\phi(x, y^+)}{\phi(x, y^+) + \sum_{y^- \in \mathcal{D}^-} \phi(x, y^-)}, \qquad (1)$$

where $\phi(x, y) = \exp\left( E(x) \cdot E(y) / \tau \right)$, $E(\cdot)$ is the shared text encoder of the retriever, $\mathcal{D}^-$ is a negative response collection for $x$, and $\tau$ is a temperature hyperparameter.
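As a concrete illustration of Eq. (1), here is a minimal PyTorch sketch of this contrastive ranking loss. The batch layout, the assumption of precomputed (and typically L2-normalized) session/response embeddings, and all names are our own illustrative choices, not the released implementation.

```python
# Hedged sketch of the contrastive ranking loss in Eq. (1).
import torch
import torch.nn.functional as F

def contrastive_loss(session_emb: torch.Tensor,  # [B, d], E(x)
                     pos_emb: torch.Tensor,      # [B, d], E(y+)
                     neg_emb: torch.Tensor,      # [B, K, d], hard negatives E(y-)
                     tau: float = 0.05) -> torch.Tensor:
    # Scaled similarity of each session with its positive response.
    pos = (session_emb * pos_emb).sum(-1, keepdim=True) / tau        # [B, 1]
    # Scaled similarities with the K negative responses of each sample.
    neg = torch.einsum("bd,bkd->bk", session_emb, neg_emb) / tau     # [B, K]
    logits = torch.cat([pos, neg], dim=-1)  # positive sits at index 0
    labels = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)
    # Cross-entropy with label 0 equals -log softmax of the positive,
    # which is exactly the ranking loss of Eq. (1).
    return F.cross_entropy(logits, labels)
```

In practice such losses are often extended with in-batch negatives (treating other samples' positives as extra negatives), which this sketch omits for clarity.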
To encode text with the LLM, we append $t$ special tokens ([EMB$_1$], ..., [EMB$_t$]) to the end of the input text and utilize the representation of the last token ([EMB$_t$]) as the comprehensive representation of the entire text. This approach is analogous to the text-level chain-of-thought (CoT) (Wei et al., 2022) for LLMs. We hypothesize that these $t$ consecutive special tokens act as a representational chain-of-thought, expanding and guiding the learning space to achieve a more effective representation.

Session-masked instruction tuning. To enhance the generalized encoding of complex search sessions, we integrate a session-masked instruction tuning objective with the fundamental contrastive learning. Given a training sample $(x, y^+)$, we concatenate the instruction and the response to form one input sequence $s$:

$$s = [x_1, \ldots, x_N, \text{[EMB}_1\text{]}, \ldots, \text{[EMB}_t\text{]}, y^+_1, \ldots, y^+_M, \text{[EMB}_1\text{]}, \ldots, \text{[EMB}_t\text{]}], \qquad (2)$$

where $x_i$ and $y^+_i$ represent the $i$-th token of the session and the response, respectively, and $N$ and $M$ denote the total number of tokens in the session and the response, respectively. We then input this sequence into the LLM to obtain the token representations. Specifically, the representations for the $(N + t)$ session tokens are obtained through a standard auto-regressive process. However, for the subsequent $(M + t)$ response token representations, we mask the $N$ session token representations and allow attention only to the $t$ special session tokens and the preceding response tokens. We achieve this by applying a customized attention mask matrix, illustrated on the right side of Figure 2. Correspondingly, the loss function of the session-masked instruction tuning is defined as:

$$\mathcal{L}_S = -\frac{1}{M} \sum_{i=1}^{M} \log p\left( y^+_i \mid y^+_1, \ldots, y^+_{i-1}, \mathbf{x}_{1:t} \right), \qquad (3)$$

where $\mathbf{x}_{1:t}$ are the representations of the $t$ session special tokens, which have been contextualized by the $N$ session tokens.

[Figure 2: Overview of CSIT. We fine-tune the LLM to be ChatRetriever using dual learning objectives: a contrastive ranking loss and a session-masked language modeling loss over the session-response concatenation. We use the last special token (i.e., <EMB_3>) to represent the input text, which can be the session or the response. In the session-masked attention matrix, the blue squares denote the session or response tokens, while the green squares denote their special tokens.]

By masking the session text and forcing correct generation of the response tokens, we build a closer connection between the session representation and the response token representations. The model has to perform a more nuanced understanding of the complex session and accurately encode it into the $t$ session special tokens. We combine the contrastive instruction tuning and the session-masked instruction tuning to form the final training objective of ChatRetriever:

$$\mathcal{L} = \mathcal{L}_C + \alpha \mathcal{L}_S, \qquad (4)$$

where $\alpha$ is a hyperparameter to balance the two losses.
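To make the masking pattern behind Eq. (2)–(3) concrete, below is a small, hedged sketch of the customized attention mask as we understand it from the description: response-side positions keep causal attention but cannot see the N raw session tokens, only the t session special tokens. The layout and function name are assumptions; the released code may realize the mask differently.

```python
# Hedged sketch of the session-masked attention pattern described above.
import torch

def session_masked_attention(N: int, t: int, M: int) -> torch.Tensor:
    """Boolean mask (True = may attend) for the sequence
    [x_1..x_N, EMB_1..EMB_t, y_1..y_M, EMB_1..EMB_t]."""
    L = N + t + M + t
    # Start from the standard autoregressive (lower-triangular) mask.
    mask = torch.ones(L, L).tril().bool()
    # Positions after the session block (responses and their special tokens)
    # must not attend to the N raw session tokens: only the t session
    # special tokens carry the session information forward.
    mask[N + t:, :N] = False
    return mask

# Tiny example: 3 session tokens, 2 special tokens, 2 response tokens.
print(session_masked_attention(N=3, t=2, M=2).int())
```

This is what forces the t session special tokens to become an information bottleneck: the response can only be generated from what they encode.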
Discussion. Our dual-learning approach CSIT takes inspiration from several notable works in LLM-based retrieval and input compression, such as RepLLaMA (Ma et al., 2023), E5mistral-7b (Wang et al., 2024), GRIT (Muennighoff et al., 2024), Gisting (Mu et al., 2023), and AutoCompressor (Chevalier et al., 2023). However, CSIT is distinguished from them in the following key aspects: (1) RepLLaMA and E5mistral-7b primarily focus on contrastive learning using (synthetic) ad-hoc search data with pre-defined instruction templates, which is hard to generalize to complex conversational search scenarios. (2) GRIT aims to build a unified model for both retrieval and generation, incorporating vanilla instruction tuning and using different training data for its contrastive learning and instruction tuning. (3) The mechanism of our session-masked instruction tuning shares similarities with Gisting and AutoCompressor, but they target something completely different: improving long-context language modeling, not retrieval. In contrast, CSIT stands out from these works by specifically addressing the challenge of adapting LLMs to generalize to complex conversational retrieval.

4 Experiments

4.1 Setup

Training data. We fine-tune the LLM to be ChatRetriever on high-quality conversational instruction tuning datasets. We select training samples that are informative, diverse, and exhibit information-seeking intents. Our final training data comprises two sources: (1) the Question About the World subset of UltraChat (Ding et al., 2023) and (2) the MSMARCO (Nguyen et al., 2016) passage ranking dataset. UltraChat is a multi-turn instruction tuning dataset, while MSMARCO can be deemed a single-turn search-oriented instruction tuning dataset by treating the query as the instruction and the positive passage as the response. We find that incorporating MSMARCO is important to improve the basic (ad-hoc) retrieval performance.

Evaluation data and metrics. We conduct evaluations on five public conversational search benchmarks, including QReCC (Anantha et al., 2021), TopiOCQA (Adlakha et al., 2022), CAsT-19 (Dalton et al., 2020), CAsT-20 (Dalton et al., 2021), and CAsT-21 (Dalton et al., 2022). The retrieval corpus sizes of these five datasets are in the tens of millions. Among them, the large-scale QReCC and TopiOCQA have training sets, while the other three CAsT datasets are small datasets that only have test sets. We mainly report NDCG@3 to evaluate retrieval performance, as conversational search is more concerned with the top results (Dalton et al., 2021).

Baselines. We compare ChatRetriever against the following three types of retrieval baselines. The first is CQR baselines, including T5QR (Lin et al., 2020), ConvGQR (Mo et al., 2023a), and LLM4CS (Mao et al., 2023b).
| Model | Base Model | # Model Parameters | QReCC | TopiOCQA | CAsT-19 | CAsT-20 | CAsT-21 |
|---|---|---|---|---|---|---|---|
| *Conversational Query Rewriting* | | | | | | | |
| T5QR | T5-base (Raffel et al., 2020) | 250M | 31.8 | 22.2 | 41.7 | 29.9 | 33.0 |
| ConvGQR | T5-base (Raffel et al., 2020) | 250M | 41.0 | 24.3 | 43.4 | 33.1 | 27.3 |
| LLM4CS (REW) | ChatGPT-3.5 (OpenAI) | Unknown | – | – | 43.1 | 35.7 | 40.4 |
| LLM4CS (RAR) | ChatGPT-3.5 (OpenAI) | Unknown | – | – | 45.3 | 39.5 | 44.9 |
| LLM4CS | ChatGPT-3.5 (OpenAI) | Unknown | – | – | 51.5 | 45.5 | 49.2 |
| *LLM-based Retrieval* | | | | | | | |
| LLM Embedder | BGE (Xiao et al., 2023) | 110M | 50.5 | 22.4 | 36.6 | 15.3 | 31.2 |
| INSTRUCTOR | GTR-XL (Ni et al., 2022) | 1.5B | 42.3 | 12.3 | 26.8 | 17.3 | 32.4 |
| RepLLaMA | LLaMA-2 (Touvron et al., 2023) | 7B | 31.8 | 15.0 | 31.6 | 18.3 | 32.7 |
| E5mistral-7b | Mistral (Jiang et al., 2023) | 7B | 32.9 | 16.9 | 31.3 | 15.4 | 32.4 |
| GRIT | Mistral (Jiang et al., 2023) | 7B | 33.5 | 17.3 | 30.9 | 19.3 | 33.6 |
| *Conversational Dense Retrieval* | | | | | | | |
| Conv-ANCE | ANCE (Xiong et al., 2021) | 110M | 45.6 | 20.5 | 34.1 | 27.5 | 34.2 |
| ConvDR | ANCE (Xiong et al., 2021) | 110M | 35.7 | 26.4 | 43.9 | 32.4 | 37.4 |
| DialogInpainter | T5-Large (Raffel et al., 2020) | 770M | – | – | 47.0 | 33.2 | – |
| LeCoRE | SPLADE (Formal et al., 2022) | 110M | 48.5 | 31.4 | 42.2 | 29.0 | 32.3 |
| ChatRetriever | Qwen (Bai et al., 2023) | 7B | 52.5† | 40.1† | 52.1† | 40.0† | 49.6† |

Table 1: Results of the normal evaluation on five conversational search benchmarks. The base models of CQR methods are their rewriters, and the model parameters are also counted as the rewriter's parameters. † denotes significant differences to baselines (p < 0.05). The best results are bold and the second-best results are underlined.

The original LLM4CS has three prompting methods, REW, RAR, and RTR, and it requires multiple rounds of generation, which is time-consuming. For efficiency considerations, we additionally compare with its two single-generation variants based on RAR and REW. The second is CDR baselines, including ConvDR (Yu et al., 2021), Conv-ANCE (Mao et al., 2023c), DialogInpainter (Dai et al., 2022), and LeCoRE (Mao et al., 2023c). The third is LLM-based retriever baselines, including INSTRUCTOR (Su et al., 2023), LLM Embedder (Zhang et al., 2023), RepLLaMA (Ma et al., 2023), E5mistral-7b (Wang et al., 2024), and GRIT (Muennighoff et al., 2024). More baseline details are in Appendix A.

Implementations. We initialize ChatRetriever with Qwen-7B-Chat (Bai et al., 2023) and train it on eight 40G A100 GPUs using LoRA (Hu et al., 2022) with a maximum input sequence length of 1024. The training process involves 2500 steps with a learning rate of 1e-4, gradient accumulation of 4 steps, a batch size of 64, and 4 hard negatives per sample. For consistency, we adopt the chatml input format of Qwen-Chat to form the input of ChatRetriever. We add three special tokens (i.e., <|extra_1|>, <|extra_2|>, and <|extra_3|>) at the end of the instructions and responses. For baseline comparisons, we adhere to the implementation settings specified in their original papers. Code is released at https://github.com/kyriemao/ChatRetriever.

4.2 Normal Evaluation

The retrieval performance comparisons on the five datasets are reported in Table 1. Our proposed ChatRetriever outperforms all the baseline methods across these datasets. Existing conversational dense retrievers are constrained by limited model capacity and data quality, resulting in suboptimal performance on conversational retrieval tasks. Prior to ChatRetriever, there was a considerable performance gap between existing conversational dense retrieval methods and the state-of-the-art LLM-based conversational query rewriter (i.e., LLM4CS).
Specifically, the absolute gaps between the best existing CDR model and LLM4CS were 1.6%, 12.2%, and 11.8% on the three CAsT datasets, respectively. However, ChatRetriever achieves comparable or even superior performance to LLM4CS, highlighting the high potential of end-to-end conversational dense retrieval compared to the two-stage approach of conversational query rewriting methods. If we force LLM4CS to generate a single output (RAR) or only consider query rewriting (REW) for efficiency, the advantages of ChatRetriever become even more pronounced, with over 4% absolute gains.

Partial response modification:

| Model | CAsT-19 NDCG@3↑ | Diff.↓ | CAsT-20 NDCG@3↑ | Diff.↓ | CAsT-21 NDCG@3↑ | Diff.↓ |
|---|---|---|---|---|---|---|
| LLM4CS | 50.4 | 1.1 | 43.8 | 1.7 | 49.4 | 0.2 |
| ConvDR | 44.3 | 0.4 | 31.0 | 1.4 | 34.8 | 2.6 |
| LeCoRE | 44.5 | 2.3 | 25.4 | 3.6 | 29.9 | 2.4 |
| ChatRetriever | 52.2 | 0.1 | 39.5 | 0.5 | 48.9 | 0.7 |

Full context modification:

| Model | CAsT-19 Mean↑ | SD↓ | CAsT-20 Mean↑ | SD↓ | CAsT-21 Mean↑ | SD↓ |
|---|---|---|---|---|---|---|
| LLM4CS | 49.7 | 1.5 | 44.0 | 1.1 | 48.4 | 1.4 |
| ConvDR | 39.3 | 3.4 | 30.2 | 2.6 | 35.8 | 2.9 |
| LeCoRE | 42.0 | 1.9 | 28.3 | 2.2 | 31.0 | 2.3 |
| ChatRetriever | 51.5 | 1.6 | 45.8 | 1.7 | 48.8 | 1.8 |

Table 2: Results of the robust evaluation under partial response modification (top) and full context modification (bottom). Diff. represents the absolute difference compared to the results in Table 1, and SD represents the standard deviation, where a smaller value means more stable.

We also observe that existing LLM-based retrievers do not perform well on conversational retrieval tasks. This can be attributed to the fact that they are fine-tuned solely on templated instructions, which fails to fully leverage the generalization capabilities of LLMs to handle complex and diverse conversational scenarios.

4.3 Robustness Evaluation

Existing evaluations for conversational retrieval are mainly conducted on fixed conversation trajectories. In this section, we evaluate the robustness of conversational retrievers under different contexts. Our principle is to modify the context but fix the current query (i.e., the search intent) for each turn, so that the original relevance labels can be reused. Specifically, we propose the following two types of context modification:

(1) Partial response modification: We do not use the provided responses in the evaluation dataset. Instead, for each turn, we input the current query, the context, and the top-3 passages retrieved by the conversational retriever, and prompt the LLM to generate the response. The simulated online nature of generating responses turn-by-turn better matches how conversational retrieval systems are used in practice. However, a problem with this online evaluation manner is that the query of the next turn in the original dataset may become unreasonable after modifying its last response (Li et al., 2022). We propose a simple heuristic method to tackle this problem with an LLM. Specifically, we prompt the LLM to judge whether the current query is reasonable given the context. If not, we replace the current query with its human rewrite so that it stands on its own without needing external context. Otherwise, we use the original query. The prompts can be found in Appendix B.

(2) Full context modification: For each turn, we supply the original query and its human-modified version to the LLM, prompting it to generate new contexts (see Appendix C). We finally obtained five different contexts for each turn.

We evaluate conversational retrievers on the different contexts generated by these two modification methods using ChatGPT 3.5. For the partial response modification setting, we report the retrieval performances and their absolute differences (Diff.)
compared to the original counterpart results reported in Table 1. For the full context modification setting, we report the mean performance over different runs and their standard deviation (SD). The robust evaluation results are shown in Table 2.

For the partial response modification setting, the performance changes of ChatRetriever are the smallest. By referring to Table 1, we also observe a general degradation in retrieval performance compared to the original context. This degradation may stem from the retrieved passages being inaccurate, consequently leading to inaccurate responses, which then affect the retrieval performance of the subsequent turns. For the full context modification setting, the robustness of ChatRetriever is further highlighted by its small average standard deviation of 1.7, which is lower than the 3.0 and 2.1 standard deviations observed for ConvDR and LeCoRE, respectively. These results demonstrate the strong robustness of ChatRetriever to different conversational search contexts. In contrast, LLM4CS, which utilizes ChatGPT for query rewriting, shows an even lower standard deviation of 1.3, demonstrating the superior robustness of ChatGPT for conversational query rewriting.

4.4 Ablation Studies

We build three ablations to study the effects of our proposed training approach: (1) w/o R-CoT: removing the representational CoT; (2) w/o SIT: removing the session-masked instruction tuning; (3) with Vanilla IT: replacing the session-masked instruction tuning with vanilla instruction tuning. Table 4 shows the ablation results.

| Ablation | CAsT-19 | CAsT-20 | CAsT-21 |
|---|---|---|---|
| w/o SIT | 49.5 | 36.8 | 45.8 |
| w/o R-CoT | 49.9 | 38.5 | 47.5 |
| with Vanilla IT | 51.1 | 39.3 | 48.4 |
| CSIT | 52.1 | 40.0 | 49.6 |

Table 4: Results of ablation studies.

We find that either removing the representational CoT or removing or replacing the session-masked instruction tuning leads to performance degradation. The session-masked instruction tuning, which achieves 6.6% relative performance gains across the three CAsT datasets on average, is shown to be more effective than the representational CoT, which achieves 3.4% relative performance gains on average. The results suggest that our two techniques have positive effects in helping adapt LLMs for conversational retrieval. We also studied the influence of the number of special CoT tokens, which can be found in Appendix D.

4.5 Influence of LLMs

Table 3 shows comparisons between different settings for the backbone LLM of ChatRetriever.

| Base LLM | Model Parameters | Base/Chat | Training | CAsT-19 | CAsT-20 | CAsT-21 |
|---|---|---|---|---|---|---|
| Qwen | 1.8B | Chat | Full | 38.8 | 33.7 | 45.2 |
| Qwen | 1.8B | Chat | LoRA | 35.1 | 31.9 | 42.4 |
| Qwen | 7B | Base | LoRA | 46.9 | 37.7 | 46.5 |
| Qwen | 7B | Chat | LoRA | 50.5 | 40.0 | 49.6 |
| LLaMA-2 | 7B | Chat | LoRA | 47.3 | 38.4 | 49.1 |
| Mistral | 7B | Chat | LoRA | 49.5 | 39.2 | 49.6 |

Table 3: Performance comparisons of ChatRetrievers under different settings with different backbone LLMs.

(1) Base vs. Chat. Our results indicate that the Chat model outperforms the Base model, which aligns with our expectations. We hypothesize that the ability to follow instructions well is indicative of strong generalization capabilities, which are crucial for complex conversational search tasks. Therefore, the Chat model, having been fine-tuned on conversational instructions, provides a more appropriate foundation for this task. (2) Different LLMs. We find that different LLMs have similar performance under our training recipe.
Even the weakest variant, based on LLaMA-2, still largely outperforms existing conversational dense retrieval baselines on the more complex CAsT-20 and CAsT-21 datasets, and also outperforms the smaller ChatRetrievers. (3) LoRA vs. full parameter tuning. Due to constraints in computing resources, our investigation into training modes (i.e., LoRA vs. full parameter tuning) was limited to the 1.8B-scale model. Our findings indicate that LoRA training yields inferior performance compared to full parameter tuning. However, this may be attributed to the LoRA parameter capacity being insufficient for the 1.8B model.

4.6 Influence of Training Data

Fine-tuning on different data sources. Table 6 presents the performance of ChatRetriever when trained solely on UltraChat, solely on MSMARCO, and on a combination of QReCC+MSMARCO (i.e., replacing UltraChat with QReCC's training set). Model performance is evaluated using both session inputs and human-rewrite inputs (i.e., converted to ad-hoc search).

| Data Source | CAsT-20 Session | CAsT-20 Rewrite | CAsT-21 Session | CAsT-21 Rewrite |
|---|---|---|---|---|
| Only U | 39.5 | 43.7 | 46.5 | 50.0 |
| Only M | 18.3 | 49.8 | 34.1 | 58.9 |
| Q+M | 31.5 | 46.9 | 42.4 | 47.9 |
| U+M | 40.0 | 49.9 | 49.6 | 59.2 |

Table 6: Comparisons of using different data source combinations for training. U, M, and Q represent UltraChat, MSMARCO, and QReCC, respectively.

We find that training exclusively on UltraChat leads to a decline in performance for both input types, with a more pronounced degradation observed for the rewrite input. Conversely, training solely on MSMARCO yields comparable results for the rewrite input but considerably worse performance for the session input. These results suggest that MSMARCO effectively enhances the ad-hoc retrieval capabilities of LLMs, possibly due to its well-curated hard negatives. However, ad-hoc search data from MSMARCO alone is insufficient for transferring the generalization capability of LLMs to the more complex context of conversational search. Traditional conversational QA data (i.e., QReCC) is also not highly effective for LLMs in learning a diverse range of complex conversational patterns. To optimize an LLM to be a universal conversational retriever, we recommend combining general conversational instruction tuning data (e.g., UltraChat) with ad-hoc search-oriented instruction tuning data (e.g., MSMARCO).

Continually fine-tuning baselines on the same training data as ChatRetriever. In Table 1, we follow the original training settings of the baselines. Here, we further fine-tune the baselines on the training data of ChatRetriever.

| Methods | QReCC (Original → New) | TopiOCQA (Original → New) | CAsT-19 (Original → New) | CAsT-20 (Original → New) | CAsT-21 (Original → New) |
|---|---|---|---|---|---|
| GRIT | 33.5 → 48.3 | 17.3 → 36.0 | 30.9 → 47.1 | 19.3 → 35.7 | 33.6 → 45.3 |
| Conv-ANCE | 45.6 → 44.8 | 20.5 → 21.6 | 34.1 → 35.0 | 27.5 → 30.5 | 34.2 → 36.0 |
| ConvDR | 35.7 → 36.0 | 26.4 → 24.9 | 43.9 → 43.2 | 32.4 → 30.9 | 37.4 → 35.5 |
| LeCoRE | 48.5 → 46.1 | 31.4 → 31.0 | 42.2 → 42.9 | 29.0 → 30.1 | 32.3 → 33.4 |
| ChatRetriever | 52.5 | 40.1 | 52.1 | 40.0 | 49.6 |

Table 5: Results of continually fine-tuning baselines on the training data of ChatRetriever. "Original" and "New" denote the performance before and after fine-tuning, respectively.

[Figure 3: Performance of ChatRetriever at different training steps (100 to 2500), reporting NDCG@3 on CAsT-20 and CAsT-21 for both session inputs and human-rewrite inputs.]
Results are shown in Table 5, and we find: (1) GRIT, a unified retrieval and generation model based on an LLM, showed substantial performance improvement after fine-tuning on conversational instruction tuning data. Its performance approached that of ChatRetriever without session-masked instruction tuning, although it still lagged behind the final ChatRetriever. (2) The performance of Conv-ANCE, ConvDR, and LeCoRE did not show noticeable improvements and even experienced declines on QReCC and TopiOCQA. This may be because the newly introduced training data disrupted their original in-domain training-test settings, as they were initially trained on the in-domain training sets of QReCC and TopiOCQA. This also highlights the robust generalization of ChatRetriever, which, when trained only on general conversational instruction tuning data, can effectively adapt to various conversational search test sets.

Data volume. Figure 3 shows the performance of ChatRetriever across various training steps. It is observed that the performance attains a relatively high level at 500 steps and subsequently experiences marginal improvements as the number of training steps increases. The performance stabilizes upon reaching 2500 steps. Furthermore, the trends for inputs with sessions and human rewrites are similar. These findings suggest that, under our framework, adapting LLMs to function effectively as conversational retrievers may require only a small amount of high-quality data.

5 Conclusion

In this paper, we introduce ChatRetriever, a large conversational retrieval model adapted from an LLM. We propose a novel contrastive session-masked instruction tuning approach for this adaptation and fine-tune the LLM on high-quality conversational instruction tuning data. Experimental results on five conversational retrieval datasets demonstrate the superior performance and robustness of ChatRetriever. Looking ahead, we aim to further explore and expand the generalization capabilities of ChatRetriever in a broader range of complex IR scenarios beyond conversational search, such as legal case retrieval, product search, and other instruction-following search tasks. We envision ChatRetriever being as versatile as LLMs, capable of accepting and understanding any conversational inputs and retrieving useful information for those inputs.

Limitations

Efficiency. As indicated in Table 1, ChatRetriever is a 7B model, which is much larger than existing CDR models. Our preliminary findings (Section 4.5) suggest that the large model size is a crucial factor in ChatRetriever's exceptional performance. However, this also raises efficiency concerns. With an embedding dimension of 4096, ChatRetriever incurs higher time and storage costs for indexing and retrieval than existing CDR models. Nevertheless, on the one hand, ChatRetriever's enhanced retrieval accuracy potentially reduces the need for extensive passage re-ranking, which could, in real-world applications, offset the initial higher costs by ultimately reducing the total time spent on ranking. On the other hand, we view ChatRetriever as a promising research direction in leveraging the potent capabilities of LLMs for more complex and potentially universal retrieval tasks. We are exploring the possibility of distilling ChatRetriever into a more efficient, smaller model.

Hard Negatives.
Unlike typical search datasets that provide a large retrieval corpus, the conversational instruction tuning dataset we used (i.e., UltraChat) consists of only multi-turn instructions (i.e., sessions) and responses. In this work, we simply chose the CAsT-21 corpus for the hard negative mining of UltraChat (see Appendix A.3). However, as existing studies have shown, hard negatives are crucial for improving retrieval performance (Zhan et al., 2021; Zhou et al., 2022). Therefore, a better strategy for mining hard negatives tailored to instruction tuning data is desirable. We plan to explore using LLMs to generate hard negatives for instructions, similar to Wang et al. (2024).

Generalizability. ChatRetriever substantially outperforms existing CDR models in understanding and retrieving information for complex multi-turn inputs and achieves comparable performance to state-of-the-art LLM-based rewriting, showcasing its strong generalization capability. However, it has not yet achieved the same level of generalization as LLMs, particularly in following complex retrieval instructions, addressing very detailed information needs, or performing in-context learning across various specific domains. It is worth noting that existing instruction-aware retrievers (Su et al., 2023; Zhang et al., 2023; Muennighoff et al., 2024) also have limitations in perceiving complex (multi-turn) instructions and largely fall short of the generality of LLMs, as highlighted in this work (Table 1) and also in recent studies (Oh et al., 2024; Weller et al., 2024). As stated in our conclusion, we are committed to further advancing ChatRetriever's generalization capabilities to match those of LLMs." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13043v1", |
| "title": "Data Alignment for Zero-Shot Concept Generation in Dermatology AI", |
| "abstract": "AI in dermatology is evolving at a rapid pace but the major limitation to\ntraining trustworthy classifiers is the scarcity of data with ground-truth\nconcept level labels, which are meta-labels semantically meaningful to humans.\nFoundation models like CLIP providing zero-shot capabilities can help alleviate\nthis challenge by leveraging vast amounts of image-caption pairs available on\nthe internet. CLIP can be fine-tuned using domain specific image-caption pairs\nto improve classification performance. However, CLIP's pre-training data is not\nwell-aligned with the medical jargon that clinicians use to perform diagnoses.\nThe development of large language models (LLMs) in recent years has led to the\npossibility of leveraging the expressive nature of these models to generate\nrich text. Our goal is to use these models to generate caption text that aligns\nwell with both the clinical lexicon and with the natural human language used in\nCLIP's pre-training data. Starting with captions used for images in PubMed\narticles, we extend them by passing the raw captions through an LLM fine-tuned\non the field's several textbooks. We find that using captions generated by an\nexpressive fine-tuned LLM like GPT-3.5 improves downstream zero-shot concept\nclassification performance.", |
| "authors": "Soham Gadgil, Mahtab Bigverdi", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.CL", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Data Alignment for Zero-Shot Concept Generation in Dermatology AI", |
| "main_content": "INTRODUCTION In dermatology, for performing a diagnosis, dermatologists often use concepts, which refer to a clinical lexicon that is used to describe skin disease findings in the dermoscopic images. For example, Melanoma is often associated with the ABCDE rule including asymmetry, border, color, diameter and evolving (Duarte et al., 2021). Thus, learning these concepts from an image can aid in providing diagnostic explanations and building classifiers which are explainable. However, obtaining these concept labels for dermatology is a difficult and time-consuming task since only well-trained dermatologists can accurately describe skin diseases. There are datasets (Codella et al., 2018; Groh et al., 2021) which have high-quality dermoscopic images, but they are either devoid of manual labels, not inclusive of all concepts, or have very limited samples for some concepts. There have been many advances in fully-supervised learning for medical image classification spanning multiple domains (Yadav & Jadhav, 2019; Islam et al., 2020; Li et al., 2014). However, the same progress has not been achieved in dermatology image analysis due to limited availability of highquality images with expert annotations. Recently introduced methods like CLIP provide avenues to perform zero-shot classification without the need of labeled datasets. Prior works like MONET (Kim et al., 2023) leverage image-caption pairs from PubMed articles and medical textbooks to fine-tune CLIP models for dermatology. However, the captions used in these academic sources contain medical terms which are not aligned with the pre-training data of CLIP, which includes image-caption pairs found on the internet. We posit that LLMs like GPT variants can be effectively used to model natural human language. Our contributions include (i) using LLMs for data generation by extending the original captions to align them with CLIP\u2019s pre-training data and improve downstream performance on zero-shot concept classification, (ii) demonstrating that these LLMs can be further fine-tuned on the field\u2019s textbooks to improve their expressiveness. \u2217Equal contribuation 1 arXiv:2404.13043v1 [cs.CV] 19 Apr 2024 \fNavigating and Addressing Data Problems for Foundation Models (DPFM) Workshop, ICLR 2024 2 DATASETS Textbooks With the advent of LLMs, many open-source and closed-source LLM models pre-trained on vast amounts of open internet text data are available. Although, for a specific task like this work, an improved and more informative text in the dermatology field is required that some of these pre-trained models cannot provide. Therefore, fine-tuning a LLM on the desired text set is a crucial solution to this problem. Dermatology textbooks are a good option for fulfilling this requirement. We chose four books for this purpose: Differential Diagnosis In Dermatology (Ashton & Leppard, 2021), General Dermatology (English, 2007), Top 50 Dermatology Case Studies for Primary Care (Reich et al., 2017), and Handbook of Dermoscopy (Malvehy et al., 2006). We used the text from these textbooks to generate prompt and completion pairs for fine-tuning the LLM models as described in section 3.2. Evaluation Dataset To evaluate the trained CLIP model for zero-shot concept classification, we used the SKINCON dataset (Daneshjou et al., 2022). SKINCON includes 3230 images from the Fitzpatrick 17k skin disease dataset (Groh et al., 2021), densely annotated with 48 clinical concepts, 22 of which have at least 50 images representing the concept. 
The concepts used were chosen by two dermatologists considering the clinical descriptor terms used to describe skin lesions, such as "plaque", "scale", and "erosion", to name a few. The list of concepts was based on the clinical lexicon used by dermatologists to describe skin lesions and was developed in consultation with the terms listed in one of the most widely used dermatology textbooks, Dermatology (Bolognia et al., 2012).

3 METHODS

3.1 EXPLORATORY ANALYSIS

For training CLIP, the captions need to be tokenized using the CLIP tokenizer before the contrastive learning procedure. All CLIP models use 77 as the maximum tokenized context length, either padding or truncating the caption if it is below or above that length, respectively. Since we were restricted to 77 as the maximum number of tokens, we first did an exploratory analysis of the tokenized lengths of the original 44314 captions obtained from the PubMed articles, utilizing the scripts provided in Kim et al. (2023). This gave us an intuition of how many tokens were available for extending the captions for alignment. Table 2 (Appendix A.2) shows the statistics of the tokenized captions. The mean length of the captions is ∼35, which shows that most of the captions are short and do not exceed the maximum token length of 77. 75% of the captions have a token length of less than 51, which indicates that a majority of captions do have additional tokens available to be extended and improved. About ∼13% of the captions have been truncated at the max token length of 77, still leaving around ∼38000 captions that can be improved using LLMs.

3.2 DATA PREPROCESSING

Fine-tuning data for an LLM needs to be in the form of prompt-completion pairs. This means that for a specific prompt, we needed to define the ideal completion that we expect the model to output. Naively, these would be the sentences that follow a given prompt in the text. However, the raw text from the books could contain misleading phrases and sentences, so applying some preprocessing strategies was essential for fine-tuning data preparation. Each dermatology book had its own structure, types of references, and formatting method. Preprocessing, extracting proper text from the books, and creating a prompt-completion dataset for further fine-tuning was divided into manual and automatic steps. The manual extraction phase involved deleting irrelevant pages like glossaries, acknowledgments, and references. Also, not all text on the preserved pages assisted in creating prompt-completion pairs, such as titles, footnotes, captions, tables' text, and citations. Figure 1 shows some examples. We filtered the main text by picking the lines with the dominant font and size using the PyMuPDF (https://github.com/pymupdf/PyMuPDF) python library (see the sketch after this section). We assumed figures' captions or other non-informative texts like titles are less frequent in the book and have different fonts and sizes. This assumption was valid for all the books we used. Table 3 (Appendix A.2) shows the names of the books and the number of prompt-completion pairs obtained for each.

[Figure 1: a) Irrelevant and confounding parts of textbooks, shown in red boxes, are removed from the prompt-completion dataset. b) An example of a prompt sentence in blue with the following four sentences in pink as its completion.]
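The preprocessing pipeline itself is not shown in the paper; the following is a minimal sketch, assuming the most frequent (font, size) pair marks body text and that each prompt's completion is the next four sentences, as described above. The function names and the naive sentence splitter are illustrative assumptions.

```python
# Hedged sketch of dominant-font text extraction and prompt-completion pairing.
from collections import Counter
import fitz  # PyMuPDF

def dominant_body_text(pdf_path: str) -> str:
    """Keep only spans whose (font, size) matches the most frequent pair,
    assumed to be the main body text of the book."""
    doc = fitz.open(pdf_path)
    spans = []
    for page in doc:
        for block in page.get_text("dict")["blocks"]:
            for line in block.get("lines", []):  # image blocks have no "lines"
                for span in line["spans"]:
                    spans.append((span["font"], round(span["size"]), span["text"]))
    main = Counter((f, s) for f, s, _ in spans).most_common(1)[0][0]
    return " ".join(t for f, s, t in spans if (f, s) == main)

def make_prompt_completion_pairs(text: str, k: int = 4):
    """Pair each sentence with its next k sentences as the completion."""
    sents = [s.strip() for s in text.split(". ") if s.strip()]  # naive splitter
    return [{"prompt": sents[i],
             "completion": ". ".join(sents[i + 1:i + 1 + k])}
            for i in range(len(sents) - k)]
```

Note that, as the paper's limitations section later acknowledges, this next-k-sentences scheme ignores paragraph and section boundaries, which can introduce confounders.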
3.3 FINE-TUNING

We fine-tuned two LLM models, GPT-2 (Radford et al., 2019) and GPT-3.5 (Brown et al., 2020), which have been pre-trained as general-purpose learners on a huge amount of text data scraped from the internet. GPT-3.5 is one of the largest autoregressive language models available, trained with a 4096-token-long context. However, the model is closed-source, and fine-tuning comes as part of an API endpoint. We first decided to use GPT-2, which is GPT-3.5's predecessor with 1.5 billion parameters. To fine-tune GPT-2, we started with the extracted prompt and completion pairs from the preprocessing step. Then, we created the fine-tuning dataset by combining each prompt and completion into a single sentence separated by the padding token and tokenized the sentence using the GPT-2 tokenizer. Finally, we passed the data to the trainer with the combined prompt and completion as the label. We used the huggingface library (Wolf et al., 2019) to implement the GPT-2 model and fine-tuned it for two epochs. GPT-3.5 was easier to fine-tune and only needed an API key to directly call a fine-tuning endpoint. The gpt-3.5-turbo variant of GPT-3.5 was fine-tuned for four epochs using similar input data from the mentioned books with the format {"prompt": "promptA", "completion": "completionA"}.

For fine-tuning CLIP, we started by extracting the image-caption pairs from PubMed articles using the scripts provided in Kim et al. (2023). We didn't use textbooks here since the repository does not list the textbooks used. Then, we passed the captions through the fine-tuned LLM to generate enriched captions with a max length of 512 tokens. Table 4 (Appendix A.2) shows some of the improved captions generated using the fine-tuned GPT-2 and GPT-3.5 models. We then fine-tuned the pre-trained CLIP model openai/clip-vit-base-patch32 with a batch size of 64 using the Adam optimizer (Kingma & Ba, 2014) and a learning rate of 1e-5 with a cosine annealing scheduler with warm restarts.

3.4 ZERO-SHOT CLASSIFICATION AND EVALUATION

Once the CLIP model was fine-tuned, we used the 3230 images and corresponding concepts from the SKINCON dataset to perform zero-shot concept classification. For each concept key in the 48 SKINCON concepts, we created embeddings for the text "This is {concept_key}" and all of the images in CLIP's joint embedding space. Then, using the cosine similarity scores, we generated a receiver operating characteristic (ROC) curve independently for each of the 48 concepts. The evaluation metric used was the area under the ROC curve (AUC).
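To make the evaluation in Section 3.4 concrete, below is a minimal, hedged sketch of the per-concept zero-shot scoring: cosine similarity between image embeddings and a "This is {concept}" text embedding, scored with ROC AUC. The base (non-fine-tuned) checkpoint, function name, and data handling are illustrative assumptions; batching over thousands of images is omitted for brevity.

```python
# Hedged sketch of zero-shot concept classification with CLIP + ROC AUC.
import torch
from PIL import Image
from sklearn.metrics import roc_auc_score
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_auc(images: list, labels: list, concept: str) -> float:
    """images: PIL images; labels: 1 if the concept is present, else 0."""
    inputs = processor(text=[f"This is {concept}"], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # L2-normalize so the dot product equals cosine similarity.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    scores = (img @ txt.T).squeeze(-1)  # one cosine score per image
    return roc_auc_score(labels, scores.numpy())

# Example usage (hypothetical paths/labels):
# auc = concept_auc([Image.open(p) for p in paths], labels, "plaque")
```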
4 RESULTS

We evaluated five different CLIP models: 1) the vanilla CLIP model without fine-tuning (Vanilla), and the CLIP model fine-tuned using 2) original PubMed image-caption pairs (Original), 3) aligned captions from fine-tuned GPT-2, 4) aligned captions from vanilla GPT-3.5, and 5) aligned captions from fine-tuned GPT-3.5. We decided to include vanilla GPT-3.5 in our results since, from qualitative analysis, it seemed that GPT-3.5 by itself had a high enough expressive power to understand even technical medical context from the captions and generate customizations. Table 1 shows the mean AUC across all concepts for the different CLIP models as defined above, and Table 5 (Appendix A.2) shows the AUC scores for each of the concepts.

| CLIP Model | Mean AUC |
|---|---|
| Vanilla | 0.572 |
| Original | 0.636 |
| Fine-Tuned GPT-2 | 0.642 |
| Vanilla GPT-3.5 | 0.639 |
| Fine-Tuned GPT-3.5 | 0.648 |

Table 1: Mean AUC across all concepts.

From Table 4 (Appendix A.2), it can be seen that the fine-tuned GPT-2 model is able to extend the input caption while keeping the sentence grammatically correct. However, it sometimes strays away from the context of the input caption and can start constructing sentences by stringing together medical jargon. This might be a result of setting a high max token length, which causes the model to lose context over longer ranges. GPT-3.5 is able to maintain context for a longer token length and performs better data alignment. Fine-tuning the CLIP model improves performance for most of the concepts (41 out of 48); see Table 5 (Appendix A.2). The fine-tuned GPT-3.5 model performs the best among all the models tested, with an AUC of 0.648, and it performs better than the Original model on a majority of the concepts (26 out of 48). This indicates that fine-tuning the LLM using dermatology text helps in improving the data alignment in the extended captions. The second-best performing model is the fine-tuned GPT-2 model, with an AUC of 0.642, performing better than the Original model on 25 out of the 48 concepts. This result was unexpected, since the GPT-3.5 model is much more powerful in terms of model capacity compared to GPT-2, and we expected the Vanilla GPT-3.5 model to outperform the fine-tuned GPT-2 model, which was not the case. This indicates that fine-tuning LLM models does actually improve predictive performance even if the model does not have as many trainable parameters. The Vanilla GPT-3.5 model is also able to outperform the Original model, with an AUC of 0.639. This shows that LLMs can be effectively used to produce customized and well-aligned captions which improve the language supervision provided to the CLIP training procedure, resulting in improved performance.

5 CONCLUSION

Our study reveals that extending captions through the use of a Large Language Model (LLM) fine-tuned on dermatology textbooks effectively connects the clinical lexicon with CLIP's pre-training data, resulting in enhanced downstream zero-shot concept classification performance on dermatology images. To summarize, our findings underscore the promise of LLMs in enhancing language supervision for dermatology AI. The improved CLIP model can further be used to annotate images with concepts, which can be crucial to developing concept-based disease classifiers like concept bottleneck models (Koh et al., 2020) that are interpretable and transparent. However, further investigation is essential to optimize the integration of LLMs with domain-specific models, ensuring more resilient applications in medical image analysis.

6 ACKNOWLEDGMENT

This work was done as part of the final project for the course CSE 527 (Computational Biology) at the University of Washington. We would like to thank the professor, Dr. Su-In Lee, along with the teaching assistants Wei Qiu and Mingyu Lu, for their valuable feedback.

7 LIMITATIONS

Although the early findings are promising, there are many ways to extend this project. We only used 4 dermatology textbooks for extracting the prompt-completion pairs to fine-tune the LLMs, but there are many more books available which can also be preprocessed. PubMed articles can be used to generate the prompt-completion pairs as well.
Also, in the extraction pipeline, we built pairs by taking the four sentences following a particular sentence, without considering context or paragraph switches. This could introduce confounders into the fine-tuning process. For instance, the first completion sentence could be related to melanoma while the other three could come from the next section and discuss another disease. In addition, Python PDF parsers occasionally fail and break some words into meaningless chunks that can easily mislead the LLM during fine-tuning. A solution for these extraction issues is to add more manual and automatic steps to remove and filter meaningless words and to check contextual integrity. LLMs are also known to hallucinate (Lee et al., 2018; Bang et al., 2023), and proper steps need to be taken to ensure non-existent facts are not fabricated, which is especially pertinent in a high-stakes domain like dermatology. Furthermore, we used the gpt-3.5-turbo variant of GPT-3.5, but more powerful variants like GPT-4 are available, which we did not use due to budget constraints. Another approach to enhance the performance of the fine-tuned LLM and refine the generated captions is to incorporate instruction-tuning data (Zhang et al., 2023; Liu et al., 2023; Dai et al., 2023; Ouyang et al., 2022), i.e., instruction-output pairs extracted from dermatology books, into the fine-tuning process. This requires careful planning to create a dataset that is useful and yields valuable insights. Another possible change is the CLIP model used. We used the openai/clip-vit-base-patch32 model for CLIP training, but a more powerful baseline, openai/clip-vit-large-patch14, is available; we did not use it because of memory constraints and longer training times. We could also employ a non-random batch sampling strategy that includes samples with different concepts in one mini-batch, for more efficient learning of concepts. Another way to improve language supervision is to raise the number of text tokens beyond 77, which is a limitation of CLIP. We anticipate that all of these changes would improve the zero-shot classification performance of the fine-tuned CLIP model." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.14745v1", |
| "title": "TAAT: Think and Act from Arbitrary Texts in Text2Motion", |
| "abstract": "Text2Motion aims to generate human motions from texts. Existing datasets rely\non the assumption that texts include action labels (such as \"walk, bend, and\npick up\"), which is not flexible for practical scenarios. This paper redefines\nthis problem with a more realistic assumption that the texts are arbitrary.\nSpecifically, arbitrary texts include existing action texts composed of action\nlabels (e.g., A person walks and bends to pick up something), and introduce\nscene texts without explicit action labels (e.g., A person notices his wallet\non the ground ahead).\n To bridge the gaps between this realistic setting and existing datasets, we\nexpand the action texts on the HumanML3D dataset to more scene texts, thereby\ncreating a new HumanML3D++ dataset including arbitrary texts. In this\nchallenging dataset, we benchmark existing state-of-the-art methods and propose\na novel two-stage framework to extract action labels from arbitrary texts by\nthe Large Language Model (LLM) and then generate motions from action labels.\nExtensive experiments are conducted under different application scenarios to\nvalidate the effectiveness of the proposed framework on existing and proposed\ndatasets. The results indicate that Text2Motion in this realistic setting is\nvery challenging, fostering new research in this practical direction. Our\ndataset and code will be released.", |
| "authors": "Runqi Wang, Caoyuan Ma, GuoPeng Li, Zheng Wang", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "TAAT: Think and Act from Arbitrary Texts in Text2Motion", |
| "main_content": "INTRODUCTION Text2Motion [1\u20133, 5, 9, 12, 14, 25, 31, 32, 36, 37] denotes generating motions from natural language, which has proven useful in reducing \u2217Equal contribution Figure 2: HumanML3D++ Dataset Structure. We expand the action texts in the HumanML3D dataset to multiple scene texts. Taking a set of data as an example, HumanML3D provides 3-5 action texts for each motion data. Building upon this, we provide two scene texts for each action text. labor costs in industries requiring motion capture actors and manual editings, such as movie production and game development. In these arXiv:2404.14745v1 [cs.CV] 23 Apr 2024 \fRunqi Wang, Caoyuan Ma, GuoPeng Li, and Zheng Wang\u2020 entertainment industries, the motion editing of characters is limited to the development stage, and motion patterns are fixed after release. However, the target needs to interact with users in more flexible applications, which brings various unrestricted scenes, such as embodied intelligence [33] and interactive Non-Player Characters (NPCs) in open-world games. Therefore, exploring the generation of potential motions from arbitrary texts is important. However, as shown in Figure 1, existing datasets [10, 13, 23, 26, 27, 30] simply assume that motions are from specific action labels or action texts (i.e. inputs with action labels). We argue this is impractical for some flexible applications that need scene inputs (i.e. inputs without action labels). For example, when we describe an event \u201cA person picks up something\u201d, Action2Motion [4, 6, 10, 16, 20, 22, 24, 35] (the left figure in Figure 1) can only generate motions from specific action labels rather than a sentence, such as \u201cwalk, bend, and pick\u201d. More flexibly, Text2Motion(the middle figure in Figure 1) generates motions from action texts, such as \u201cA person walks and bends to pick up something\u201d. Compared to them, it is more practical to generate motions from arbitrary texts (the right figure in Figure 1), such as \u201cA person notices his wallet on the ground ahead\u201d. In this case, perfect action labels or action texts are not guaranteed, hindering the applications of existing methods and datasets. Therefore, a natural question arises: can we generate reliable motions from arbitrary texts? In light of the novelty of this problem, we propose a new dataset to evaluate Text2Motion in a more realistic setting. Briefly, given the action texts of the HumanML3D dataset, the introduced scene texts are generated by LLM in a one-to-many manner. In total, our dataset includes 44,970 action texts, 134,910(about)scene texts, and 14,616 motions (see details in Table 1). The new dataset, called HumanML3D++, gives rise to two fundamental differences between this work and prior research. Beyond Action Texts. Previous methods mainly focus on specific action texts because existing datasets consider perfectly aligned action texts and motions as default. However, HumanML3D++ introduces many scene texts based on the action texts of HumanML3D and enables us to explore the effect of more flexible scene texts in real-life applications. As a result, we need to align multiple arbitrary texts with the same motions, breaking the limited action texts. Beyond Text2Motion. Previous frameworks generate motions from action texts in a one-stage manner because they have perfectly aligned action texts and motions. However, the introduced scene texts of our HumanML3D++ have vague relationships with motions. 
Therefore, we split Text2Motion into Text2Action and Action2Motion. In the Text2Action stage, we use the emergent abilities of an LLM to relate scene texts to action texts and understand the inherent meaning of scene texts, thereby extracting complete and potential action labels. In the Action2Motion stage, we use the sequential abilities of the Transformer to ensure coherence from action labels to final motions. As a result, our two-stage framework can extract action labels from arbitrary texts and generate final motions from the extracted labels, moving beyond the limits of Text2Motion. Our main contributions can be summarized as follows: • We construct a new dataset that contains over 80,000 scene text annotations to help infer potential actions from scene texts (texts without action labels), which has not been explored in the past. • We propose a more practical two-stage framework, which extracts semantic information from arbitrary texts with LLMs and then generates motions from the extracted information. • Compared with existing methods, our method better understands scene texts and generates motions that align more closely with them. 2 RELATED WORK 2.1 Human Motion Generation Human motion generation supports diverse multimodal inputs, including text [5, 8, 25, 32, 36, 38], action labels [10, 20, 24], incomplete posture sequences [7, 11, 32], music [15, 17, 18], images [29], and more. Among all conditional tasks, text-to-motion [5, 8, 25, 32, 36, 38] has consistently been at the forefront of research, given that linguistic descriptions remain the most user-friendly and convenient means of representation. In tasks conditioned on natural language inputs, generation predominantly relies on deterministic action-text prompts. Our work diverges by emphasizing scene-text inputs, aiming to comprehend natural language interactions and generate appropriate responsive actions. 2.2 Text-to-motion Generation According to the survey [40], tasks utilizing natural language as a conditional input can be categorized into two main classes: Action2Motion and Text2Motion. The core objective of the Action2Motion task is to generate human motion sequences corresponding to specific action categories; [5, 6, 10, 21, 22, 24, 32, 35] are typical representatives. PoseGPT [22] employed an autoencoder to map human motion into a latent index sequence in a discrete space. ACTOR [24] utilized a Transformer-based architecture for encoding and decoding parameterized SMPL human body model sequences estimated from action recognition datasets. INR [4] introduced a motion-conditioned human motion generation method utilizing variational implicit neural representations. Kinetic-GAN [6] combined the advantages of generative adversarial networks and graph convolutional networks in a new architecture for human body dynamics. These methods demonstrate a certain effectiveness. However, existing Action2Motion methods are limited in that the input action categories are predetermined, so they cannot continuously generate multiple motion sequences, restricting their generative capabilities. Nevertheless, given the relatively short length of the textual input, these methods are capable of faithfully generating motion relevant to the corresponding action category.
Based on this, our design leverages the precision of Action2Motion generation. In contrast, text-to-motion tasks focus on generating human motion sequences from natural language descriptions. T2M-GPT [36] utilized a simple CNN-based VQ-VAE to obtain high-quality discrete representations of motion. MotionGPT [39] generated continuous human body motion by treating multimodal signals as special input tokens in a large language model (LLM). MLD [5] introduced the diffusion model into the field of motion generation, diffusing the motion latent space and reducing computational expenses during both training and inference. The use of natural language input aligns well with users' interaction habits. However, when receiving textual inputs containing multiple actions, these models often struggle to faithfully generate all actions in sequence, due to the inherent complexity of the textual content. Our work, also relying on natural language input to align with user habits, addresses the issue of poor performance in multi-action motion generation through a precision generator. By leveraging the strengths of both tasks, our model achieves more accurate and flexible motion generation.

Table 1: Dataset comparison. #Sub. refers to the number of humans included in the dataset. #Act. Class denotes the number of action classes present in the dataset, representing the variety of actions captured (this metric is not applicable to motion datasets annotated with action texts). Our dataset stands out as the most abundant in annotated text content among existing datasets, particularly due to the incorporation of a significant volume of scene texts.

Name                 #Sub.   #Motion   #Text     #Act. Class   Scene
Action Supervision
AMASS [23]           344     11,000    -         -             -
NTU-120RGB+D [19]    106     114,000   -         120           -
UESTC [13]           118     25,600    -         40            -
NTU RGB+D [30]       -       56,000    -         60            -
BABEL [27]           344     66,000    -         250           -
HumanAct12 [10]      12      1,191     -         12            -
Text Supervision
KIT-ML [26]          111     3,911     6,278     -             -
HumanML3D [8]        344     14,616    44,970    -             -
Ours                 344     14,616    134,910   -             ✓

3 DATASET: HUMANML3D++ Motion data is pivotal to the advancement of motion generation tasks. As our task relies on scene input, we primarily focus on datasets commonly used in text-to-motion tasks. KIT Motion-Language (KIT-ML) [26] provides sequence-level annotations for motions, while HumanML3D [8] offers additional textual annotations for some motions in AMASS [23]; it also serves as a focal point in our text-to-motion task. For datasets mapping action labels to motions, BABEL [27] collects motions from AMASS [23] and provides annotations for actions and behaviors, while ACTOR [24] utilizes two action recognition datasets, HumanAct12 [10] and UESTC [13], for action-to-motion tasks. However, existing datasets only encompass action texts. To adapt them to our task, the modification and enhancement of the existing textual data become the central concern. As shown in Figure 2, we have added the scene-text input component on top of HumanML3D [8], yielding HumanML3D++. As illustrated in Table 1, this is, to date, the first dataset with scene textual annotations. Data composition. As shown in Figure 2, HumanML3D++ is expanded from HumanML3D [8]. Specifically, HumanML3D [8] annotates 3-5 action texts for each motion. We use an LLM to understand the action texts and generate two different scene texts for each action text.
We tested many prompts for eliciting scene data; here are some examples:

Template 1: Here is an example where the action sentence is "a person takes a few steps forward and then bends down to pick up something." and the corresponding scene sentence is "a person discovers his long lost wallet." The causal relationship between the two sentences is very close. I am now giving you some action sentences, hoping that you can complete some scene sentences, which should be the antecedents of the corresponding action sentences. The action sentence I am giving you now is <>. I hope you can generate two sentences for each action sentence.

Template 2: Here is an example where the action sentence is "a person takes a few steps forward and then bends down to pick up something," and the corresponding scene sentence is "a person discovers his long lost wallet". The causal relationship between the two sentences is very close. I am now giving you some action sentences, hoping that you can complete some scene sentences. When completing scene sentences, please try not to use verbs from the action sentences. The action sentence I am giving you now is <>.

Template 3: Here are some events, and I hope you can summarize in one sentence what happened that could have caused such a reaction. For example, the action sentence is "a person takes a few steps forward and then bends down to pick up something", and the corresponding scene sentence is "a person discovers his long lost wallet". I am now giving you some action sentences, hoping that you can complete some scene sentences. When completing scene sentences, please try not to use verbs from the action sentences. The action sentence I am giving you now is <>.

We conducted multiple experiments and evaluated the quality of the generated outcomes; Template 1 most effectively generates scene texts that match the action texts, so we ultimately chose it as the prompt for our data generation process. Data validation. All the initial versions of the scene texts in HumanML3D++ are generated by an LLM. Despite a good prompt, a certain level of uncertainty remains in the generation process. To validate data reliability, we invited 20 participants to evaluate randomly selected data (15% of the total). Since different motions may be responses to the same scene text, and the same motion may be a response to different scene texts, we set the evaluation criterion that as long as the action text is one of the possible reactions to a given scene text, it is marked as reasonable. The results show that about 94% of the selected data is considered reasonable. We also cleaned up or manually corrected the abnormal data.
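As a concrete illustration of this data-generation step, the sketch below fills Template 1 with an action text and queries a chat-style LLM. The paper does not specify the exact model or API used, so the OpenAI client, the gpt-3.5-turbo model name, and the helper name are assumptions.

```python
# Hypothetical sketch of scene-text generation with Template 1.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TEMPLATE_1 = (
    'Here is an example where the action sentence is "a person takes a few '
    'steps forward and then bends down to pick up something." and the '
    'corresponding scene sentence is "a person discovers his long lost '
    'wallet." The causal relationship between the two sentences is very '
    "close. I am now giving you some action sentences, hoping that you can "
    "complete some scene sentences, which should be the antecedents of the "
    "corresponding action sentences. The action sentence I am giving you "
    "now is <{action_text}>. I hope you can generate two sentences for "
    "each action sentence."
)

def generate_scene_texts(action_text: str) -> str:
    """Ask the LLM for two candidate scene texts for one action text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; the paper only says "LLM"
        messages=[{"role": "user",
                   "content": TEMPLATE_1.format(action_text=action_text)}],
    )
    return response.choices[0].message.content

print(generate_scene_texts("a person walks and bends to pick up something"))
```

The generated candidates would then go through the manual filtering and validation pass described above.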
4 METHOD Our objective is to comprehend scene and action textual inputs and generate lifelike human motion responses. As illustrated in Figure 3, the entire framework comprises two main components. The LLM accepts action and scene inputs and produces corresponding action labels. The generation module uses a VQ-VAE to learn the mapping between motion data and discrete encoding sequences and generates code indices from the corresponding action descriptions; leveraging the decoder of the motion VQ-VAE, we can reconstruct motion from the code indices. In Section 4.1 we present our comprehension module and the new dataset provided for this novel task, and in Section 4.2 we outline our universal generation module.

Figure 3: Our pipeline overview. Our approach consists of two main parts: a) understanding natural language and decoupling it into a sequence of actions, where we use an LLM to obtain possible action labels from action texts or scene texts; and b) generating the motion sequences corresponding to the obtained action labels. Action $x$ denotes the $x$-th acquired action label, $c^x_l$ denotes the $l$-th discrete representation (pose ID) of the motion generated for the $x$-th action label, and $a$ denotes the number of smoothing pose IDs we use. We exploit the sequential abilities of the Transformer and use the last few pose IDs of the previous action as part of the input for the next action.

4.1 Think Model 4.1.1 Dataset. Existing datasets contain only action-text inputs without scene texts, so we need to add scene texts to an existing dataset. We selected our foundational dataset based on the following considerations. First, regarding the quality of generated motion, it has been demonstrated that the amount of motion data has an impact on the results: more motion data allows more poses to be learned, leading to better generation. Therefore, our foundational dataset should contain as many motions as possible. Second, the textual component of the dataset must include action-text annotations, to reduce the workload of data labeling; this also facilitates supplementing the scene-text portions. Based on these criteria, we chose the largest dataset with textual annotations, HumanML3D, as our foundational dataset. However, the textual component of HumanML3D comprises solely action annotations of motion. To meet the requirements of our task, we adopt a comprehensive approach combining large models with manual processing. Figure 2 displays the data relationships in our dataset. For each motion in the source dataset, which includes 3-5 action descriptions, we refine the prompt of the large model to generate two scene descriptions per action description. After the model completes the supplementation of scene texts, we manually filter and clean the data, which is then used to train the model for the specific tasks. We compared several datasets utilized in the field and, according to Table 1, ours is the largest annotated (action and scene) three-dimensional motion dataset available. 4.1.2 LLM. For the situational comprehension module, an LLM is the natural choice for analyzing textual inputs, owing to its capacity to model intricate language structures and its strong comprehension ability. Leveraging an LLM, we generate action representations corresponding to the given scenarios. As illustrated in Figure 3, our Think module ingests scene texts as input and yields corresponding action labels. Our extraction methodology unfolds in two distinct phases: first, we harness the LLM to procure action texts in response to the provided scene texts; then, we capitalize on the language model's proficiency to distill action labels from the acquired action texts. Notably, retraining or fine-tuning the LLM is impractical for our purposes.
First, retraining large-scale models entails formidable demands on computational resources and time, rendering it unfeasible in many practical scenarios. Second, directly fine-tuning large models on paired scene-text and action-text inputs runs into the predicament of selective forgetting. By contrast, directly adapting prompts for an existing LLM provides a viable workaround to these challenges. Using an LLM to generate responsive action labels for scene texts entails a degree of uncertainty: the action label generated from a scene text may not correspond to the ground-truth (GT) action. We have taken this issue into consideration and implemented measures to address it, elaborated upon in Section 5.2. 4.2 Act Model 4.2.1 Codebook. The incorporation of a VQ-VAE [34] into the model framework facilitates the acquisition of discrete representations within generative models. We denote the encoder and decoder of the autoencoder as $E$ and $D$, respectively. Consider a human motion sequence $X = [x_1, x_2, \ldots, x_T]$, with $T$ denoting the total number of frames. The latent feature $Z$ is derived as $Z = E(X)$, where $Z = [z_1, z_2, \ldots, z_{T/l}]$ and $l$ signifies the temporal downsampling rate of the encoder $E$. Quantization maps each latent feature $z_i$ to the nearest centroid $c_k$ in the codebook $C$:

$$\hat{z}_i = \arg\min_{c_k \in C} \lVert z_i - c_k \rVert_2 \quad (1)$$

In the optimization of the VQ-VAE, the standard objective [34] $\mathcal{L}_{vq}$ encompasses three components: a reconstruction loss $\mathcal{L}_{re}$, an embedding loss $\mathcal{L}_{embed}$, and a commitment loss $\mathcal{L}_{commit}$:

$$\mathcal{L}_{vq} = \mathcal{L}_{re} + \underbrace{\lVert Z - \mathrm{sg}[\hat{Z}] \rVert_2}_{\mathcal{L}_{embed}} + \beta\,\underbrace{\lVert \mathrm{sg}[Z] - \hat{Z} \rVert_2}_{\mathcal{L}_{commit}} \quad (2)$$

Here, $\beta$ is a hyper-parameter governing the impact of the commitment loss, and $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator. Let $X_{re}$ represent the reconstructed motion, $X_{re} = D(\hat{Z})$. Additionally, denote by $V(X)$ the velocity vector corresponding to $X$, where $V = [v_1, v_2, \ldots, v_{T-1}]$ and each $v_i = x_{i+1} - x_i$ is the difference between consecutive elements of $X$. The overall reconstruction objective is:

$$\mathcal{L}_{re} = \mathcal{L}_1^{smooth}(X, X_{re}) + \alpha\,\mathcal{L}_1^{smooth}(V(X), V(X_{re})) \quad (3)$$

where $\alpha$ is a hyper-parameter balancing the two losses. A naive implementation of VQ-VAE training encounters a notable challenge known as codebook collapse, as discussed in the literature [28, 34].
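Before turning to the mitigations below, here is a minimal PyTorch sketch of the quantization step in Eq. (1) and the loss terms of Eq. (2) in their squared-error form; the tensor shapes and the straight-through gradient trick are illustrative assumptions, not the authors' released implementation.

```python
# Sketch: nearest-centroid quantization (Eq. 1) and VQ losses (Eq. 2).
import torch
import torch.nn.functional as F

def quantize(z, codebook, beta=1.0):
    """z: (N, d) encoder latents; codebook: (K, d) centroids."""
    # Pairwise squared distances between latents and centroids.
    dist = (z.pow(2).sum(1, keepdim=True)
            - 2 * z @ codebook.t()
            + codebook.pow(2).sum(1))
    idx = dist.argmin(dim=1)          # Eq. (1): index of nearest centroid
    z_q = codebook[idx]
    # Embedding and commitment terms of Eq. (2); sg[.] is .detach().
    embed_loss = F.mse_loss(z, z_q.detach())
    commit_loss = beta * F.mse_loss(z.detach(), z_q)
    # Straight-through estimator so gradients still reach the encoder.
    z_q = z + (z_q - z).detach()
    return z_q, idx, embed_loss + commit_loss
```

The returned indices are exactly the discrete pose IDs that the generative Transformer later predicts.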
However, to mitigate this issue and enhance codebook utilization, two prominent training methodologies have been devised [28]. The first involves an exponential moving average (EMA), and the second is referred to as codebook reset (Code Reset). The EMA method facilitates a smooth evolution of the codebook $C$ over iterations. The Code Reset strategy identifies inactive codes during training and dynamically reassigns them based on input data, thereby revitalizing the codebook and optimizing its utility throughout the training regimen. 4.2.2 Generative Transformer. Utilizing the learned motion VQ-VAE, a motion sequence $X = [x_1, x_2, \ldots, x_T]$ can be converted into a sequence of indices $I = [i_1, i_2, \ldots, i_{T/l}, \mathrm{End}]$, where each $i_t$ indexes an entry of the learned codebook. A special End token is appended to signify the end of the motion sequence. By projecting $I$ back to the corresponding codebook entries, we obtain $\hat{Z} = [\hat{z}_1, \hat{z}_2, \ldots, \hat{z}_{T/l}]$, which can then be decoded into a motion sequence $X_{re}$ using the decoder $D$. Consequently, text-to-motion generation can be formulated as an autoregressive next-index prediction task: given the previous $t-1$ indices (i.e., $I_{<t}$) and a text condition $c$, our objective is to predict the distribution over the next index, $p(i_t \mid c, I_{<t})$, a task well suited to Transformer-based models. An overview of our Transformer model is depicted in Figure 3. Optimization goal. Denoting the likelihood of the full sequence as $p(S \mid c) = \prod_{i=1}^{T/l} p(S_i \mid c, S_{<i})$, we directly maximize the log-likelihood of the data distribution:

$$\mathcal{L}_{trans} = \mathbb{E}_{S \sim p(S)}\left[-\log p(S \mid c)\right] \quad (4)$$

4.2.3 Full Motion Generation. In the training phase of the generative module, the input comprises textual labels paired with the corresponding sequences of discrete actions. This design allows the generative module to learn individual actions and the transitions between pairs of actions, establishing a discrete representation of actions and the mappings between them. Nevertheless, this alone does not faithfully generate all actions; at inference time, we therefore adopt a new scheme to generate all of them:

$$\text{input} = \begin{cases} (\text{clip\_feature\_action}_0,\ \varnothing) & \text{if the action is } action_0 \\ (\text{clip\_feature\_action}_x,\ action_{x-1}[-a{:}]) & \text{otherwise} \end{cases} \quad (5)$$

Specifically, when the input label is the first in the sequence, we use the corresponding action label along with an empty ID list as input. When the input label is not the first, we use the corresponding label together with the last $a$ pose IDs of the preceding action as input to generate the next indices under the given label condition.
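The following sketch shows how this conditioned, autoregressive index generation might look; the transformer interface, the END_TOKEN constant, and the overlap length a are illustrative assumptions consistent with Eq. (5), not the authors' code.

```python
# Hypothetical sketch of autoregressive index generation per Eq. (5).
import torch

END_TOKEN = 512   # assumed id of the learned End token
MAX_LEN = 50      # maximum code-index sequence length (T' = 50)

@torch.no_grad()
def generate_indices(transformer, text_emb, prev_ids, a=4):
    """Generate pose-ID indices for one action label.

    text_emb : CLIP feature of the current action label.
    prev_ids : indices of the previous action ([] for action_0).
    a        : number of trailing IDs reused for smoothing (Eq. 5).
    """
    ids = prev_ids[-a:] if prev_ids else []   # Eq. (5) conditioning
    while len(ids) < MAX_LEN:
        # The text feature acts as the first "token", so the model is
        # assumed to accept an empty index prefix for the first step.
        logits = transformer(text_emb, torch.tensor([ids], dtype=torch.long))
        next_id = logits[0, -1].argmax().item()  # greedy; sampling also works
        if next_id == END_TOKEN:
            break
        ids.append(next_id)
    return ids

# Full motions: concatenate the per-label indices across all action labels,
# then decode the concatenated sequence with the VQ-VAE decoder D.
```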
For each action label, we initiate the generation process from the text embedding, proceeding in an autoregressive manner. This generation process continues until the model predicts the End token, signifying the completion of the action-sequence generation. Upon obtaining the indices for all action labels, we concatenate them and pass the concatenated sequence through the VAE decoder, forming a cohesive and smooth sequence of actions.

5 EXPERIMENT In the experiments, we select R-Precision, Frechet Inception Distance (FID), Multimodal Distance (MM-Dist), Diversity, and Multimodality (MModality) as our evaluation metrics. In Section 5.1 we introduce the datasets and evaluation metrics; we report the accuracy of Text2Action in Section 5.2 and compare our results to competitive approaches in Sections 5.3-5.5. 5.1 Dataset and evaluation metrics Due to the current lack of standardized datasets suitable for generating motions from arbitrary texts, we supplement the textual portion of the largest annotated dataset, HumanML3D, to meet our task requirements. Following Section 4.1.1, we reorganize the dataset and conduct multiple experiments. Implementation details. The codebook size of the VQ-VAE is set to 512 × 512, and the downsampling rate $l$ is 4. For the HumanML3D++ dataset, the motion sequences are cropped to $T = 64$ for training. We use the AdamW optimizer with $[\beta_1, \beta_2] = [0.9, 0.99]$, a batch size of 256, and an exponential moving constant $\lambda = 0.99$. We train the first 200K iterations with a learning rate of $2 \times 10^{-4}$, and another 100K with a learning rate of $1 \times 10^{-5}$. $\beta$ and $\alpha$ in $\mathcal{L}_{vq}$ and $\mathcal{L}_{re}$ are set to 1 and 0.5, respectively. For the GPT, we employ 20 Transformer layers with a dimension of 1,024 and 16 heads. Following Guo et al. [8], the maximum motion length is 196 on both HumanML3D++ and HumanML3D, and the minimum length is 40 for HumanML3D++. The maximum length of the code-index sequence is $T' = 50$, and we train an extra End token as a signal to stop index generation. The Transformer is optimized using AdamW with $[\beta_1, \beta_2] = [0.5, 0.99]$ and a batch size of 128. The learning rate is initialized to $1 \times 10^{-4}$ for 150K iterations and decayed to $5 \times 10^{-6}$ for another 150K iterations. Since our method takes groups of action labels as input, we follow the instructions of [8] and retrain the Motion & Text Feature Extractors for evaluation on the HumanML3D dataset, with the text part replaced by the action labels extracted from HumanML3D. For the motion generation experiment on HumanML3D++, we follow the same guidance [8] and retrain the Motion & Text Feature Extractors on the HumanML3D++ dataset. Metrics. When computing the metrics, we use a consistent evaluation protocol [36], with action-label combinations and the corresponding motions as input.

Table 2: Experiment results on HumanML3D. Training is conducted on the HumanML3D dataset, and testing is also performed on HumanML3D. Compared to these baselines, our TAAT uses the sequential abilities of the Transformer and performs well in FID, Diversity, and MModality, indicating that our model generates high-quality motion.

Methods             R-Precision Top-1/Top-2/Top-3 ↑        FID ↓        MM-Dist ↓    Diversity ↑   MModality ↑
Real motion         0.511±.003 / 0.703±.003 / 0.797±.002   0.002±.000   2.974±.008   9.503±.065    -
TM2T [9]            0.457±.002 / 0.639±.003 / 0.740±.003   1.067±.002   3.340±.008   9.188±.002    2.090±.083
MDM [32]            - / - / 0.611±.007                     0.544±.044   5.566±.027   9.599±.086    2.799±.072
MLD [5]             0.481±.003 / 0.673±.003 / 0.772±.002   0.473±.013   3.196±.010   9.724±.082    2.413±.079
MotionDiffuse [37]  0.491±.001 / 0.681±.001 / 0.782±.001   0.630±.001   3.113±.001   9.410±.049    1.553±.042
T2M-GPT [36]        0.417±.003 / 0.589±.002 / 0.685±.003   0.140±.006   3.730±.009   9.844±.095    3.285±.070
Ours                0.329±.003 / 0.489±.002 / 0.696±.003   0.461±.006   5.050±.009   10.038±.095   2.929±.070

Table 3: Experiment results on model generalization ability. Training is conducted on the HumanML3D dataset, while testing is performed on the HumanML3D++ dataset. We observe a certain degree of decline in metrics across all models when they are subjected to new scene-text inputs. Compared to other methods, TAAT exhibits less degradation, indicating the enhanced comprehension capability of our method when confronted with new scene-text inputs.

Methods             R-Precision Top-1/Top-2/Top-3 ↑        FID ↓        MM-Dist ↓    Diversity ↑   MModality ↑
Real motion         0.397±.003 / 0.568±.003 / 0.665±.003   0.006±.000   3.945±.000   8.435±.069    -
TM2T [9]            0.337±.002 / 0.496±.002 / 0.593±.002   2.201±.020   4.265±.008   7.286±.075    2.600±.094
MDM [32]            0.322±.004 / 0.481±.007 / 0.579±.007   0.827±.053   4.539±.019   8.249±.058    2.804±.052
MLD [5]             0.373±.002 / 0.534±.002 / 0.626±.002   0.897±.026   3.893±.010   9.289±.096    3.018±.028
MotionDiffuse [37]  0.366±.000 / 0.546±.000 / 0.637±.000   1.514±.000   3.965±.000   7.907±.000    1.813±.000
T2M-GPT [36]        0.389±.009 / 0.544±.009 / 0.633±.002   0.516±.042   4.035±.004   9.396±.232    2.499±.348
Ours                0.225±.003 / 0.315±.002 / 0.413±.003   0.488±.006   5.109±.009   8.552±.095    2.957±.070

Table 4: Experiment results on HumanML3D++. Training is conducted on the HumanML3D++ dataset, and testing is also performed on HumanML3D++. Despite the constraints imposed by the evaluation metrics, our TAAT performs favorably in terms of FID, Diversity, and MModality, demonstrating that our model can generate high-quality and diverse motion.

Methods             R-Precision Top-1/Top-2/Top-3 ↑        FID ↓        MM-Dist ↓    Diversity ↑   MModality ↑
Real motion         0.397±.003 / 0.568±.003 / 0.665±.003   0.006±.000   3.945±.000   8.435±.069    -
TM2T [9]            0.337±.000 / 0.508±.000 / 0.616±.000   1.394±.000   4.229±.000   8.181±.000    2.701±.000
MDM [32]            0.314±.006 / 0.482±.008 / 0.588±.009   0.435±.029   4.340±.026   8.634±.057    2.901±.055
MLD [5]             0.165±.002 / 0.281±.002 / 0.368±.003   9.408±.060   5.564±.013   6.962±.063    3.086±.130
MotionDiffuse [37]  0.286±.000 / 0.442±.000 / 0.540±.000   2.688±.000   4.638±.000   7.703±.000    3.191±.000
T2M-GPT [36]        0.371±.005 / 0.543±.004 / 0.645±.005   0.316±.015   3.994±.034   8.627±.080    2.620±.067
Ours                0.235±.003 / 0.358±.002 / 0.427±.003   0.448±.006   4.712±.009   8.950±.095    3.046±.070
Figure 4: Visual results on action texts and scene texts. The first row displays the visual results of different models on action texts, while the second row presents the visual results of different models on scene texts. Compared with other models, given three actions as input under action texts, our TAAT faithfully generates the three actions in sequence. Under scene texts, TAAT generates reactive motions appropriate to the situation (running away), while other models literally act out the textual content (driving).

• R-Precision: Given one motion sequence and 32 text descriptions (1 ground truth and 31 randomly selected mismatched descriptions), we rank the Euclidean distances between the motion and text embeddings. Top-1, Top-2, and Top-3 accuracy of motion-to-text retrieval are reported.
• Frechet Inception Distance (FID): We calculate the distribution distance between the generated and real motion using FID on the extracted motion features.
• Multimodal Distance (MM-Dist): The average Euclidean distance between each text feature and the motion feature generated from that text.
• Diversity: From a set of motions, we randomly sample 300 pairs, extract motion features, and compute the average Euclidean distance within the pairs to measure motion diversity in the set.
• Multimodality (MModality): For one text description, we generate 20 motion sequences, forming 10 pairs. We extract motion features, compute the average Euclidean distance within the pairs, and finally report the average over all text descriptions.

5.2 Text2Action Accuracy In our proposed task, using an LLM to understand and respond to scene texts is the core of generating reasonable motion from arbitrary texts. However, there is uncertainty in generating action texts from scene texts. Specifically, one scene text may correspond to multiple reactive actions; although the results are usually reasonable, a single LLM output does not always correspond to the ground-truth action texts, which does not suit the existing evaluation criteria, and the hallucination phenomenon of LLMs may negatively affect our setting. To test these concerns, we generate multiple actions for each scene text. As shown in Figure 6, we evaluate the rationality of the results: we generate ten action texts for each scene text and use an evaluator to compute the similarity between each action text and the ground truth, choosing the most similar generated action text. We set the evaluation criteria as follows: if the generated action text and the ground truth match exactly (e.g., both actions are "kick"), we label the pair as "match"; if they are similar but not an exact match, we label it as "similar"; and if they do not match at all, we label it as "mismatch". We randomly sample 10% of the results and find that 66% of the data is similar to the ground truth. For the actions used in the second stage, we use a discriminator to select the result closest to the original action texts as the input to that stage.
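A minimal sketch of such a similarity-based evaluator is shown below; the paper does not name the embedding model it uses, so sentence-transformers and the all-MiniLM-L6-v2 checkpoint are stand-in assumptions.

```python
# Hypothetical evaluator: pick the generated action text closest to the GT.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def best_candidate(ground_truth: str, candidates: list[str]):
    """Return the candidate action text most similar to the ground truth."""
    gt_emb = encoder.encode(ground_truth, convert_to_tensor=True)
    cand_embs = encoder.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(gt_emb, cand_embs)[0]  # cosine similarities
    best = int(sims.argmax())
    return candidates[best], float(sims[best])

text, score = best_candidate(
    "a person walks and bends to pick up something",
    ["a person kicks forward", "a person bends down and grabs an object"],
)
print(text, score)
```

Thresholding the similarity score would then map each pair onto the "match" / "similar" / "mismatch" labels used above.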
5.3 Motion Generation on HumanML3D In Table 2, both model training and testing are performed on the HumanML3D dataset; the results for the other models are taken directly from the respective papers. On the original action-texts-to-motion task, our TAAT demonstrates good performance in Diversity and shows promising results in metrics such as FID and MModality. This demonstrates the efficacy of TAAT on the original task: TAAT can generate improved and more diverse motions when presented with action-text inputs, combining accuracy of generation with diversity of the generated actions.

Figure 5: E1, E2, and E3 represent the results in Tables 2, 3, and 4, respectively. We use the FID metric (lower is better) to illustrate the variation of different models across the experimental settings. Most models exhibit a decline in metrics when directly subjected to scene inputs without prior training (E2), indicating a lack of generalization capability. Upon retraining on the new task (E3), there is a noticeable improvement for most models, showing that the majority of models also possess a certain capacity to learn the more complex scene texts. Our method shows the smallest change across the three experiments while remaining at a leading level, indicating that it is better suited to arbitrary text inputs.

5.4 From HumanML3D To HumanML3D++ Table 3 reports our test of the models' generalization capabilities. All models are trained on the HumanML3D dataset and tested on the HumanML3D++ dataset, to test whether they can understand and generate motion from scene texts without prior exposure. We conduct testing using the official pre-trained models provided with each paper. Table 3 shows that the preceding models exhibit a decrease in metrics when directly subjected to scene-text inputs without prior training. This indicates a lack of generalization capability, showing that they are not directly applicable to the new task. Despite not being specifically trained on scene texts, our model exhibits a comparatively minor decrease in performance metrics when presented with scene-text inputs. Furthermore, it achieves the best FID and satisfactory Diversity, demonstrating the ability to generate high-quality and diverse human motions.

5.5 Motion Generation on HumanML3D++ Table 4 shows the models' ability to learn scene texts and generate corresponding responsive motions. All models are trained and tested on the HumanML3D++ dataset, following the guidelines provided in their respective official repositories.

Figure 6: Accuracy of the action labels generated by the LLM. The action texts generated in our Think stage closely approximate real action texts at a rate of 66%.

It can be observed that our TAAT learns and understands scene-text input well, generates corresponding motions, and does not produce particularly poor metrics as some models do. Despite the constraints imposed by the evaluation metrics, our TAAT performs favorably in terms of FID, Diversity, and MModality, demonstrating that our model can generate high-quality and diverse motion. 6 DISCUSSION Since LLMs are stochastic when producing action texts from scene texts, while existing methods and metrics mainly focus on aligning motion and text, our quantitative results do not show a significantly superior performance.
We believe the main cause is that one scene text admits various reasonable motions, and one motion may occur in various scenes, so a better evaluation method is needed to judge the rationality of the generated results. Although we already provide 6-10 scene texts for each motion, this is still insufficient for the task. 7 CONCLUSION In summary, this study introduces a novel task: inferring potential actions from scene texts (including those with no explicit actions), which has not been previously explored. Additionally, we propose a new dataset and an evaluation discriminator for this task. We conduct extensive experiments to investigate the performance and generalization capabilities of existing models across the two tasks: action texts to motion and scene texts to motion. Our endeavors lay the groundwork for future research in this area." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15149v1", |
| "title": "Bias patterns in the application of LLMs for clinical decision support: A comprehensive study", |
| "abstract": "Large Language Models (LLMs) have emerged as powerful candidates to inform\nclinical decision-making processes. While these models play an increasingly\nprominent role in shaping the digital landscape, two growing concerns emerge in\nhealthcare applications: 1) to what extent do LLMs exhibit social bias based on\npatients' protected attributes (like race), and 2) how do design choices (like\narchitecture design and prompting strategies) influence the observed biases? To\nanswer these questions rigorously, we evaluated eight popular LLMs across three\nquestion-answering (QA) datasets using clinical vignettes (patient\ndescriptions) standardized for bias evaluations. We employ red-teaming\nstrategies to analyze how demographics affect LLM outputs, comparing both\ngeneral-purpose and clinically-trained models. Our extensive experiments reveal\nvarious disparities (some significant) across protected groups. We also observe\nseveral counter-intuitive patterns such as larger models not being necessarily\nless biased and fined-tuned models on medical data not being necessarily better\nthan the general-purpose models. Furthermore, our study demonstrates the impact\nof prompt design on bias patterns and shows that specific phrasing can\ninfluence bias patterns and reflection-type approaches (like Chain of Thought)\ncan reduce biased outcomes effectively. Consistent with prior studies, we call\non additional evaluations, scrutiny, and enhancement of LLMs used in clinical\ndecision support applications.", |
| "authors": "Raphael Poulain, Hamed Fayyaz, Rahmatollah Beheshti", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Bias patterns in the application of LLMs for clinical decision support: A comprehensive study", |
| "main_content": "Introduction The recent surge in the adoption of large language models (LLMs) in healthcare has brought many hopes, fears, and uncertainties about their impact. In the hope of finding long-sought solutions to problems such as provider burnout and automated claims processing, healthcare systems were among the first sectors to adopt LLMs [57]. The rapid adoption of LLMs in healthcare has had some forefront applications in areas where LLMs (with their NLP roots) shine, including summarizing medical (free-text) notes, answering patients\u2019 questions, and generating patient discharge letters [54]. There is another large application area of LLMs that is currently not on the forefront but can have a much more significant impact. This area relates to the application of LLMs in clinical decision support (CDS) [8]. Example 1. The code, data, and set up to reproduce our experiments are publicly available at https://github.com/ healthylaife/FairCDSLLM. \u00a9 R. Poulain, H. Fayyaz & R. Beheshti. arXiv:2404.15149v1 [cs.CL] 23 Apr 2024 \fBias patterns in the application of LLMs for clinical decision support applications include using LLMs for disease diagnosis, patient triage, and planning treatments [38]. The CDS application area is where some of the fundamental bottlenecks of healthcare are located, and even marginal improvements can have a significant impact on individuals\u2019 health. The high-stakes nature of these types of applications, however, brings concerns about the biased performance of LLM-based solutions. Accordingly, despite the vast potential, important unanswered questions remain about the true benefits and risks of LLM applications in clinical domains. On the one hand, generative AI tools such as LLMs can potentially reduce health disparities in ways such as offering objective tools to reduce human biases, reduce healthcare costs, and increase healthcare access and equity [53]. On the other hand, many use cases have shown that such AI-based tools can exacerbate health disparities [1; 43; 13; 42; 20; 16], especially by learning spurious relationships between the protected attributes and health outcomes and by underperforming when used on marginalized populations [37]. In the biomedical community, studies on the ethical aspects of LLMs have been mostly related to the mainstream applications of LLMs (i.e., NLP-based applications) centered around addressing toxic language, aggressive responses, and providing dangerous information [21]. In particular, several preliminary studies have been performed in the same context as general LLMs, such as investigating the biases toward different demographics in medical question answering [47; 63; 40]. Existing studies offer a limited view of the current state of biased performance clinical LLMs, by focusing on only certain architectures, like GPT-4 [63], limited scenarios, like diagnosing specific diseases [31; 7; 48], or a single prompting technique (usually either zero-shot or few-shot). What\u2019s critically missing are comprehensive studies that identify the scope of bias and fairness risks across various CDS applications of LLMs. This study fills the above gap by targeting two broad questions. First, to what degree LLMs exhibit biased patterns when used in controlled clinical tasks? Second, how do design choices (such as architecture design and prompting strategies) influence the potential biases of LLMs? To answer the first question, we follow a procedure similar to prior studies in this area. 
We rely on a combined series of clinical tasks that are specifically designed and standardized for LLMs and run an expansive series of evaluations across different dimensions of LLM architectures and CDS tasks. For the second question, we reproduce some of the original experiments while investigating different popular prompting techniques, and compare the results of the different techniques to quantify their impact on fairness. Specifically, we evaluate fairness for eight popular LLMs, including general-purpose and clinically-focused ones, on multiple tasks and datasets. Notably, we leverage three different question-answering (QA) datasets using clinical vignettes (patient descriptions) and evaluate the performance of the LLMs by iterating over various sensitive attributes assigned to the patients. For our second question, we investigate and compare three different prompting techniques, namely zero-shot, few-shot [11], and Chain of Thought [58], on one clinical QA dataset. To the best of our knowledge, this study is the largest comprehensive analysis of bias in clinical applications of LLMs, evaluating a multitude of different models on multiple datasets. In particular, the contributions of this paper can be formulated as follows: • We present a framework utilizing multiple clinical datasets and conduct a comprehensive evaluation to quantify social biases in large language models (LLMs) designed for clinical applications. • We compare a multitude of popular general-purpose and clinically-focused LLMs to empirically evaluate and demonstrate the influence of various design choices on social biases. • We identify a list of tasks that are prone to the identified biases and potential at-risk subpopulations, and discuss possible mitigation strategies. Generalizable Insights about Machine Learning in the Context of Healthcare Our exploration of bias in LLMs used for clinical decision support offers valuable lessons for a wider range of machine learning (ML) applications in healthcare. A key concern is bias amplification, where ML algorithms inherit and exacerbate existing biases and disparities, leading to unfair outcomes for certain patient groups. Furthermore, prompting strategies can significantly influence model outputs and biases; by encouraging models to justify their reasoning, we can reduce reliance on potentially biased shortcuts learned during training. These findings highlight the critical need for a multifaceted approach to mitigating bias in ML for healthcare. This includes not only scrutinizing training data for bias but also actively developing and implementing techniques that promote fairness, explainability, and transparency. By proactively addressing these concerns, healthcare providers can leverage the potential of ML while minimizing the risks of bias and unfair outcomes, ultimately fostering a more equitable and effective application in patient care. 2. Related Work While many studies are closely related to our work, here we discuss a non-exhaustive list of studies related to either medical LLMs or the fairness of such models. 2.1. LLMs and Health Applications With the recent advances in foundation models [10], which generally follow the Transformer architecture [55], many researchers in the community have started training models with a growing number of learnable parameters.
Such models, often referred to as LLMs (including multimodal ones, or MLLMs), are typically pre-trained on internet-scale data with billions of trainable parameters [64]. A few of the more popular ones include Claude [6], Gemini [22], GPT-4 [2], LLaMa-2 [52], and Mixtral [27]. Along with all-purpose LLMs, which also demonstrate promising performance on clinical tasks, researchers have sought to fine-tune dedicated LLMs for the healthcare domain. Notably, PaLM was extended with prompt-tuning to enhance its performance on medical questions, resulting in Med-PaLM [47]. Similarly, Palmyra-Med [61] extended Palmyra [60] to the medical domain through a custom-curated medical dataset. Many researchers have also fine-tuned LLaMa-2, one of the most popular open-source LLMs, using clinical and scientific corpora. For example, PMC-LLaMa [62] adapted LLaMa to the medical domain through the integration of 4.8M biomedical academic papers and 30K medical textbooks, as well as comprehensive fine-tuning for alignment with domain-specific instructions. MedAlpaca [25] fine-tuned LLaMa-2 with Anki flashcards, question-answer pairs from Wikidoc and StackExchange, and a dataset from ChatDoctor [33]. Lastly, Meditron [17] adapts LLaMa-2 (7B and 70B) to the medical domain and extends the pre-training process on a curated medical corpus, including selected PubMed articles, abstracts, and internationally recognized medical guidelines. Despite the numerous general-purpose and medical LLMs and their promising results, their fairness and the extent to which they perpetuate social biases remain understudied. 2.2. LLMs and Fairness Concerns Concerned about the implications of AI for society, the AI community has devoted unprecedented efforts to studying such issues in recent years through dedicated conferences, journals, and guidelines [29; 56]. Accordingly, a large family of studies related to bias and fairness in AI exists. The existing studies can be seen through the lens of i) observational versus causality-based criteria, or ii) group (statistical/disparate impact) versus individual (similarity-based/disparate treatment) criteria [12; 36; 44]. The potential for bias in large language models (LLMs) has garnered significant attention, particularly in healthcare applications where fairness and justice are paramount. Evaluating bias in these models is crucial to ensure responsible deployment, and recent research has explored this issue using various methodologies. Specialized datasets like Q-Pain [35] provide valuable tools for assessing bias in pain management by allowing researchers to analyze potential disparities in LLM recommendations across different patient demographics. Additionally, comparative studies offer insights by measuring LLM performance against human experts. For instance, [26] compared GPT-4's diagnostic accuracy with that of physicians using clinical vignettes, and [40] investigated the responses of various LLMs (Bard, ChatGPT, Claude, GPT-4) to race-sensitive medical questions. These studies establish benchmarks for understanding how LLMs compare to human judgment in terms of fairness. Similarly, Pfohl et al. [41] proposed a new framework and dataset to assess LLMs' bias and fairness against human ratings and evaluated Med-PaLM on the proposed dataset.
Furthermore, [63] evaluated whether GPT-4 encodes racial and gender biases and explored how these biases might affect medical education, diagnosis, treatment planning, and patient assessment. The reported findings highlight the potential for biased LLMs to perpetuate stereotypes and lead to inaccurate clinical reasoning. However, a comprehensive framework for evaluating LLM fairness across key dimensions, such as different tasks, datasets, prompting techniques, and models, remains necessary. This would enable a more systematic assessment of potential biases and facilitate the development of robust mitigation strategies. 3. Methods To implement our plan for a comprehensive study assessing social bias patterns in LLMs used for clinical tasks, we identify the key dimensions that determine the scope of our study (the four subsections below). We adopt question-answering (QA) datasets and tasks [35; 50; 63] standardized for bias evaluations, which allows us to leverage realistic scenarios. We also adopt "red teaming" strategies, implemented through adversarial prompting by rotating through patient demographics. In the controlled scenarios we study, rotating through demographics should not lead to a change in the desired outcome. We analyze responses across three categories of LLMs: open-source general-purpose, open-source domain-focused (scientific or clinical), and closed-source models. This variety allows us to assess the influence of model architecture and domain-specific training on potential biases. Finally, we explore different prompting techniques (zero-shot, few-shot, Chain of Thought) to understand how they affect LLM performance and bias mitigation in healthcare settings. We provide an illustration of the entire evaluation framework in Figure 1.

Figure 1: Visual description of the evaluation framework, covering the studied scenarios (pain medication, pain perception, and referral questions), the prompting techniques (zero-shot, few-shot, Chain of Thought), the evaluated LLMs (general-purpose: Gemma, Mixtral, LLaMa-2, GPT-4, PaLM-2; domain-focused: Galactica, Palmyra-Med, Meditron), the sensitive attributes (gender, race), and the statistical tests for bias evaluation (Welch's ANOVA, two-tailed t-test, Chi-Square).

3.1. Tasks and Datasets To assess and quantify the social biases encoded within LLMs in common question-answering (QA) scenarios, we leverage clinical QA datasets using vignettes. Clinical vignettes serve as standardized narratives depicting specific patient presentations within the healthcare domain. These narratives typically include a defined set of clinical features and symptoms, with the aim of simulating realistic clinical scenarios for controlled evaluation. Notably, we evaluated social biases in LLMs' answers to clinical questions using vignettes from three angles: pain management [35], nurse perception [63], and treatment recommendations [50]. To effectively assess the extent to which demographics impact LLMs' responses, we run each vignette multiple times while randomly rotating the vignettes' patient demographics, and perform this process for all three tasks. All vignettes are carefully designed such that the studied sensitive attributes (gender and race) are neutral with respect to the outcomes of interest (like a certain disease).
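To illustrate the red-teaming setup, the sketch below rotates demographic descriptors through a vignette template; the template text and attribute lists are illustrative placeholders, not the exact vignettes from the cited datasets.

```python
# Hypothetical sketch of rotating patient demographics through a vignette.
from itertools import product

GENDERS = ["man", "woman"]
RACES = ["Black", "White", "Hispanic", "Asian"]

# Placeholder vignette; real vignettes come from Q-Pain, the nurse-bias set,
# and NEJM Healer, and are designed so demographics should not change the answer.
VIGNETTE = (
    "Patient D is a {race} {gender} presenting with severe post-operative "
    "pain. Should Patient D receive pain medication? Answer yes or no."
)

def build_prompts():
    """One prompt per (race, gender) combination for the same clinical case."""
    return {
        (race, gender): VIGNETTE.format(race=race, gender=gender)
        for race, gender in product(RACES, GENDERS)
    }

for key, prompt in build_prompts().items():
    print(key, "->", prompt[:60], "...")
```

Because the clinical facts are held fixed, any systematic shift in the model's answers across these combinations is attributable to the demographic attributes alone.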
Q-Pain. We used the Q-Pain dataset [35] to assess bias in pain management. This dataset presents vignettes across various medical contexts. We analyzed the probability distributions of the LLMs' outputs (yes/no for pain medication) to identify social biases in their responses. The dataset is divided into five tasks of 10 vignettes each (chronic non-cancer, chronic cancer, acute cancer, acute non-cancer, postoperative), corresponding to the type of pain experienced by the patients.

Nurse Bias. Following the work proposed in [63], we evaluated LLMs with a vignette dataset simulating triage scenarios. The LLMs rated statements about patients (pain perception, treatment decisions) on a Likert scale. By analyzing these ratings, we assessed potential biases in the models when performing a triage task.

Treatment Recommendation. We evaluated bias in specialist referrals and medical imaging recommendations using vignettes from NEJM Healer [50]. Similar to Q-Pain, we analyzed the probabilities of the LLMs' closed-ended responses (yes/no for referral/imaging) to assess how demographics influence their recommendations.

3.2. LLMs Evaluated

In this paper, we evaluate several commonly used LLMs. To cover a wide variety of models, we include both open-source and commercial LLMs, as well as general-purpose models and models specifically trained on clinical (and, in one case, scientific) text, to quantify the impact of domain-focused fine-tuning. The evaluated LLMs are:

• Open-Source:
  – General-purpose: LLaMa-2 (70B) [52], Gemma (7B) [23], and Mixtral (8x7B) [27]
  – Domain-focused: Galactica (30B) [49], Palmyra-Med (20B) [61], and Meditron (70B) [17]
• Closed-Source:
  – General-purpose: PaLM-2 [4] and GPT-4 [2]

This wide selection of LLMs, with different architectures and (pre-)training data, allows us to assess the potential benefits of certain architectures and of domain-specific fine-tuning for clinical tasks. While some of the above models have versions with varying numbers of parameters, we prioritize the largest and best-performing variant of each available model.

3.3. Prompting Strategies

Prompting methods can play a pivotal role in enhancing the capabilities of LLMs [15]. We investigate different prompting techniques to better explore how these models engage with complex tasks and queries. Evaluating the impact of these methods is essential to understanding LLMs' biases in various domains, including healthcare [24]. Specifically, we evaluated the following three techniques: zero-shot (no prior examples or guidance); few-shot [11], which provides a few examples to guide the LLM; and Chain of Thought [58], which extends few-shot prompting with step-by-step explanations of the answers to enhance the model's reasoning capabilities and improve the accuracy and interpretability of its answers. Since only Q-Pain [35] provides examples with detailed explanations for each sample case, we investigate the prompt engineering process on this dataset and use regular zero-shot prompting for the remaining datasets. Zero-shot prompting better reflects a real-world scenario, in which the physician would not add detailed examples alongside their request. We provide more information on the different tasks and the prompt engineering process in Appendix A. A sketch of how the three prompt styles can be assembled follows.
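As an illustration of the three techniques, the following Python sketch shows how the zero-shot, few-shot, and Chain-of-Thought variants of a Q-Pain-style prompt might be assembled. The exemplar text here is hypothetical; in the actual experiments, few-shot and CoT exemplars come from the Q-Pain dataset, which provides a detailed explanation for each sample case.

```python
QUESTION = "Should the patient receive additional pain medication? Answer Yes or No."

# Hypothetical solved exemplar (one shown for brevity).
EXEMPLAR_VIGNETTE = "Patient A is a 58-year-old woman with acute post-surgical pain..."
EXEMPLAR_ANSWER = "Yes."
EXEMPLAR_RATIONALE = (
    "The patient reports severe pain, has no contraindications, and current "
    "analgesia is insufficient; additional medication is indicated."
)

def zero_shot(vignette: str) -> str:
    # No prior examples or guidance: just the case and the question.
    return f"{vignette}\n{QUESTION}"

def few_shot(vignette: str) -> str:
    # Prepend solved examples to guide the model's answer format.
    return (f"{EXEMPLAR_VIGNETTE}\n{QUESTION}\nAnswer: {EXEMPLAR_ANSWER}\n\n"
            f"{vignette}\n{QUESTION}\nAnswer:")

def chain_of_thought(vignette: str) -> str:
    # Few-shot, but each exemplar answer is preceded by its reasoning steps,
    # prompting the model to articulate its own reasoning before answering.
    return (f"{EXEMPLAR_VIGNETTE}\n{QUESTION}\n"
            f"Reasoning: {EXEMPLAR_RATIONALE}\nAnswer: {EXEMPLAR_ANSWER}\n\n"
            f"{vignette}\n{QUESTION}\nReasoning:")
```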
3.4. Bias Evaluation

To quantify potential social biases in LLM responses across the three clinical tasks, we use the following statistical framework. For the Q-Pain (pain management) and treatment recommendation tasks, where LLM outputs were binary (yes/no for medication or referral), we used Welch's ANOVA tests. This variant of ANOVA is robust to violations of the homogeneity-of-variance assumption and allowed us to assess whether significant differences existed in the distribution of LLM responses across demographic groups. Additionally, we performed pairwise comparisons between each demographic group using two-tailed t-tests to pinpoint specific instances of statistically significant bias. We used t-tests (as opposed to alternatives such as the Mann–Whitney U test) because we observed that our data for these tasks were approximately normally distributed. For the Nurse Bias task, which involved LLM ratings on a Likert scale, we used Pearson's chi-squared tests, which evaluate whether the distribution of LLM ratings differs significantly based on the patient's demographics. A sketch of this testing pipeline is given below.
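The following Python sketch illustrates this statistical framework on synthetic data, using pingouin for Welch's ANOVA and SciPy for the pairwise Welch t-tests and the chi-squared test. The column names and the simulated responses are assumptions for illustration only.

```python
from itertools import combinations

import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)
groups = ["Black woman", "White man", "Asian woman", "Hispanic man"]

# Binary tasks (Q-Pain, treatment recommendation): one row per vignette run,
# holding the model's probability of answering "No" (simulated here).
df = pd.DataFrame({
    "demographic": np.repeat(groups, 10),
    "p_no": np.clip(rng.normal(0.10, 0.04, size=40), 0.0, 1.0),
})

# Global test: Welch's ANOVA across all demographic subgroups.
welch = pg.welch_anova(data=df, dv="p_no", between="demographic")
print(welch[["F", "p-unc"]])

# Pairwise tests: two-tailed Welch t-tests on every pair of subgroups.
for g1, g2 in combinations(groups, 2):
    a = df.loc[df["demographic"] == g1, "p_no"]
    b = df.loc[df["demographic"] == g2, "p_no"]
    t, p = stats.ttest_ind(a, b, equal_var=False)  # two-sided by default
    print(f"{g1} vs. {g2}: t = {t:.2f}, p = {p:.3f}")

# Nurse Bias task: Likert ratings are categorical, so a Pearson chi-squared
# test is run on the demographic-by-rating contingency table.
likert = pd.DataFrame({
    "demographic": np.repeat(groups, 25),
    "rating": rng.integers(1, 6, size=100),  # simulated 1-5 Likert ratings
})
table = pd.crosstab(likert["demographic"], likert["rating"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```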
4. Results

Through extensive experiments on the vignette-based QA tasks, we evaluated the impact of demographics on multiple LLMs' outputs. To avoid fairness gerrymandering [30] (where the results could be considered fair under the prism of either gender or race, but not a combination of the two), we report our results as a combination of both gender and race throughout our experiments.

4.1. Performance on Vignette Question Answering

We evaluated the impact of the rotating demographics on Q-Pain's vignettes [35] and report the results in Figure 2. We used Welch's ANOVA test to determine statistically significant disparities amongst subgroups. While Welch's ANOVA did not reveal statistically significant bias across all models and demographics, we delved deeper with two-tailed t-tests to identify potential biases at a pairwise level. This analysis identified concerning patterns. Notably, for the Chronic Cancer task (referring to patients suffering from chronic pain due to cancer), Hispanic women were significantly more likely (p-value ≤ 0.05) to be recommended pain medication by Palmyra-Med compared to four other groups (Black/Asian/White man, and White woman). Similarly, Meditron, another clinically-tuned model, exhibited biases on three tasks (Chronic Non-Cancer, Acute Cancer, and Post Op), with Hispanic women less likely to receive pain medication. Interestingly, the general-purpose model GPT-4 showed an opposite bias on the Post Op task, favoring Hispanic women for pain medication.

[Figure 2: Results on the Q-Pain dataset. The LLMs were presented with clinical vignettes describing various medical contexts and were asked whether they would prescribe pain medication to the patients. Each demographic is color-coded, and the bars represent the average probability of denying the pain treatment for each task; error bars show the standard deviation. CNC: Chronic Non-Cancer, CC: Chronic Cancer, AC: Acute Cancer, ANC: Acute Non-Cancer, Post Op: Postoperative.]

We also investigated biases in a task designed to evaluate nurses' perception of patients [63], which is particularly critical in triage. Here, the LLMs were asked about their agreement with a statement given a specific case and were asked to answer on a 1-5 Likert scale. We report the results of our experiment on this task in a violin plot in Figure 3. Similar to the results on Q-Pain, Palmyra-Med exhibits the highest disparities among subpopulations. However, we found no statistically significant differences (under a Pearson chi-squared test) in any of the LLMs tested. As opposed to Q-Pain, where we found disparities between specific demographic pairs, no differences are observed for this specific task between any pair of demographics (Figure 6). It is also worth noting that, while the models seem robust to changes in the gender and race of the patients, their answer distributions differ markedly from one another, as seen in the very different shapes in the plot, possibly indicating inconsistent reasoning patterns between models.

[Figure 3: Violin plot of the results on the LLMs' perception of patients based on a Likert scale. The LLMs were presented with patient summaries and statements related to pain perception or illness severity and were asked to rate their agreement with the statement. 1: strongly disagree with the statement; 5: strongly agree.]

We assessed biases in the context of treatment recommendations, where, given a summary of a patient case, the models were asked whether the patient should be referred to a specialist and whether it was necessary to perform advanced medical imaging. We report the results with both gender and race as sensitive attributes in Figure 4. Similar to our results on Q-Pain, we performed Welch's ANOVA tests for all LLMs, as well as two-tailed t-tests on all demographic pairs. We report the p-values under the t-tests in Figure 7. Consistent with our previous findings for the Nurse Bias task, we found no significant discrepancies, either at a global or a pairwise level. It is worth mentioning that GPT-4 and Palmyra-Med again show the largest disparities, especially between Black females and Hispanic males for the Referral Rate (p-value = 0.058), and between White males and Black females for the Imaging Rate (p-value = 0.085). We also found that Mixtral and GPT-4 suggested a specialist visit and advanced medical imaging for most patients. On the other hand, Gemma took a much more conservative approach, with its highest imaging recommendation rate being 2.8%, for Hispanic males.

[Figure 4: Results on the NEJM Healer vignettes in a treatment recommendation scenario. The LLMs were given a clinical vignette and were asked whether they would refer the patient to a specialist and to medical imaging. Imaging Rate is hatched (left side), Referral Rate is filled (right side); each demographic is color-coded, and the black vertical bar represents a standard deviation.]
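The quantities plotted in Figures 2 and 4 are the model's probabilities of answering "No" to a closed-ended question. For open-weights models, such a quantity can be read directly from the next-token distribution; below is a minimal sketch using Hugging Face transformers, where the model name and the single-token answer format are illustrative assumptions rather than a description of the exact evaluation pipeline (closed-source models would instead require API log-probabilities or repeated sampling).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # illustrative choice of open model
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

def probability_of_no(prompt: str) -> float:
    """Return P(No) / (P(Yes) + P(No)) from the next-token distribution."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    # Single-token ids for the answer words; the leading space matters
    # for many BPE vocabularies.
    yes_id = tok.encode(" Yes", add_special_tokens=False)[0]
    no_id = tok.encode(" No", add_special_tokens=False)[0]
    return (probs[no_id] / (probs[yes_id] + probs[no_id])).item()

vignette = "...clinical vignette text..."
prompt = f"{vignette}\nShould the patient receive pain medication? Answer:"
print(probability_of_no(prompt))
```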
4.2. Impact of Prompt Engineering

Our experiments on the Q-Pain dataset [35] provided the foundation to evaluate the impact of prompt engineering on social bias. Accordingly, we reproduced our experiments on this dataset while varying the prompting technique. To quantify social bias in each scenario, we perform a Welch's ANOVA test across all demographic subgroups and report the F-statistic in Figure 5. The test allows us to determine whether there are statistically significant differences among the subgroups, where a higher value indicates greater disparities, and thus stronger biases. Additionally, we report the results for all demographic subgroups in Figures 8 and 9.

[Figure 5: Results of the prompt engineering experiments, reported as Welch's ANOVA F-statistics on the Q-Pain dataset. Higher values signify greater discrepancies between demographics, indicating stronger biases. Detailed results in Figures 8 and 9.]

Notably, one can observe that Chain of Thought prompting not only tends to lead models to administer pain medication to a greater extent (i.e., the preferred outcome), as shown by the lower probability of refusing the pain treatment, but also produces, on average, less biased responses than the other prompting techniques tested. The lower odds of refusing to administer pain medication are particularly visible for Gemma (Figure 8), with an average refusal probability of less than 0.2%. While this pattern holds for most tasks, it is worth mentioning that on the Chronic Cancer task, GPT-4 exhibits worse fairness when using CoT. Additionally, zero-shot prompting tends to show the most extreme evidence of bias, as indicated by the drastically tall blue bars for many tasks and models, especially for Meditron.
We expected zero-shot and few-shot prompting to exhibit the worst biases, as they are simpler techniques and do not push the LLMs towards advanced reasoning steps.

5. Discussion

The burgeoning integration of Large Language Models (LLMs) into clinical decision support systems (CDSs) presents a compelling opportunity to revolutionize healthcare delivery. However, as our investigation into social biases within these models reveals, careful consideration is necessary to ensure equitable and trustworthy implementation. In the journey towards leveraging LLMs in clinical settings, a "double-edged sword" phenomenon has emerged. On one front, the proficiency of LLMs in parsing and understanding vast amounts of unstructured medical data offers an unprecedented opportunity for enhancing patient care and operational efficiency, and possibly for reducing health disparities by increasing access. On the other front, this potential is tempered by the realization that LLMs, much like their human counterparts [39], are susceptible to various types of biases. Our exploration aligns with prior research highlighting the vulnerability of LLMs to biases originating from various steps of their application life cycle (such as model design, training data, and deployment) [9; 21; 32]. We contribute to this body of work by specifically evaluating bias in LLMs across diverse patient demographics and clinical tasks.

Our results demonstrate notable heterogeneity across the models, with only certain LLMs showing concerning signs of bias. Notably, GPT-4, Palmyra-Med, and Meditron exhibited concerning disparities in clinical question answering based on race and gender. For instance, with the Q-Pain dataset (Figure 2), Palmyra-Med was more likely to recommend pain medication for Hispanic women compared to other demographics. GPT-4 showed similar biases in the Post Op task, favoring Hispanic women for pain medication. These findings suggest a potential for bias amplification in clinically-tuned models, warranting further investigation into such models. Additionally, the contrasting bias pattern in GPT-4 highlights that model size (the number of parameters) does not necessarily correlate with bias, as both Palmyra-Med, the second-smallest model (20B), and GPT-4, one of the largest (rumored to be around 1.7T parameters [59]), exhibited concerning biases. This underscores the need to explore factors beyond model size that contribute to bias in LLMs. Moreover, significant variation exists between models, with PaLM-2 withholding pain medication from over 70% of patients in the Post Op task, compared to only 2% for GPT-4. A similar pattern can be observed between tasks, as shown by LLaMa-2 and PaLM-2: both models heavily recommended pain medication to patients suffering from chronic pain due to cancer, while overwhelmingly refusing to do so for patients with postoperative pain. These variations highlight how differently models assess pain based on patient context. Furthermore, the results extend to treatment recommendations as well, where Palmyra-Med showed the greatest disparities, favoring Black females for advanced imaging while making them the least-referred group for specialist visits, notably compared to Asian and Hispanic males. These findings echo recent works [63; 35; 40] in the healthcare domain, emphasizing the urgency of bias mitigation strategies in these sensitive applications.
Even more concerning are the biases shown by clinically-focused LLMs, which are precisely the models "fine-tuned" for healthcare applications and which often report higher overall performance on medical benchmarking tasks [28]. The potential for biased LLM outputs to exacerbate existing healthcare disparities necessitates a proactive approach toward fairness in LLM development and deployment. Our findings underscore the moral imperative to ensure equitable access to high-quality care, regardless of patient demographics. As LLMs become increasingly ubiquitous in healthcare [19], mitigating bias becomes not just a technical challenge but an ethical obligation.

Our exploration of prompt engineering techniques offers promising avenues for mitigating bias in clinical LLMs. The way questions or tasks are framed to LLMs can significantly influence their performance [11; 58] and propensity for biased responses [57]. Most notably, we observed that the Chain of Thought (CoT) approach [58], by encouraging LLMs to articulate their reasoning steps, can demonstrably reduce bias compared to traditional prompting methods. This aligns with the work by Tian et al. [51] highlighting the potential of interpretable prompting techniques such as CoT in promoting fairness and identifying biases within the models' reasoning steps. By explicitly requiring justification for their conclusions, CoT prompting seems to steer LLMs away from potentially biased shortcuts present in their training data. These shortcuts can be statistical patterns that do not necessarily reflect reality, and CoT prompting forces the LLM to build its answer from the ground up, making it less reliant on such biased patterns. Furthermore, the detailed explanation exposes hidden biases within the reasoning process, allowing for identification and potential correction, and serving as an additional set of guardrails for the end user. These findings ignite hope that deliberate and thoughtful prompt engineering may offer a path towards more equitable outcomes. This is especially timely as LLMs are generally used in "frozen" form, and retraining or fine-tuning them is generally neither advised nor feasible for most users [46; 18; 5]. Prompt-based methods (like CoT or soft prompting) therefore offer a pragmatic solution for many LLM applications in healthcare. Additionally, the interpretability of machine learning methods within healthcare is critical and aligns with calls for transparency in ML for healthcare applications [14; 3]. Given the high cost of training ever-larger LLMs, these findings are particularly promising, as hard-prompting [15] methods can also provide interpretable and low-cost solutions, which could be key in real-world CDS applications.

Mitigating bias in clinical LLMs necessitates a multifaceted approach. Firstly, prioritizing the development and adoption of prompt engineering techniques that reduce bias and offer higher interpretability may provide a tangible pathway forward. Secondly, concerted efforts are crucial to create diverse and representative datasets for LLM training or fine-tuning. These datasets should encompass a wide spectrum of demographics, conditions, and clinical scenarios to ensure that LLMs navigate the complexities of real-world healthcare with fairness and accuracy. Thirdly, bolstering the transparency and interpretability of LLMs is essential.
Understanding how ML algorithms arrive at their conclusions empowers stakeholders to identify and rectify biases more effectively [34], which is particularly critical in precision medicine. The regulatory landscape surrounding the use of LLMs in healthcare must also adapt to address these challenges. Guidelines and frameworks mandating the systematic assessment of LLM fairness and bias before clinical deployment could play a pivotal role in safeguarding patient interests. Furthermore, fostering interdisciplinary collaboration between ML practitioners, health equity experts, policymakers, clinicians, and patients is paramount. Such collaboration ensures that LLM development is guided by a comprehensive understanding of the ethical, social, and clinical implications. While LLMs present a powerful tool for enhancing clinical decision-making, their potential is contingent upon mitigating inherent biases. By embracing bias mitigation techniques, fostering inclusive training data, prioritizing interpretability, and establishing robust regulatory frameworks and guardrails, the community can ensure a more responsible and equitable deployment of LLMs in healthcare.

Limitations

Our study remains limited in a few ways. Throughout this paper, we have focused solely on gender and race as sensitive attributes. In practice, there are many more sources of bias in the healthcare domain, such as age and insurance type [45], or combinations of multiple factors [30]. These limitations connect directly to the challenge of structural biases, where existing societal inequalities can become embedded within healthcare data and algorithms, potentially perpetuating discriminatory practices. Our evaluation focuses on the inherent biases within the LLMs themselves; it is important to acknowledge that these biases might interact with factors like clinician judgment and real-world healthcare workflows in complex ways. Additionally, there exists a vast array of clinical tasks that can be tackled by LLMs; in this work, we have focused on a subset of the most popular ones. Lastly, this is an ever-growing field of research, with new LLMs being released frequently. While we have evaluated many of the most popular and recent LLMs, our experiments do not include an exhaustive list of all available variations.

6. Acknowledgements

Our study was supported by NIH awards P20GM103446 and P20GM113125." |
| } |
| ] |
| } |