| { |
| "url": "http://arxiv.org/abs/2404.16766v1", |
| "title": "Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model", |
| "abstract": "While supervised fine-tuning (SFT) has been a straightforward approach for\ntailoring the output of foundation large language model (LLM) to specific\npreferences, concerns have been raised about the depth of this alignment, with\nsome critiques suggesting it is merely \"superficial\". We critically examine\nthis hypothesis within the scope of cross-lingual generation tasks, proposing\nthat the effectiveness of SFT may be constrained by its reliance on prior\ntokens to guide cross-lingual generation. Based on this crucial insight, and in\nresponse to the challenges posed by the costly and limited availability of\nnon-English data for SFT, we introduce a novel training-free alignment method\nnamed PreTTY, which employs minimal task-related prior tokens to bridge the\nfoundation LLM and the SFT LLM, achieving comparable performance without\ntraining. Experiments on machine translation and part-of-speech tagging across\neight languages demonstrate the efficacy of PreTTY in cross-lingual settings.\nRemarkably, by initiating the decoding process with only one or two prior\ntokens, foundation LLMs can achieve performance comparable to their SFT\ncounterparts. This method presents a cost-effective alternative to SFT and\nadvances the democratization of multilingual LLMs.", |
| "authors": "Runzhe Zhan, Xinyi Yang, Derek F. Wong, Lidia S. Chao, Yue Zhang", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "While supervised fine-tuning (SFT) has been a straightforward approach for\ntailoring the output of foundation large language model (LLM) to specific\npreferences, concerns have been raised about the depth of this alignment, with\nsome critiques suggesting it is merely \"superficial\". We critically examine\nthis hypothesis within the scope of cross-lingual generation tasks, proposing\nthat the effectiveness of SFT may be constrained by its reliance on prior\ntokens to guide cross-lingual generation. Based on this crucial insight, and in\nresponse to the challenges posed by the costly and limited availability of\nnon-English data for SFT, we introduce a novel training-free alignment method\nnamed PreTTY, which employs minimal task-related prior tokens to bridge the\nfoundation LLM and the SFT LLM, achieving comparable performance without\ntraining. Experiments on machine translation and part-of-speech tagging across\neight languages demonstrate the efficacy of PreTTY in cross-lingual settings.\nRemarkably, by initiating the decoding process with only one or two prior\ntokens, foundation LLMs can achieve performance comparable to their SFT\ncounterparts. This method presents a cost-effective alternative to SFT and\nadvances the democratization of multilingual LLMs.", |
| "main_content": "Introduction Supervised fine-tuning (SFT) refines large language models (LLMs) using task-specific instruction data to enhance their capability to follow instructions (Touvron et al., 2023; Peng et al., 2023) and to align their outputs with human preferences and safety considerations (Ouyang et al., 2022; Rafailov et al., 2023; Dong et al., 2023b; Yuan et al., 2023). This process is often termed \u201calignment\u201d, signifying the tailoring of model outputs *Work was done during a visit to Westlake University. \u0000 Co-corresponding authors. to conform to specific downstream requirements. Nevertheless, current research casts doubt on the necessity and potential adverse impacts of SFT. But the alignment achieved through SFT is often considered to be \u201csuperficial\u201d, with the process potentially repurposing pre-existing knowledge from pre-training to merely reshape outputs to meet specific criteria (Zhou et al., 2023; Lin et al., 2023). It has been observed that even a small-scale SFT training dataset can produce significant alignment effects (Liu et al., 2023; Xia et al., 2024). On the other hand, recent empirical studies (Luo et al., 2023; Dong et al., 2023a) have raised concerns that SFT might hurt the knowledge acquired during its pre-training phase, leading to serious consequences like catastrophic forgetting. Not only is there no definitive consensus on the necessity of SFT, but the majority of these studies also focus on monolingual tasks. LLMs still encounter challenges in handling complex crosslingual generation tasks (Schioppa et al., 2023; Wang et al., 2023). Current research on crosslingual alignment primarily seeks to extrapolate or align English capabilities to other languages using the SFT paradigm (Zhang et al., 2023; Chai et al., 2024; Xu et al., 2024), yet there remains a gap in exploring the specific impacts of SFT-based cross-lingual alignment. Furthermore, given the potential risk of SFT leading to the forgetting of pre-training knowledge, the question of how to achieve cross-lingual alignment without training remains underexplored. To bridge these gaps, our study conducts an indepth examination of the impact of SFT on crosslingual generation. We investigate the influence of SFT on the decoding patterns of foundation models in cross-lingual contexts, hypothesizing that the success of SFT largely hinges on the selection of initial prior tokens that are critical for eliciting taskspecific generation in the target language. Furthermore, the observed decoding similarities between 1 arXiv:2404.16766v1 [cs.CL] 25 Apr 2024 \fInstruction: Translate the following sentence from English to Ukrainian: \u201cWe now have 4-month-old mice that are non-diabetic that used to be diabetic,\u201d he added. \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month\u2026 \u041c\u0438 \u0442\u0435\u043f\u0435\u0440\u0456\u0448\u043d\u0456\u0445 4 \u043c\u0456\u0441\u044f\u0446\u0456\u0432 \u043c\u0430\u044e\u0442\u044c \u043c\u0438\u0448\u0435\u0439, \u044f\u043a\u0456 \u0440\u0430\u043d\u0456\u0448\u0435 \u0431\u0443\u043b\u0438 \u0434\u0456\u0430\u0431\u0435\u0442\u0438\u043a\u0430\u043c\u0438 \u2026 Foundation LLM SFT-tuned LLM SFT-based Alignment \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month\u2026 Foundation LLM + Prior Tokens + SFT Pipeline Pretty: Prefix TexT as a Yarn ? ? 
Responding to these insights, we introduce a training-free alignment method named "PRETTY" for cross-lingual and non-English tasks. The Prefix TexTs act as a Yarn (PRETTY), linking the foundation LLM and the SFT LLM and eliciting near-SFT performance from the foundation LLM. Specifically, we augment the original input with a few tokens that serve as decoding priors, and then prompt the foundation LLM to resume decoding from this modified input. In most cases, only one or two task-related prior tokens are needed, and the method for constructing these prior tokens is flexible across various kinds of language resources, fostering the democratization of multilingual LLMs.

We conducted experiments on machine translation (Goyal et al., 2022), cross-lingual summarization (Bhattacharjee et al., 2023), and non-English part-of-speech (POS) tagging (Liang et al., 2020) across eight languages. These tasks exemplify cross-lingual generation and multilingual language understanding, and they provide ample non-English test data for evaluating effectiveness across varying levels of resource availability. The experimental results demonstrate that PRETTY can effectively align the foundation model to match the SFT model's performance without training, merely by adding two prior tokens at the start of decoding.
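To make the decoding procedure concrete, here is a minimal sketch of prior-token-guided decoding with a HuggingFace causal LM. It is an illustration only: the checkpoint name, the prompt wording, and the choice of the Ukrainian prior token "Ми" (taken from the Figure 1 example) are assumptions, not the paper's exact implementation.

```python
# Minimal sketch of prior-token-guided decoding (PreTTY-style).
# Assumptions: a HuggingFace causal LM and a hand-picked prior token;
# the concrete model, prompt, and prior are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

instruction = (
    "Translate the following sentence from English to Ukrainian:\n"
    "They're not cured, but they're no longer diabetic.\n"
)
prior = "Ми"  # one or two task-related prior tokens in the target language

# Append the prior text to the instruction and let the foundation model
# resume decoding from this modified input.
inputs = tokenizer(instruction + prior, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# The prior tokens are part of the prompt, so the completed translation is
# the prior plus the newly generated continuation.
continuation = tokenizer.decode(
    output_ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(prior + continuation)
```

Because the prior tokens are simply prepended to the model's continuation, no parameter update of any kind is involved, which is the sense in which the method is training-free.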
2 Iceberg Model of SFT

2.1 Preliminaries

Pre-training The pre-training (PT) of LLMs is primarily conducted through language modeling tasks on large-scale unlabeled data (Touvron et al., 2023; Achiam et al., 2023). During this phase, given a sequence X_{PT} of length N and a context window k, the optimization objective is to maximize the joint probability P_{LM}:

P_{LM}(X_{PT}) = \prod_{i=1}^{N} P(x_i | x_{i-k:i-1})    (1)

This objective encourages the model to generate text that naturally follows from the preceding context. However, this "text completion" behavior can become a bottleneck when models are prompted to switch languages or to follow specific instructions for cross-lingual generation. It is frequently observed that when prompted with English input and instructed to produce text in a different language, as illustrated in the upper example of Figure 1, the foundation model simply continues decoding in English.

SFT SFT leverages labeled data pairs (X_{ins.}, Y) to equip models with the ability to follow instructions. This stage aims to maximize the probability of the expected answer Y conditioned on the input text X_{ins.}, where X_{ins.} consists of the task instruction and the task input:

P_{SFT}(Y | X_{ins.}) = \prod_{j=1}^{T} P(y_j | y_{1:j-1}, X_{ins.})    (2)

SFT is crucial for aligning foundation models to perform task-specific instructions, effectively transforming a general-purpose LLM into an instruction-following assistant. However, data quality, training costs, and the imbalance of multilingual data hinder the democratization of assistant LLMs. Moreover, as mentioned above, SFT may be harmful to pre-training knowledge. It is therefore meaningful and important to understand the underlying mechanism of SFT-based alignment and to propose a more efficient alignment method.
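As an aside, the objective in Equation (2) is commonly implemented by masking the instruction tokens out of the loss so that only the answer Y contributes. The sketch below follows that common practice for HuggingFace-style causal LMs; it is a generic illustration, not the paper's training code.

```python
# Minimal sketch of the SFT objective in Equation (2): maximize
# P(Y | X_ins.) by masking instruction tokens out of the loss.
import torch
import torch.nn.functional as F

def sft_loss(model, tokenizer, instruction: str, answer: str) -> torch.Tensor:
    ins_ids = tokenizer(instruction, return_tensors="pt").input_ids
    ans_ids = tokenizer(answer, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([ins_ids, ans_ids], dim=1)

    # Labels: -100 marks positions ignored by the cross-entropy loss,
    # so only the answer tokens y_1..y_T contribute, as in Equation (2).
    labels = input_ids.clone()
    labels[:, : ins_ids.shape[1]] = -100

    logits = model(input_ids).logits
    # Shift so that the logits at position t predict the token at t+1.
    shift_logits = logits[:, :-1].transpose(1, 2)  # (batch, vocab, seq-1)
    shift_labels = labels[:, 1:]
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
```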
2.2 Beneath the SFT-based Alignment

Prior Knowledge Hypothesis It is worth noting that pre-training corpora also contain sequences that naturally express task-specific information, which imparts certain capabilities to foundation LLMs. For example, the presence of semantically equivalent expressions in the pre-training text may enable an LLM to acquire machine translation ability during the pre-training stage (Radford et al., 2019). Despite this extensive prior knowledge, the foundation LLM still struggles with complex cross-lingual generation tasks. Going beyond existing studies, we provide more concrete insights into this issue by prompting foundation LLMs with various instructions (Bawden and Yvon, 2023). Notably, only 31.8% of these prompts successfully elicit translation capability from the foundation LLMs (see Appendix B.3 for details). This deficiency may stem from two main factors. First, the proportion of text with the aforementioned characteristics in the pre-training corpus X_{PT} is still relatively small, and most of it is far from resembling human instruction text X_{ins.}. Consequently, the model is more likely to predict tokens suitable for completing formal texts than tokens required by task-specific instructions. As a result, the foundation LLM often fails to produce tokens y ∈ Y_{1:T} in the intended target language. Second, the predominance of English in the pre-training data skews the token generation probabilities of the foundation LLM. Given a cross-lingual context, the model favors predicting tokens in English, while the token probabilities for other languages remain comparatively low. For example, English data comprises up to 90% of the Llama2 pre-training data (Touvron et al., 2023), which may lead models to generate text with an English-centric bias.

[Figure 2: The agreement between the SFT model and the foundation model in the selection of the next token. Once the prior token is provided, the token chosen by the SFT model can also be found within the top-K candidates of the foundation model.]

This hypothesis appears reasonable when we revisit Equation (1) and Equation (2). The probability P_{LM}(X_{PT}) of the next-token prediction of the foundation model is conditioned on the distribution of the pre-training text X_{PT}. SFT narrows the probability space for token selection, adjusting the parameters to better align with the instruction distribution; that is, the probability P_{SFT}(y | X_{ins.}) is conditioned on the distribution of the instruction text X_{ins.}.

Experimental Settings To validate the above hypothesis, we selected machine translation, a representative cross-lingual task, as our analytical testbed. The main research method involves quantifying the differences and similarities in the decision space and token-selection behavior between the foundation LLM and the SFT-aligned LLM. For the model selection, we chose the foundation Llama2 7B model and conducted supervised fine-tuning on it using the Alpaca dataset (https://github.com/tatsu-lab/stanford_alpaca; Taori et al., 2023). The optimization was carried out with a cosine learning rate scheduler, with the maximum learning rate set to 2e-5 and a warmup ratio of 0.03. Training was performed on two Nvidia H800 GPUs using the LoRA parameter-efficient fine-tuning technique (Hu et al., 2022), with a cumulative batch size of 64. Other hyper-parameters follow those of the original Alpaca settings.

[Figure 3: The probability distribution of tokens selected by various models. Incorporation of a prior token causes the decision probabilities of both models to converge across all data instances.]

[Figure 4: The divergence in probability distributions across the entire vocabulary during decoding. The prior token significantly reduces the discrepancy between the foundation model and the SFT model.]

A Prior Token Elicits Silent Majority Inspired by the categorization of token shifts by Lin et al. (2023), we propose to quantify the agreement of token selection between the foundation LLM θ_{PT} and the SFT LLM θ_{SFT}. Given the same prefix input \hat{X}, we measure whether the next token selected by the SFT LLM, y_{SFT}, is among the top-K tokens y_{PT} with the highest probabilities in the decision space of the foundation LLM. This can be formally expressed as:

y_{SFT} = \arg\max_{y \in V} P(y | \hat{X}; \theta_{SFT})
y_{PT} = \{ y | \arg\mathrm{top\text{-}}K_{y \in V} P(y | \hat{X}; \theta_{PT}) \}
\mathrm{Agreement@}K = \frac{1}{L} \sum_{l=1}^{L} \mathbb{1}[y_{SFT} \in y_{PT}]    (3)

where V is the vocabulary shared by the two models and L is the number of examples in the dataset. We compare the agreement of the token selections made by the models under the same prefix text \hat{X} in two experimental setups. The first setup uses the instruction text as the prefix, i.e., \hat{X} = X_{ins.}; the second takes the first token decoded by the SFT model as a prior token and appends it to the original instruction prefix, i.e., \hat{X} = [X_{ins.}, y^{(1)}_{SFT}]. For the SFT model, the second setup is equivalent to continuing its own decoding, whereas for the foundation model it becomes decoding with an added prior token.

Figure 2 illustrates the agreement between the foundation model's predictions and those of the SFT model regarding the selection of the next token, given an identical text prefix. Across the entire translation dataset, we observe that after incorporating merely one prior token, the foundation model exhibits a high degree of agreement with the SFT model in terms of token selection. This demonstrates that the alignment effect of SFT in cross-lingual generation tasks is also somewhat superficial. Even in instances where the token with the highest probability differs between the two models, 90.8% of the tokens chosen by the SFT model are present within the "silent majority" of the decision space of the foundation model, specifically among the top 20 most probable token choices.
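A minimal sketch of the Agreement@K measurement in Equation (3), assuming two HuggingFace-style causal LMs that share one tokenizer and vocabulary; the function name and loop are illustrative.

```python
# Minimal sketch of Agreement@K (Equation (3)): does the SFT model's
# greedy next token fall within the foundation model's top-K candidates?
import torch

@torch.no_grad()
def agreement_at_k(found_model, sft_model, tokenizer, prefixes, k=20):
    hits = 0
    for prefix in prefixes:  # each prefix is X_ins. (optionally + prior token)
        ids = tokenizer(prefix, return_tensors="pt").input_ids
        next_sft = sft_model(ids).logits[0, -1].argmax()          # y_SFT
        topk_pt = found_model(ids).logits[0, -1].topk(k).indices  # y_PT
        hits += int(next_sft in topk_pt)
    return hits / len(prefixes)
```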
Lens of Distribution Instead of focusing on the coverage of token-selection outcomes, we also examine the decision dynamics and similarities from the perspective of the overall probability distribution, with the data settings consistent with the previous setup. First, as shown in Figure 3, after adding a prior token, the probabilities of the next tokens chosen by the two models have closely aligned distributions. The foundation model exhibits a high probability given the instruction text as a prefix because it prefers to continue the instruction text rather than complete the cross-lingual semantic transformation. Additionally, we quantify the distribution disparities between the two models through the probability distribution over the vocabulary. The disparity metrics used are the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence, and cross-entropy (Kullback, 1997). As depicted in Figure 4, the disparity in the decision space of the foundation model decreases significantly after adding the prior token, aligning more closely with the SFT model. These findings indicate that such prior tokens serve a dual function: they not only steer the foundation model towards generating tokens pertinent to cross-lingual generation but also modulate the decision space to align more closely with the task-specific distribution.

3 Pretty: Prefix TexT as a Yarn

3.1 Motivation

The observations discussed earlier confirm that SFT effectively narrows the decision space of the foundation model during text generation conditioned on instruction text. The disparity in token selection between the foundation LLM and the SFT LLM, however, may not require a training-based transfer methodology to be reduced. By appending a prior token to the instruction text, the next-token choices of the two models become largely consistent, and in the vast majority of cases, the tokens chosen by the SFT model are also found among the high-probability candidates of the foundation model. These phenomena show that the alignment elicited by SFT is somewhat superficial in cross-lingual generation tasks and motivate us to propose a training-free alignment method that leverages these prior tokens.

3.2 Formulation

Revisiting Equation (1) and Equation (2), the goal of a training-free approach is to make the conditional decoding probability of the foundation model approximate that of the SFT model. Ideally, the selected prior tokens X_{pri.} = {x_{pri.}} should satisfy the following criterion:

P(y_{PT} | [X_{ins.}, X_{pri.}]; \theta_{PT}) \approx P(y_{SFT} | X_{ins.}; \theta_{SFT})    (4)

where y_{PT} and y_{SFT} represent the outputs of the foundation and SFT models, respectively. It is important to note that a single prior token may not be an optimal solution, as it cannot be derived directly. Hence, we extend our approach to appending multiple prior tokens, grouped to form a prefix text.
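Criterion (4) can also be probed empirically. The sketch below, a simple illustration under the same shared-vocabulary assumption as before rather than the paper's implementation, scores a candidate prefix text by the KL divergence between the SFT model's next-token distribution given X_ins. and the foundation model's distribution given [X_ins., X_pri.]; a smaller value indicates a better approximation.

```python
# Minimal sketch probing criterion (4): after appending prior tokens, how
# close is the foundation model's next-token distribution to the SFT model's?
import torch
import torch.nn.functional as F

@torch.no_grad()
def prior_kl(found_model, sft_model, tokenizer, instruction: str, prior_text: str):
    ins_ids = tokenizer(instruction, return_tensors="pt").input_ids
    aug_ids = tokenizer(instruction + prior_text, return_tensors="pt").input_ids

    log_p_sft = F.log_softmax(sft_model(ins_ids).logits[0, -1], dim=-1)   # P(y_SFT | X_ins.)
    log_p_pt = F.log_softmax(found_model(aug_ids).logits[0, -1], dim=-1)  # P(y_PT | [X_ins., X_pri.])

    # KL(P_SFT || P_PT) over the shared vocabulary; smaller means the prior
    # brings the foundation model closer to the SFT model's decision space.
    return F.kl_div(log_p_pt, log_p_sft, log_target=True, reduction="sum")
```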
3.3 Construction of Prior Tokens

To ensure that the proposed method is applicable to a wide array of languages, we propose three construction strategies based on the availability of language resources, aiming to guarantee the universality of our approach.

SFT Prior represents an ideal scenario in which the first few tokens generated by an SFT model are used as priors. This method is theoretically sound when the SFT model is derived from the same foundation model, because it directly approximates Equation (4) by sampling x_{pri.} ~ {y_{SFT}}. In practical applications, this is most suitable for high-resource languages, given the imbalanced capabilities of SFT models in other languages. Additionally, SFT could potentially degrade the knowledge and abilities that the foundation model has already acquired; in such cases, using prior tokens from the SFT model while decoding with the foundation model can yield better results. This situation is discussed further in the subsequent sections.

Refined Prior is more readily accessible for most languages and tasks. We can take the output tokens generated by a smaller model trained for a specific downstream task and use them as prior tokens, achieving a form of weak-to-strong generalization (Burns et al., 2023).

Pseudo Prior For extremely low-resource language pairs with no labeled data for downstream tasks, both SFT and Refined priors are difficult to obtain. For cross-lingual tasks, we can instead create pseudo labels in the target language to serve as prior tokens. For instance, in machine translation, we might use bilingual dictionaries to acquire pseudo prior tokens. However, the quality and accuracy of pseudo labels remain uncertain, and the extent of their impact on the generative performance of the foundation LLM is not yet clear. We explore this problem further in the discussion of the experimental results later in the paper.
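As a concrete illustration of the dictionary-based pseudo prior with the back-off strategy used for machine translation, consider the following sketch; the dictionary entries and sentence are toy assumptions.

```python
# Minimal sketch of pseudo-prior construction for translation: look up
# source words in a bilingual dictionary and back off to the first word
# that has an entry, using its target-language form as the prior token.
def pseudo_prior(source_sentence: str, dictionary: dict[str, str]) -> str | None:
    for word in source_sentence.lower().split():
        target = dictionary.get(word.strip(".,!?\"'"))
        if target is not None:
            return target  # first dictionary hit becomes the prior token
    return None  # no coverage: fall back to plain decoding

# Toy example (assumed entries, for illustration only):
en_uk = {"we": "Ми", "mice": "миші"}
print(pseudo_prior("We now have 4-month-old mice ...", en_uk))  # -> "Ми"
```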
4 Experiments

We examine the effectiveness of our proposed training-free alignment method on three tasks: machine translation, cross-lingual summarization, and non-English POS tagging. Machine translation is a prototypical cross-lingual generation task, entailing the transformation of a sequence from a source language to a target language (Bahdanau et al., 2015; Vaswani et al., 2017; Zhan et al., 2023). Cross-lingual summarization requires the model to generate a summary of an article in a different language (Bhattacharjee et al., 2023; Chen et al., 2023). Although POS tagging (Manning, 2011; Nivre et al., 2017; Chiche and Yitagesu, 2022) primarily assesses the model's ability to understand monolingual text, we include it as a multilingual experiment to show the universality of our method.

4.1 Experimental Settings

Data We use Flores-101 (Goyal et al., 2022) and CrossSum (Bhattacharjee et al., 2023) as benchmarks for the machine translation and cross-lingual summarization tasks, respectively. For POS tagging, we use the POS test split of the XGLUE benchmark (Liang et al., 2020), which is derived from the Universal Dependencies Treebank v2.5. To investigate performance across languages with varying resources, we selected eight languages based on the pre-training data proportions disclosed in the Llama2 technical report (Touvron et al., 2023): French, German, Chinese, Russian, Ukrainian, Portuguese, Hindi, and Arabic. The first four account for more than 0.1% of the Llama2 pre-training data, Ukrainian and Portuguese fall below 0.1%, and Hindi and Arabic are below 0.05%. For the Llama2 model, we accordingly categorize them as high-resource, low-resource, and extremely low-resource languages, respectively.

Models and Baselines The settings of the Llama2 foundation model and the SFT model are consistent with those described in Section 2.1. To further demonstrate the generality of our proposed method, we also include the Mistral-7B LLM family (Jiang et al., 2023), covering both out-of-the-box SFT and foundation models. In the machine translation task, the Llama2 foundation model does not tend to generate translations when given explicit translation instructions. While this is a normal phenomenon according to our previous discussion, to ensure a fair comparison we also searched for better prompts for the foundation model; this prompting approach is referred to as "Llama2-7B-PROMPTING" in subsequent sections. For POS tagging, we experimented with various instructions and selected one that consistently prompts both the foundation model and the SFT model to reliably generate classification results in text. Although we report zero-shot performance for the aforementioned tasks, we found that even out-of-the-box SFT models cannot produce stable output for the cross-lingual summarization task. Hence, we prepend a constant demonstration to the input, which also assesses the effectiveness of our proposed method under the in-context learning paradigm (Dong et al., 2023c).

Sources of Prior Tokens The sources for crafting prior tokens are:

• SFT Prior: We take the first k tokens of the output produced by the SFT model as the prior tokens. Where multiple SFT models are available, we select the one with better performance.
• Refined Prior: We use downstream task models with smaller parameter sizes as the source of refined priors: the distilled 600M variant of the NLLB-200 translation model (https://huggingface.co/facebook/nllb-200-distilled-600M; Costa-jussà et al., 2022), the mT5 cross-lingual summarization model (https://hf.co/csebuetnlp/mT5_m2m_crossSum), and the Unicoder-NLU model (https://github.com/microsoft/Unicoder/; Huang et al., 2019), respectively.
• Pseudo Prior: The pseudo prior is applied to the two cross-lingual tasks, since it can exploit cross-lingual language resources. We create pseudo prior tokens for the machine translation task by referencing dictionary entries (see Appendix B.4 for dictionary information). For cross-lingual summarization, we first extract keywords from each passage using KeyBERT (Grootendorst, 2020) and then perform word-by-word translation. Not all initial sentence tokens are covered by the dictionary; to handle such instances, a back-off strategy is implemented, in which the target-language equivalent of the first available dictionary token is used as the prior token.

English-Centric Models:

| Model | En-Zh spBL. | En-Zh CoM. | En-Uk spBL. | En-Uk CoM. | Zh-En spBL. | Zh-En CoM. | Uk-En spBL. | Uk-En CoM. | Avg. spBL. | Avg. CoM. | %SFT. (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 13.6 | 80.9 | 24.0 | 83.3 | 23.5 | 85.1 | 34.4 | 85.5 | 23.9 | 83.7 | - |
| Llama2-7B-Chat | 7.8 | 67.2 | 18.1 | 71.0 | 18.5 | 81.3 | 30.4 | 83.3 | 18.7 | 75.7 | - |
| Llama2-7B-PROMPTING | 5.9 | 64.1 | 11.0 | 60.9 | 24.3 | 84.8 | 34.2 | 85.0 | 18.9 | 73.7 | 80.4 |
| Llama2-7B | 7.7 | 72.0 | 0.2 | 32.4 | 12.0 | 74.4 | 9.3 | 59.2 | 7.3 | 59.5 | 52.5 |
| +PRETTY (SFT Prior) | 13.3 | 80.0 | 23.0 | 83.1 | 23.7 | 84.9 | 33.6 | 85.3 | 23.4 | 83.3 | 98.8 |
| +PRETTY (Pseudo Prior) | 12.0 | 75.7 | 18.1 | 74.1 | 16.9 | 80.3 | 27.2 | 78.3 | 18.6 | 77.1 | 85.4 |
| +PRETTY (Refined Prior) | 14.2 | 80.5 | 24.1 | 83.8 | 24.0 | 84.9 | 34.6 | 85.6 | 24.2 | 83.7 | 100.9 |
| Mistral-7B-Instruct | 6.6 | 64.6 | 20.3 | 78.2 | 20.5 | 83.2 | 32.9 | 84.8 | 20.1 | 77.7 | - |
| Mistral-7B | 1.2 | 42.6 | 0.3 | 30.8 | 19.9 | 77.1 | 21.5 | 69.4 | 10.7 | 55.0 | 46.2 |
| +PRETTY (SFT Prior) | 13.8 | 78.1 | 23.1 | 79.2 | 20.0 | 82.3 | 32.1 | 83.3 | 22.3 | 80.7 | 117.2 |
| +PRETTY (Pseudo Prior) | 13.3 | 75.8 | 20.1 | 75.7 | 16.5 | 79.7 | 24.9 | 77.3 | 18.7 | 77.1 | 107.2 |
| +PRETTY (Refined Prior) | 15.9 | 81.3 | 24.9 | 82.9 | 21.5 | 83.0 | 32.3 | 83.9 | 23.7 | 82.7 | 124.6 |
Non-English-Centric Models:

| Model | De-Fr spBL. | De-Fr CoM. | Fr-De spBL. | Fr-De CoM. | Zh-Pt spBL. | Zh-Pt CoM. | Pt-Zh spBL. | Pt-Zh CoM. | Avg. spBL. | Avg. CoM. | %SFT. (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 29.8 | 81.5 | 24.1 | 80.9 | 16.6 | 81.4 | 11.3 | 78.6 | 20.5 | 80.6 | - |
| Llama2-7B-Chat | 6.2 | 68.0 | 7.3 | 64.5 | 3.0 | 67.8 | 6.2 | 66.6 | 5.7 | 66.7 | - |
| Llama2-7B-PROMPTING | 22.2 | 77.4 | 15.4 | 73.3 | 14.4 | 78.9 | 4.4 | 64.1 | 14.1 | 73.4 | 78.5 |
| Llama2-7B | 1.0 | 51.1 | 3.2 | 54.0 | 0.9 | 61.4 | 7.3 | 70.0 | 3.1 | 59.1 | 47.6 |
| +PRETTY (SFT Prior) | 28.2 | 80.6 | 23.0 | 80.4 | 16.3 | 81.1 | 10.5 | 77.4 | 19.5 | 79.9 | 97.2 |
| +PRETTY (Pseudo Prior) | 18.3 | 68.9 | 17.3 | 72.2 | 11.6 | 70.4 | 5.0 | 65.6 | 13.1 | 69.3 | 73.9 |
| +PRETTY (Refined Prior) | 29.1 | 81.4 | 22.9 | 80.4 | 17.1 | 81.1 | 12.2 | 79.4 | 20.3 | 80.6 | 100.4 |
| Mistral-7B-Instruct | 22.1 | 76.1 | 20.4 | 75.9 | 10.5 | 74.8 | 3.3 | 60.2 | 14.1 | 71.8 | - |
| Mistral-7B | 1.2 | 46.1 | 1.6 | 40.6 | 1.0 | 52.8 | 0.4 | 43.6 | 1.1 | 45.8 | 36.5 |
| +PRETTY (SFT Prior) | 20.1 | 73.3 | 20.7 | 75.1 | 11.0 | 74.7 | 6.8 | 67.3 | 14.7 | 72.6 | 113.8 |
| +PRETTY (Pseudo Prior) | 18.1 | 66.4 | 17.3 | 70.4 | 5.9 | 65.6 | 3.7 | 59.4 | 11.3 | 65.5 | 87.7 |
| +PRETTY (Refined Prior) | 28.3 | 78.8 | 22.3 | 78.5 | 14.2 | 78.6 | 13.6 | 80.6 | 19.6 | 79.1 | 153.8 |

Table 1: Translation performance of different models on Flores-101 subsets. In the original typesetting, bold values indicate the best performance among foundation models and the overall best results are underlined. "%SFT." denotes the relative performance compared to the best SFT model of each family.

For the two cross-lingual tasks, the first k = 2 tokens are chosen as the prior tokens; this helps avoid inadequate guidance from single non-informative tokens such as punctuation or numbers. In the case of the pseudo prior, due to the back-off strategy, only one token is used, for a fair comparison. For the POS tagging task, the strategy is more straightforward, with only the first k = 1 label considered as the prior token.

4.2 Evaluation

To ensure the integrity of the output data from all models, we standardized the output by cleaning it in accordance with the specific output style of each model. Subsequently, we conducted a manual inspection to guarantee that only the required labels were retained.

Task-specific Metrics We use two metrics to evaluate translation quality: spBLEU (Goyal et al., 2022; https://github.com/mjpost/sacrebleu/) and COMET (Rei et al., 2020; https://github.com/Unbabel/COMET). We employ the ROUGE (Lin, 2004) and LaSE (Bhattacharjee et al., 2023) metrics for the evaluation of summarization quality. For the POS tagging task, we report both the precision score and the F1 score.

Relative Performance We further compute the ratio of the performance scores of the foundation model to those of the SFT model under the different strategies. This ratio serves as a metric for assessing the extent to which the foundation model approximates the SFT model's performance when a given strategy is applied.
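For reference, translation scoring with the cited toolkits might look as follows. This is a sketch under stated assumptions: it uses sacreBLEU's "flores101" tokenizer for spBLEU and the unbabel-comet 2.x API with an assumed reference-based checkpoint name; the strings are toy placeholders.

```python
# Minimal sketch of the translation evaluation: spBLEU via sacreBLEU's
# Flores-101 tokenizer, and COMET via Unbabel's reference-based model.
from sacrebleu.metrics import BLEU
from comet import download_model, load_from_checkpoint  # unbabel-comet 2.x (assumed)

srcs = ["We now have mice that are no longer diabetic."]   # toy source
hyps = ["Ми тепер маємо мишей, які більше не діабетики."]  # toy system output
refs = ["У нас тепер є миші, які більше не хворіють на діабет."]  # toy reference

spbleu = BLEU(tokenize="flores101").corpus_score(hyps, [refs])
print("spBLEU:", spbleu.score)

comet = load_from_checkpoint(download_model("Unbabel/wmt20-comet-da"))  # assumed name
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
print("COMET:", comet.predict(data, batch_size=8).system_score)
```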
Llama2-7B family (w/ constant 1-shot demonstration):

| Model | En-Zh R2 | En-Zh RL | En-Zh LS | En-Hi R2 | En-Hi RL | En-Hi LS | Uk-Pt R2 | Uk-Pt RL | Uk-Pt LS | Ar-Ru R2 | Ar-Ru RL | Ar-Ru LS | Avg. R2 | Avg. RL | Avg. LS | %SFT. (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 7.0 | 12.4 | 11.9 | 1.7 | 10.7 | 17.3 | 1.5 | 6.1 | 5.8 | 0.1 | 0.5 | 1.3 | 2.6 | 7.4 | 9.1 | - |
| Llama2-7B-Chat | 6.3 | 11.6 | 8.7 | 1.5 | 11.7 | 27.1 | 2.5 | 8.3 | 7.1 | 0.0 | 0.3 | 0.2 | 2.6 | 8.0 | 10.7 | - |
| Llama2-7B | 9.3 | 16.6 | 29.2 | 1.6 | 10.2 | 15.3 | 0.8 | 4.0 | 1.9 | 0.6 | 4.1 | 15.5 | 3.1 | 7.6 | 12.1 | 262.4 |
| +PRETTY (SFT Prior) | 7.4 | 13.9 | 25.9 | 1.5 | 9.7 | 12.9 | 1.9 | 6.7 | 9.8 | 0.1 | 0.4 | 0.8 | 2.7 | 6.7 | 9.8 | 106.3 |
| +PRETTY (Pseudo Prior) | 8.0 | 14.5 | 29.1 | 1.4 | 9.9 | 14.5 | 2.5 | 9.1 | 13.6 | 1.2 | 5.9 | 23.5 | 3.3 | 8.5 | 15.4 | 387.5 |
| +PRETTY (Refined Prior) | 11.2 | 19.0 | 32.6 | 1.6 | 10.8 | 15.9 | 3.4 | 10.5 | 11.3 | 1.5 | 7.9 | 30.1 | 4.4 | 10.5 | 17.5 | 490.6 |

Mistral-7B family (w/ constant 1-shot demonstration):

| Model | En-Zh R2 | En-Zh RL | En-Zh LS | En-Hi R2 | En-Hi RL | En-Hi LS | Uk-Pt R2 | Uk-Pt RL | Uk-Pt LS | Ar-Ru R2 | Ar-Ru RL | Ar-Ru LS | Avg. R2 | Avg. RL | Avg. LS | %SFT. (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mistral-7B-Instruct | 5.9 | 12.2 | 17.2 | 1.0 | 10.3 | 23.4 | 1.5 | 6.2 | 17.7 | 0.4 | 2.6 | 12.8 | 2.2 | 7.8 | 17.8 | - |
| Mistral-7B | 12.3 | 20.9 | 44.5 | 1.6 | 10.6 | 17.6 | 4.8 | 12.9 | 27.7 | 1.8 | 6.5 | 23.3 | 5.1 | 11.2 | 21.6 | 206.1 |
| +PRETTY (SFT Prior) | 9.7 | 17.6 | 40.7 | 1.4 | 10.0 | 17.0 | 2.3 | 7.9 | 17.5 | 0.2 | 1.1 | 3.2 | 3.4 | 8.0 | 15.0 | 114.5 |
| +PRETTY (Pseudo Prior) | 9.9 | 17.5 | 41.0 | 1.4 | 9.9 | 17.4 | 3.1 | 11.6 | 35.1 | 1.7 | 7.9 | 32.9 | 4.0 | 10.2 | 23.5 | 195.8 |
| +PRETTY (Refined Prior) | 15.0 | 24.1 | 49.6 | 1.8 | 11.3 | 19.7 | 5.5 | 16.5 | 46.9 | 2.6 | 10.9 | 42.0 | 6.2 | 13.8 | 29.7 | 275.6 |

Table 2: Summarization performance of different models on CrossSum subsets. "R2"/"RL" and "LS" refer to the ROUGE-2/ROUGE-L and LaSE scores, respectively. In the original typesetting, bold values indicate the best performance among foundation models and the overall best results are underlined. "%SFT." denotes the relative performance compared to the best SFT model.

| Model | Fr Prec. | Fr F1 | Zh Prec. | Zh F1 | Pt Prec. | Pt F1 | Ru Prec. | Ru F1 | Ar Prec. | Ar F1 | Avg. Prec. | %SFT. (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 48.2 | 42.8 | 38.6 | 36.3 | 40.7 | 35.9 | 42.3 | 36.7 | 34.4 | 30.8 | 38.7 | - |
| Llama2-7B | 45.0 | 37.9 | 39.8 | 36.2 | 39.8 | 33.2 | 42.5 | 33.8 | 36.5 | 32.1 | 37.7 | 97.4 |
| +PRETTY (SFT Prior) | 54.8 | 50.0 | 38.0 | 33.5 | 49.1 | 45.3 | 49.7 | 44.1 | 35.1 | 31.1 | 43.1 | 111 |
| +PRETTY (Refined Prior) | 59.3 | 54.8 | 43.0 | 38.8 | 54.5 | 50.6 | 55.3 | 49.2 | 44.0 | 39.6 | 48.9 | 126 |

Table 3: POS tagging performance of different Llama2 models on XGLUE subsets. In the original typesetting, bold values indicate the best performance among foundation models and the overall best results are underlined. "%SFT." denotes the relative performance compared to the Alpaca model.

4.3 Main Results

Machine Translation As shown in Table 1, for the machine translation task, using up to two prior tokens as decoding guidance allows the base model to achieve performance comparable to that of a model after SFT. Moreover, on some language pairs, translation performance surpasses the SFT model when guided by refined prior tokens from a smaller model. For the Llama2 model family, the prior tokens provided by the SFT model, although slightly less effective, still allow the foundation model to reach 98% of the SFT model's performance. The use of pseudo labels derived from a dictionary is the least effective strategy, yet it still surpasses the results achieved through costly prompt engineering.

Cross-lingual Summarization The results presented in Table 2 indicate that the foundation model outperformed the SFT model in this in-context learning scenario. For prior-guided decoding, the performance of the foundation model degraded when using prefix tokens from the SFT model; the small performance gap in this setting suggests that the alignment achieved by the SFT model is relatively "superficial".
Notably, the performance of the Llama2 foundation model improved significantly when the other priors were provided, even when translated keywords were used as pseudo labels.

Non-English POS Tagging The POS tagging results are presented in Table 3. They align with the insights gleaned from the machine translation task regarding the strategy of prior-token construction. Notably, for the POS tagging task, the performance of the SFT model on most languages falls short of the foundation model, suggesting that SFT can detrimentally affect knowledge learned at the pre-training stage. Encouragingly, the foundation model empowered by an auxiliary prior token surpasses both the SFT model and its own prompting results, highlighting the potential of our proposed method in mitigating the catastrophic forgetting problem associated with SFT.

5 Analysis and Discussion

5.1 Quality of Prior Tokens

To investigate the quality of prior tokens from different sources and their impact on final performance, we further analyze why the prior tokens given by the SFT model are less effective than those from external auxiliary models in the POS tagging task. Unlike the machine translation task, the result at each position in the POS task is definite, so we can verify whether a prior token corresponds to a ground-truth label. The results in Table 4 confirm two points. First, even if the prior tokens provided by the SFT model are of low quality, the foundation model does not suffer from severe error propagation. Second, the final performance of the proposed method is still associated with the quality of the prior tokens. This suggests that prior tokens closely aligned with the ground truth can steer the foundation model towards a more accurate decision trajectory, thereby yielding superior performance.

| Prior source | Fr | Zh | Pt | Ru | Ar |
|---|---|---|---|---|---|
| SFT Prior | 18.3 | 18.3 | 3.74 | 16.3 | 12.1 |
| Refined Prior | 88.9 | 88.9 | 88.54 | 87.7 | 79.6 |

Table 4: Accuracy of prior tokens used in the POS tagging task. SFT prior tokens are of inferior quality.

5.2 Choice of Prior Tokens

Based on the findings from the previous section, if incorrect labels used as prior tokens can still elicit the ability of the foundation model, could random prior tokens in the target language trigger cross-lingual generative capabilities? To investigate this, we used random tokens of different parts of speech as the prior tokens in the English-Chinese machine translation task. For instance, "Modal Prior" refers to the use of a randomly picked modal verb in Chinese as the initial token. The results in Table 5 indicate that the model could not be aligned to a better decision trajectory by these random prior tokens, whether they were function words or tokens with actual meaning. This supports the validity of our proposed methods for constructing prior tokens and also complements the previous findings. From this, we can summarize a rule about prior tokens: they can be of low quality, but they should not be completely unrelated to the target sequence.

| Model | spBLEU | COMET | BLEU |
|---|---|---|---|
| Llama2-7B | 7.7 | 72.01 | 16.1 |
| + Modal Prior | 8.0 | 68.29 | 16.0 |
| + Adverb Prior | 6.4 | 63.72 | 13.1 |
| + Random Prior | 6.2 | 57.11 | 11.5 |

Table 5: Comparison of translation performance using three types of random prior tokens.

5.3 Number of Prior Tokens

Figure 5 depicts the relationship between the number of prior tokens provided and the resulting changes in translation performance. Performance generally improves as more tokens are added.
Additionally, we note that introducing two prior tokens appears to be a performance inflection point, which may be due to instances where the initial token is a punctuation mark or a number.

[Figure 5: Impact of incrementally adding refined prior tokens on performance (%SFT.) across Flores-101 subsets (En-Zh, De-Fr, Pt-Zh, Zh-Pt).]",
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.14050v1", |
| "title": "Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms", |
| "abstract": "Emerging scholarship suggests that the EU legal concept of direct\ndiscrimination - where a person is given different treatment on grounds of a\nprotected characteristic - may apply to various algorithmic decision-making\ncontexts. This has important implications: unlike indirect discrimination,\nthere is generally no 'objective justification' stage in the direct\ndiscrimination framework, which means that the deployment of directly\ndiscriminatory algorithms will usually be unlawful per se. In this paper, we\nfocus on the most likely candidate for direct discrimination in the algorithmic\ncontext, termed inherent direct discrimination, where a proxy is inextricably\nlinked to a protected characteristic. We draw on computer science literature to\nsuggest that, in the algorithmic context, 'treatment on the grounds of' needs\nto be understood in terms of two steps: proxy capacity and proxy use. Only\nwhere both elements can be made out can direct discrimination be said to be `on\ngrounds of' a protected characteristic. We analyse the legal conditions of our\nproposed proxy capacity and proxy use tests. Based on this analysis, we discuss\ntechnical approaches and metrics that could be developed or applied to identify\ninherent direct discrimination in algorithmic decision-making.", |
| "authors": "Hilde Weerts, Aislinn Kelly-Lyth, Reuben Binns, Jeremias Adams-Prassl", |
| "published": "2024-04-22", |
| "updated": "2024-04-22", |
| "primary_cat": "cs.AI", |
| "cats": [ |
| "cs.AI", |
| "cs.CY" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Emerging scholarship suggests that the EU legal concept of direct\ndiscrimination - where a person is given different treatment on grounds of a\nprotected characteristic - may apply to various algorithmic decision-making\ncontexts. This has important implications: unlike indirect discrimination,\nthere is generally no 'objective justification' stage in the direct\ndiscrimination framework, which means that the deployment of directly\ndiscriminatory algorithms will usually be unlawful per se. In this paper, we\nfocus on the most likely candidate for direct discrimination in the algorithmic\ncontext, termed inherent direct discrimination, where a proxy is inextricably\nlinked to a protected characteristic. We draw on computer science literature to\nsuggest that, in the algorithmic context, 'treatment on the grounds of' needs\nto be understood in terms of two steps: proxy capacity and proxy use. Only\nwhere both elements can be made out can direct discrimination be said to be `on\ngrounds of' a protected characteristic. We analyse the legal conditions of our\nproposed proxy capacity and proxy use tests. Based on this analysis, we discuss\ntechnical approaches and metrics that could be developed or applied to identify\ninherent direct discrimination in algorithmic decision-making.", |
| "main_content": "INTRODUCTION The challenges posed by bias in algorithmic systems have received extensive scrutiny in computer science and legal scholarship. Discrimination is unlawful in both United States (US) and European Union (EU) law when an individual is either treated less favourably because of a protected characteristic (known as disparate treatment or direct discrimination, respectively) or when the application of a seemingly neutral provision, criterion, or practice puts certain individuals at a particular, unjustifiable, disadvantage (disparate impact or indirect discrimination) [33]. A rich literature working to connect legal non-discrimination frameworks to statistical fairness metrics [5, 35, 53, 56, 57] has primarily focused on disparate impact or indirect discrimination as the key avenue to challenge automated bias [7]. This approach has potential advantages, particularly in the context of complex systems: focusing on their effects avoids difficult technical questions, from causation to interpretability. But there are also clear drawbacks: disparate impact and indirect discrimination will not be unlawful where the deployer of a system can provide a proportionate justification. A ride-hailing company might argue, for example, that even though a facial recognition system has higher error rates for certain populations, its use is the only feasible way of ensuring passenger safety. This focus is (doctrinally) justified in US law: a clear requirement of intentional discrimination or explicit protected characteristicbased classification severely limits the scope of disparate treatment. In consequence, most scholars have assumed that unless variables have been purposefully selected to disadvantage protected groups (which is likely to be rare in practice), excluding protected characteristics from the features of a machine learning model is necessary and sufficient for avoiding disparate treatment. In EU law, on the other hand, the picture is more complex. As an emerging body of work has argued, the scope of direct discrimination is significantly broader [3, 7]. In the absence of any requirements for intent or moral culpability, at least certain types of algorithmic bias are caught in the scope of direct discrimination \u2013 with significant practical implications: treating an individual less favourably \u2018on grounds of\u2019 of a protected characteristic is near-universally unlawful; only very arXiv:2404.14050v1 [cs.AI] 22 Apr 2024 \fFAccT \u201924, June 3\u20136, 2024, Rio de Janeiro, Brazil limited justifications can be adduced. A reframing of algorithmic discrimination as direct thus has clear legal attractions. However, the ensuing shift from effects-based to reason-focused treatment raises significant technical questions. How can a claimant show the required connection between their protected characteristic and the ensuing adverse treatment meted out by a complex algorithmic system? Previous work on algorithmic direct discrimination has identified two strands in the relevant EU case law [3].1 First, a decision may be inherently discriminatory because it was made using a criterion which is inextricably linked to a protected characteristic. In Dekker [11], for example, the Court regarded pregnancy as inextricably linked to sex and found a decision based on pregnancy to be inherently discriminatory on grounds of sex. 
This form of direct discrimination closely resembles notions of proxy discrimination in the algorithmic fairness literature [e.g., 5, 47, 52], which occurs when one or more input features act as a stand-in for a protected characteristic. Second, a decision may be made through subjectively discriminatory mental processes. This will occur when a protected characteristic plays a role (whether consciously or subconsciously) in a decision maker's less favourable treatment of a person [15, 17]. Given the emphasis on mental processes in human decision-making, subjective discrimination raises complex questions when translated into the algorithmic context. For example, case law on subconscious subjective discrimination has been developed intuitively by judges who generally ask on the facts whether a given (human) decision was 'on grounds of' a protected characteristic. As a result, the jurisprudence does not establish any clear test for subjective discrimination of a type which could be translated into a machine context.² In this work, we therefore focus on the first subcategory: inherent direct discrimination.

The primary contribution of this work is a framework to guide both litigants and developers in identifying unlawful inherent discrimination. Taking inspiration from notions of proxy discrimination proposed in the computer science literature [52], we suggest that the measurement of inherent discrimination can be subdivided into two distinct problems: proxy capacity and proxy use. We suggest that to produce an inherently discriminatory outcome, a model must (i) contain a criterion (or criteria) which is (or are) inextricably linked to a protected characteristic (proxy capacity) and (ii) apply that criterion (or criteria) with the result that an individual is treated less favourably (proxy use).

Before turning to a detailed development of this framework, an important preliminary point should be raised. Translating complex legal requirements into strict mathematical tests is an exercise fraught with risk, not least given the danger of legal nuance being erased in the process [54]. That risk is particularly pronounced in the present context, given that the Court of Justice of the European Union (CJEU) has approached the concept of inherently discriminatory criteria in an intuitive, and consequently sometimes inconsistent, manner.

[Footnote 1: We have adopted the approach, also taken by Adams-Prassl et al. [3], of leveraging the detailed reasoning of the UK courts to understand the EU jurisprudence. We have done so in light of the close relationship between the two regimes, which have developed to mirror one another over a period of decades.]
[Footnote 2: More generally, identifying a framework for the assessment of implicit algorithmic bias will be challenging in circumstances where machine learning models are trained on data from an inherently unequal world.]
This is driven both by inconsistencies in the judicial interpretation of inherent discrimination, and the existence of additional categories, notably subjective direct discrimination, as well as other potential forms of direct discrimination as yet to be defined by the courts (some of which may be novel to the algorithmic context [7]). These are beyond the scope of the present paper. Our purpose in this work is to map out the clearest basis on which a claimant might argue successfully that an algorithm is inherently directly discriminatory within the current legal framework. Our focus on a 'core' case of unlawful inherent discrimination consequently means that our framework is not designed to be translated into a test which, if met, would indicate that a model is free from any risk of inherent direct discrimination. Rather, it is intended to be of use to prospective claimants and to algorithmic developers seeking to identify preliminary 'red flags' in their models. The development of a framework is important, given the significant potential for undetected inherent algorithmic discrimination. This is particularly true in the context of machine learning, where machine learning practitioners may not know that certain features are linked to protected characteristics, especially where such a link only arises when multiple features are combined.

In developing this analysis, the remainder of this paper is structured as follows. Section 2 begins with a discussion of existing work on algorithmic proxy discrimination. In Section 3, we then turn to the legal background to our proposed proxy capacity and proxy use tests. Based on this analysis, Section 4 covers technical approaches and metrics that could be developed or applied. The technical and legal implications of this approach are discussed in Section 5. Section 6 concludes.

2 ALGORITHMIC PROXY DISCRIMINATION

Before articulating why we believe inherent direct discrimination can be understood as a type of proxy discrimination, it is necessary first to summarise some key definitions and distinctions around this concept, as established in the existing algorithmic fairness literature on which our argument builds. Proxy discrimination occurs when a facially neutral feature is used in a predictive model as a stand-in for a protected characteristic [5, 47, 52]. Historically, the term proxy discrimination was used in the US to refer to forms of intentional discrimination in human decision-making, where the use of a proxy was motivated by its association with protected group membership [47]. Perhaps the most well-known example is redlining, a practice in which services were denied to residents of particular neighbourhoods considered "risky" on the basis of racial or ethnic composition. With the proliferation of algorithmic decision-making, concerns regarding unintentional forms of proxy discrimination have become more prominent. It is widely acknowledged that the exclusion of protected characteristics from predictive models, where predictions produce particular benefits or burdens, is insufficient to avoid unfavourable treatment [30]. Specifically, if a target variable is associated with a protected characteristic, any sufficiently accurate predictive model will reproduce this association.
Even if a protected characteristic is excluded from the training data, it can often still be inferred (implicitly) through correlations with other features in the training data. For example, in cities where neighbourhoods are segregated along ethnic lines, postal code can be a proxy for ethnicity. While the term 'proxy discrimination' frequently occurs in the literature, (implicit) definitions vary. To clarify the different concerns captured by proxy discrimination, we leverage the framework proposed by Tschantz [52], which suggests that proxy discrimination consists of two primary elements: proxy capacity and proxy use.

2.1 Proxy Capacity

Proxy capacity refers to the extent to which A, a variable that represents a protected characteristic, can be recreated from a proxy variable P. Both statistical and causal notions of proxy capacity have been proposed. Statistical measures of proxy capacity typically quantify some form of statistical association between P and A, such as correlation. Causal notions of proxy capacity take into consideration the causal relationships between P and A. Assumed causal relationships are typically modelled in the form of a directed acyclic graph,³ in which each node represents a variable and each directed edge the existence of a causal relationship between two variables [46]. Kilbertus et al. [41] constrain the notion of 'proxy' to variables that are descendants of the protected characteristic in such a causal graph. To make clear the distinction between statistical and causal capacity, we can consider the causal graphs depicted in Figure 1. Assuming the causal structure in Figure 1a, both A and P are caused by U, but are otherwise not causally related. The causal graph in Figure 1b depicts the same assumptions, in addition to the assumption that A causes P. Both graphs depict a scenario in which proxy P has statistical capacity for A, but only Figure 1b adheres to the causal definition of proxy capacity put forward by Kilbertus et al. [41].

2.2 Proxy Use

Some form of proxy capacity seems necessary to refer to a variable as a proxy for a protected characteristic. However, proxy capacity alone is insufficient to speak of proxy discrimination. For example, consider a face recognition data set. Modern machine learning algorithms would likely be able to accurately predict skin colour, which falls under the protection of race, from an image. While this can be problematic in many scenarios,⁴ it does not imply that all applications of facial recognition based on that data set actually use the capacity to proxy race in a way that disadvantages members of a protected group (in this case, racial groups). This is captured by the second primary element of proxy discrimination, proxy use, which considers the extent to which a proxy P induces predictions Ŷ. For intrinsically interpretable models, proxy use can be deduced from the internal model structure.

[Footnote 3: In mathematics, a graph is a structure which denotes a set of objects ("nodes") and the relationships between them ("edges"). In a directed acyclic graph, relationships are directed explicitly from one node to another ("directed") and none of the edges form a directed cycle ("acyclic").]
[Footnote 4: For example, Spanton and Guest [51] draw parallels with the pseudoscientific practice of physiognomy.]
[Figure 1: Two causal graphs that can produce statistical proxy capacity. (a) A and P are associated through confounder U (via the path A ← U → P), but P is not a causal descendant of A. (b) P is a causal descendant of A, indicating the causal proxy capacity proposed by Kilbertus et al. [41].]

For example, in a linear regression model, the strength of the coefficient that corresponds to the proxy variable P indicates proxy use. For more complex model classes, proxy use can be deduced via an interventional analysis that reveals the influence of a proxy P on predictions Ŷ. Again, Kilbertus et al. [41] take a broader perspective on proxy use and define proxy discrimination to occur if a causal intervention on a proxy variable changes the predicted outcome. This notion of proxy discrimination considers not only the effect of changing the proxy variable in isolation but also the downstream effects that intervening on the proxy would have on the other input features, due to causal relationships between the proxy and those features.⁵

[Footnote 5: Kilbertus et al. [41] follow the causal calculus ("do-calculus") paradigm as proposed by Pearl et al. [46]. Causal intervention in this example means that we assume a world in which we have changed education major to be a certain value, e.g., all applicants majored in computer science. Following causal calculus theory, such interventions can be purely hypothetical: it is not necessary to be able to execute the intervention in practice to make causal inferences. For example, imagine a linear regression model used for selecting resumes for a software engineering position that includes an applicant's education major and years of programming experience as input features. We are interested in potential gender proxy discrimination based on an applicant's major. As it turns out, the model has a high, positive coefficient for years of experience, but a negligible coefficient for major. Assuming that a person's major has a causal influence on years of experience (because students majoring in software engineering are programming during their studies, compared to others who are more likely to only begin after graduation), intervening on major would affect years of programming experience. The causal perspective on proxy use taken by Kilbertus et al. [41] could therefore consider this model to proxy-discriminate on major, even though the variable that represents major hardly affects an applicant's prospects.]

Regardless of exactly how proxies and their causal effects on Ŷ are dealt with, the distinction between capacity and use is important. Many sets of features used in a model may have the capacity to act as a proxy for protected characteristics, without the model using that capacity when generating its outputs. As we will argue below, this key distinction separates (inherent) direct discrimination from indirect discrimination.
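To make the capacity/use distinction concrete in code, the following sketch, a toy simulation on assumed synthetic data rather than anything drawn from the case law, constructs a variable P with statistical capacity for A and contrasts a model that uses that capacity with one that does not.

```python
# Toy simulation of proxy capacity vs. proxy use on synthetic data:
# P proxies A (capacity) in both runs, but only the first model uses P.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
A = rng.integers(0, 2, n).astype(float)   # protected characteristic
P = A + rng.normal(0.0, 0.5, n)           # proxy: statistically recoverable from A
X = rng.normal(0.0, 1.0, n)               # legitimate feature, independent of A

Y_use = 2.0 * P + rng.normal(0.0, 0.1, n)     # outcome actually driven by the proxy
Y_no_use = 2.0 * X + rng.normal(0.0, 0.1, n)  # outcome independent of the proxy

feats = np.column_stack([P, X])
for name, Y in [("proxy use", Y_use), ("capacity without use", Y_no_use)]:
    model = LinearRegression().fit(feats, Y)
    # Interventional check of proxy use: shift P, hold X fixed, and
    # measure the change in predictions.
    effect = (model.predict(np.column_stack([P + 1.0, X])) - model.predict(feats)).mean()
    print(f"{name}: coef[P]={model.coef_[0]:.2f}, intervention effect={effect:.2f}")
```

In both runs P can be recreated from A (capacity), but only the first model's predictions respond to an intervention on P (use). Note that this simple intervention holds the other feature fixed, so it does not capture the downstream causal effects considered by the broader notion of Kilbertus et al. [41].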
2.3 What Makes A Proxy A Proxy?

In addition to the two primary elements of proxy use and proxy capacity, some authors have further qualified algorithmic proxy discrimination based on the reason a proxy acts as a proxy. In some cases, this qualification is implicit. For example, Kilbertus et al. [41] define a proxy as "a descendant of [the protected characteristic] in the causal graph we choose to label as a proxy", which could suggest a distinction between causal descendants that would be wrong to act upon (i.e., proxies) and others (i.e., non-proxies). Other authors are more explicit. Taking a legal perspective, Prince and Schwarcz [47] view proxy discrimination as a subset of disparate impact (or, analogously, indirect discrimination) in which the proxy feature derives its predictive power from its association with a protected characteristic, which Tschantz [52] refers to as capacity-induced proxy discrimination. Prince and Schwarcz [47] argue that to distinguish between proxy discrimination and other forms of indirect discrimination, we must consider the causal mechanisms that are responsible for proxy capacity (i.e., a relationship between A and P) and proxy use (i.e., the extent to which predictions Ŷ are induced by P). As machine learning models are designed to accurately predict target variable Y, the latter is highly related to the association between proxy P and target variable Y. According to the definition put forward by Prince and Schwarcz [47], proxy discrimination occurs when the association between proxy P and target outcome Y is the result of a causal link between the protected characteristic A and target outcome Y, possibly mediated by another unavailable, unmeasurable, or excluded variable.

[Figure 2: Examples of causal graphs in which A and P are associated with Y. (a) Example of proxy discrimination as defined by [47], with A = HTT mutation, P = patient support group, Y = health outcomes: P is associated with Y due to a direct causal link between A and Y (via the path Y ← A → P). (b) Example of proxy discrimination as defined by [47], with A = gender, P = part-time work, U = driver care, Y = crash: P is associated with Y due to an indirect causal link between A and Y (via the path Y ← U ← A → P). (c) Example of indirect non-proxy discrimination according to [47], with A = religion, P = vocabulary, U = level of education, Y = qualifications: P is associated with Y, but its predictive power is derived not from its relationship with A but from its relationship with U (via the path P ← U → Y).]

For example, Huntington's disease is directly caused by a mutation in the HTT gene, which affects healthcare outcomes. An insurer could hypothetically use membership of a patient support group as a proxy for the HTT mutation to predict health outcomes (Figure 2a). Similarly, women tend to drive more safely than men, resulting in fewer car insurance claims (Figure 2b). If more direct measures of driver care are unavailable, a model may use a facially neutral characteristic that proxies for gender, such as part-time work, to predict insurance claims.
In contrast, Prince and Schwarcz [47] argue that if a model uses a proxy feature that 'fortuitously happens' to be correlated with a protected characteristic, this should not be viewed as proxy discrimination, but as a more general form of indirect discrimination. For example, we may assume that (1) religious affiliation is influenced by the level of education, where a higher level of education can cause a loss of religious belief [4, 58], and (2) one's level of education affects one's vocabulary (Figure 2c). A hiring algorithm could pick up on the vocabulary used in an application letter to predict qualifications. Due to the assumed relationship between level of education and religion, the use of vocabulary in a predictive model could disproportionately affect individuals with religious beliefs. According to the definition of Prince and Schwarcz [47], this would not be a case of proxy discrimination. As discussed below, these qualifications and distinctions help clarify how the legal concept of inherent discrimination can be considered through the framework of proxy discrimination, without thereby collapsing the distinction between direct and indirect discrimination.

3 LEGAL CONDITIONS FOR INHERENT DIRECT DISCRIMINATION

Having covered how the algorithmic fairness literature has qualified the nature of proxy discrimination, we can now explain why inherent discrimination in an algorithmic context can be viewed as a special case of algorithmic proxy discrimination, in which the proxy is not only associated with a protected characteristic but 'inextricably linked' to it. As such, inherent discrimination as a form of direct discrimination in EU law shares several commonalities with notions of proxy discrimination in the algorithmic fairness literature. At the same time, there are important differences. Notions of proxy discrimination have been primarily motivated by, or analysed under, the disparate impact or indirect discrimination doctrines, resulting in an emphasis on discriminatory effects and potential justifications. In contrast, as a form of direct discrimination, legal conditions for inherent discrimination are reason-focused and generally do not leave room for justifications. Consequently, the emphasis shifts from potential justifications of discriminatory effects to establishing an 'indissociable' or 'inextricable' connection between a protected characteristic and a criterion. Following the general characterisation of algorithmic proxy discrimination put forward in the previous section, we suggest that a successful case of inherent direct discrimination requires showing (1) proxy capacity: there exists an 'inextricable link' between a protected characteristic A and its proxy P, and (2) proxy use: a causal relationship between the proxy P and the predicted outcome Ŷ such that members of a protected group are treated less favourably. (Note that this causal relationship refers only to the predictive model; a causal relationship may or may not exist between P and the real-world outcome Y.) The case law is not always explicit on why a criterion is 'inextricably linked' to a protected characteristic, or how significant a criterion has to be in order for its application to constitute or result in less favourable treatment. In the remainder of this section, we explain how the courts have approached each of the two steps.
3.1 Proxy Capacity

What does it mean to speak of an 'inextricable link' between a protected characteristic and its proxy? Many inextricably linked criteria recognised by the courts have some intuitive appeal. Pregnancy and sex, as mentioned above, is one example. While not a necessary or sufficient condition, the capacity for pregnancy is clearly linked to being assigned female at birth. Another example is marriage and sexual orientation in countries which prohibit same-sex marriage. If a couple is unable to marry, a credit scoring model which uses marriage to predict creditworthiness will be inherently discriminatory on grounds of sexual orientation, as the CJEU recognised in Tadao [14].

What is less clear is where the boundary lies. How strong a relationship must there be between the protected characteristic A and its proxy P to speak of an 'inextricable link'? In positing that some forms of proxy discrimination are best viewed as inherent direct discrimination, we are not claiming that all cases of proxy discrimination are inherent direct discrimination. The notion of inextricability thus needs to be able to pick out that subset of proxy discrimination which is (inherent) direct discrimination, without also including cases of indirect discrimination. In the United Kingdom (UK), judges have approached this challenge by holding that a criterion will only be inextricably linked to a protected characteristic if it 'exactly corresponds' with it [21]. UK courts state that while it is not necessary for every person with the protected characteristic to be disadvantaged by the criterion used [22], it is necessary that every person disadvantaged by the criterion have the protected characteristic. Put differently, the only people disadvantaged by the criterion must be those who fall within the defined protected group. The classic case that UK judges cite to make this point [20, 21, 23] is James v Eastleigh [10], a case in which free entry was provided to a swimming pool for those who had reached statutory retirement age. At the time, the statutory retirement age was 60 for women, but 65 for men. As such, women aged 60 to 64 could enter the pool for free, but men aged 60 to 64 could not: there was an 'exact correspondence' between those disadvantaged by the 'free for retirees' policy and those with the protected characteristic of being male. It did not matter that the policy only disfavoured a subgroup of men (i.e. those between the ages of 60 and 64), as long as only men were disadvantaged by the policy.

This test works well when a criterion is based on an expressly discriminatory policy (even when that policy lies outside the organisation's own remit, like the national statutory retirement age policy), because such situations permit bright lines to be drawn. Where the relationship between the criterion and the protected characteristic arises instead from 'organic' social conditions, it is less apt. In the recent case of WABE [24], for example, the CJEU considered whether a rule which prohibits the wearing of all visible political, philosophical or religious signs in the workplace would constitute inherent direct discrimination.
The CJEU answered this question in the negative, since "every person may have a religion or belief". The Court went on to find, however, that a prohibition which is limited to the wearing of conspicuous, large-sized signs of political, philosophical or religious beliefs is liable to constitute direct discrimination, because certain religions "require the wearing of a large-sized sign, such as a head covering". The approach developed by the CJEU is thus broader than the UK courts' 'exact correspondence' test: not all individuals who wear conspicuous, large-sized signs of political, philosophical or religious beliefs do so for religious reasons; indeed, they might be choosing conspicuously to display a political or philosophical view (res ipsa loquitur). In other words, the criterion was liable to disadvantage (some) individuals without the protected characteristic; but the Court was alive to the particular proximity between the criterion and the requirements of certain religions.

In short, the courts have not expressly set out a reliable test with clear boundaries. How, then, should we define a threshold for determining whether a criterion is 'inextricably linked'? In challenging cases of alleged inherent discrimination, courts frequently revert to stating that allegations "can often be answered by asking the 'but for' question: but for the [claimant's protected characteristic], would she have been treated more favourably?" [18]. This makes sense, given that the fundamental test is whether the outcome was on grounds of the protected characteristic. Take each of the cases mentioned above. A claimant in a country with a sex-based retirement age could reasonably argue that he would have reached the retirement age but for his sex; a same-sex couple who live in a country which prohibits same-sex marriage could readily say that they would be married but for their sexual orientation; a woman could reasonably argue that she would not be pregnant but for her sex; and a Muslim woman could argue that she would not wear a headscarf but for her religion. It therefore appears that while a probabilistic relationship between a criterion and a characteristic must be established in order for indirect discrimination to be made out, for direct discrimination a deterministic relationship is required. (Note that the 'but for' test is just one potential way of understanding the causation standard applied by courts in both the EU and the UK, and reflects the vast majority of outcomes in direct discrimination cases. However, neither UK nor EU courts have held that the concept of 'on grounds of' or 'because of' can be answered solely by reference to a 'but for' test, including because the protected characteristic may form part of the background but not act as a causative factor (a point discussed further below). Our reasoning here thus contains some inevitable simplification. As noted above, this nuance in the case law is one reason that a definitive mathematical 'test' for algorithmic direct discrimination cannot be laid down.)

The above examples are simple: each refers to a single criterion with an intuitive relationship with the protected characteristic. Such simplicity is likely to be more elusive in the algorithmic context. In particular, previous work has established that multiple variables may constitute an inextricably linked proxy when used together [7]. We will refer to this as a complex proxy. To take a very basic example, a school might have previously operated on a boys-only basis but switched to a co-educational model in a particular year.
Attendance at the school might not be inextricably linked to sex on its own, because after some time there may be plenty of female graduates. However, a requirement that a person both attended the school and is above a certain age will be inherently discriminatory if the only people who can satisfy that requirement are those who attended the school when it was single-sex (i.e. male graduates). In the machine learning context, a range of criteria might be identified that, in combination, are inherently discriminatory. It remains to be seen how vast a range of complex proxies the courts will be willing to consider when identifying an inextricable link.

The above approach – which essentially involves asking whether a criterion would apply to a person but for their protected characteristic – also raises challenging philosophical questions. While counterfactual approaches towards measuring discrimination have been common in law and the social sciences [42] and, more recently, have been proposed in the computer science literature [25, 43, 45, 46], analysing the causal effects of social categories in this way has been contested in the causal inference literature as well as the philosophy of social science. Some arguments against counterfactual accounts of discrimination are practical: if protected categories cannot be manipulated, we cannot empirically verify whether a protected characteristic was a cause of unfavourable treatment [42]. Other critiques of counterfactual reasoning are of a more theoretical nature, arguing that reducing protected characteristics to simplistic categories that can be manipulated in isolation (e.g., in causal graphs such as Figure 1) fails to represent them meaningfully as social constructs [36, 39, 40, 42]. In contrast, proponents of counterfactual analysis argue that it has the advantage of requiring auditors to make explicit the exact causes of unfavourable treatment, be it a direct effect of a relevant but simplistic measure of a protected characteristic or the consequences of a macro-level social structure [8, 44, 48]. The law has some way to go to address such critiques. At present, it remains the case that courts will routinely deploy a hypothetical comparator to examine whether direct discrimination arose in a given case. To the extent that this paper proposes a means by which that case law can be understood from the perspective of algorithmic discrimination, we also adopt the approach taken by the courts while recognising the potential methodological, philosophical, and political issues it gives rise to.

3.2 Proxy Use

Direct discrimination not only requires that the proxy be inextricably linked to a protected characteristic: it also has to be used, such that the individual is treated less favourably than they otherwise would have been. Courts often make reference to the 'but for' test here, too – although some caution is warranted. The fact that a protected characteristic "is a part of the circumstances in which the treatment complained of occurred, or of the sequence of events leading up to it, does not necessarily mean that it formed part of the ground, or reason, for that treatment" [19].
To take a simple example, a female employee at a women-only gym who steals from a bag left in the changing rooms at her workplace and is consequently dismissed could perhaps say that she would not have been dismissed 'but for' having had access to the female changing rooms (an inherently discriminatory criterion) – but she would not have a good claim for direct discrimination, because her changing room access was not the reason for her dismissal. It was merely a background factor. A real-life example of this reasoning comes from the UK case of B v A [13]. In that case, a female employee who had been in a romantic relationship with her manager was dismissed following the breakdown of the relationship. The employee made a claim of direct sex discrimination, arguing that she would not have been dismissed but for the fact that she was a woman. Her claim was unsuccessful. The Tribunal held that "[t]he dismissal occurred because of relationship breakdown, nothing more and nothing less than that" [13]. In other words, the employee's sex was a background factor, but not a reason for the dismissal in itself. The best way to put it is that while the proxy criterion need not have been the only or even the main cause of the outcome, it must have had a significant influence on it [9]. While the courts have not explicitly designated this 'significant influence' threshold as a bright-line test, scholars have identified that it best reflects the consistent approach in the jurisprudence [7].

The concept of 'less favourable treatment' is a broad one. It includes, for example, refusal of entry to a restaurant or shop; receipt of a smaller pension or lower pay; subjection to verbal abuse; or exclusion from certain educational opportunities [1, 32]. In the algorithmic context, the most likely form of less favourable treatment will be a poorer algorithmic score, and the implications that carries in the particular context. Other examples might include a large language model generating racist or sexist text [2]. Note that once it is shown that an inextricably linked proxy had a significant influence on a given claimant receiving a more disadvantageous outcome, there is (generally) no opportunity for the decision-maker to try to justify it by explaining why. This distinguishes the use of inextricably linked proxies from the use of proxies which are only weakly causally linked to protected characteristics, or linked by statistical association alone, and which thus fall within the realm of indirect discrimination law (unless caught by another form of direct discrimination). The use of the latter types of proxy may be justified by reference to reasonable and proportionate ends, whereas direct discrimination (as noted above) cannot be justified. For example, in Test-Achats [12], the CJEU ruled that the use of gender in the determination of insurance premiums was incompatible with the principle of equal treatment, even though gender is genuinely useful in helping to predict insurance claims.

4 IDENTIFYING POTENTIAL INHERENT DIRECT DISCRIMINATION

The current leading approach for algorithmic fairness assessments is disaggregated analysis, in which a particular statistic (e.g., the proportion of predicted positives) is compared across groups [e.g., 55].
While some types of disaggregated analyses have been argued to be consistent with the prima facie evidence that is required for indirect discrimination [53], such an effects-based analysis is insufficient for identifying and assessing inherent direct discrimination. In this section, we outline technical approaches and metrics that could be developed or applied to measure proxy capacity and proxy use in the context of inherent direct discrimination.

4.1 Identifying Potential Proxy Capacity

From our legal analysis in the previous section, we can identify several conditions that shape our technical problem formulation. First, while it is unclear from case law how strong the relationship between the protected characteristic and its proxy must be, it is clear that the variables must at least be highly associated and possibly even deterministically related. Importantly, the inextricable link need not hold for all protected groups. For example, in jurisdictions where same-sex couples cannot get married, marital status is inextricably linked to homosexuality but not heterosexuality: a straight person could be married, divorced, widowed, or never married, while a person who engages exclusively in same-sex relationships would never be married. Second, while it remains to be seen what the courts' stance is on complex proxies consisting of multiple variables, it is likely that at least a limited set of complex proxies would be considered. Third, the link need not hold for all members of a protected group – it is sufficient if it holds for a subgroup (e.g., men aged 60 to 64, as in James v Eastleigh, or women who have attended a single-sex school). The technical problem formulation thus becomes: can we identify a (set of) variable(s) that are highly associated with (a subgroup of) a protected group?

4.1.1 Measuring Simple Proxy Capacity

The most straightforward way of measuring potential proxy capacity is via measures of statistical association. Depending on the type of data (numerical, ordinal, binary, categorical), different measures could be appropriate (e.g., Pearson correlation, Spearman's rank correlation, mutual information). Measures of association are widely known and easy to compute. As explained above, the inextricable link need not hold for all protected groups. Consequently, metrics that take into consideration all categories of a protected characteristic may not be able to capture an inextricable link. For example, consider the variables sexual orientation and marital status in a jurisdiction where same-sex couples cannot get married. Computing the association between these variables (e.g., via mutual information) could reveal an association, but the strength would be limited: while sexual orientation='gay or lesbian' would virtually exclusively co-occur with marital status='never married', the co-occurrence of sexual orientation='straight' and the different categories of marital status would not be deterministic. Considering categorical variables, we can learn more about the strength of a potential association by inspecting the contingency table of the two variables, which shows the frequencies of the co-occurrence of categories.

Example: proxy capacity in the Adult dataset. We illustrate a proxy capacity analysis on the UCI Adult dataset [6].
This dataset was derived from the US Census of 1994 and contains demographic information on 48,842 individuals. The Adult dataset is commonly used in (fairness-aware) machine learning benchmarks, where the associated prediction task is to predict whether an individual earns more or less than $50,000 per year. (We emphasise that, disconnected from a real-world use case, it is impossible to make conclusive claims regarding fairness or discrimination. For example, the protected status of characteristics differs across sectors, with the widest protection in employment. Similarly, measured proxy capacity can differ depending on which population the dataset was sampled from. A proxy for gender or race in one particular context may not be a proxy for a protected characteristic at a population level, or vice versa. For example, an insurer may have a particular type of customer that self-selects based on various factors, such as exposure to marketing and the policies offered. In this paper, we merely use the Adult dataset for illustrative purposes.)

The Adult dataset contains several variables that, depending on the sector in which a model trained on this dataset is applied, could fall within the material scope of EU law: sex, race, and age. As discussed in Section 3.1, protected characteristics such as race and gender are best viewed as multi-dimensional social constructs. Choosing one measurement over another can be more or less appropriate, depending on the context of an algorithmic decision-making system. In the case of the Adult dataset, responses for race are based on self-identification, and sex is worded to capture a person's biological sex (as opposed to gender). In the absence of a specific application, it is unclear whether these are appropriate measurements.

In this example, we test whether there are potential simple proxies for the sex and race variables in the dataset. As a measure of association, we use mutual information: an information-theoretic measure of dependence that quantifies the extent to which knowing one variable reduces the uncertainty regarding the value of another variable. Mutual information varies between 0 (the variables are independent) and 1 (the variables are interchangeable). None of the mutual information scores are close to 1, but the most likely proxy candidates are relationship as a proxy for sex (0.271) and native-country as a proxy for race (0.094) (Table 1). We can further explore these potential proxies using contingency tables. The relationship variable denotes the relationship the individual has to the householder. Considering sex and relationship, we observe a very clear pattern: for virtually all instances with relationship=Husband we have sex=Male and, vice versa, for virtually all instances with relationship=Wife we have sex=Female (Table 2). This analysis reveals that the relationship variable has a strong proxy capacity for sex. If, as we have suggested in Section 3.1, the Court uses a counterfactual notion of 'inextricable', a statistical proxy capacity measure must be accompanied by a plausible causal explanation. In the case of relationship and sex, there is a clear counterfactual connection: Husband and Wife are gendered definitions, implying that but for their sex, a person would be classified differently. As such, these categories are almost certainly inextricably linked to sex.

Considering the native-country variable, we can identify a similar pattern: native-country=Laos co-occurs exclusively with race=Asian-Pac-Islander (Table 3). However, in contrast to relationship, the court is less likely to accept this variable as an inextricable link. First, there is the problem of statistical significance: only 23 instances in the dataset have native-country=Laos. Second, a counterfactual explanation is missing: would native-country have been different but for race=Asian-Pac-Islander? In Jyske Finans [16], the CJEU made it clear that it cannot be presumed that all citizens of a country are of a single ethnic origin: "Ethnic origin cannot be determined on the basis of a single criterion but, on the contrary, is based on a whole number of factors, some objective and others subjective [...] As a consequence, a person's country of birth cannot, in itself, justify a general presumption that that person is a member of a given ethnic group such as to establish the existence of a direct or inextricable link between those two concepts. Furthermore, it cannot be presumed that each sovereign State has one, and only one, ethnic origin." [16, paras 19-21]. (Note that EU Council Directive 2000/43/EC of 29 June 2000 refers to "racial or ethnic origin" as a single concept [27].)
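As a rough illustration of how the scores in Table 1 below can be computed, the following Python sketch uses pandas and scikit-learn. It assumes a local copy of adult.data with the standard column layout; the exact mutual-information normalisation and preprocessing used for the reported numbers are not specified here.

```python
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

# Column names follow the standard adult.data layout (assumption).
cols = ["age", "workclass", "fnlwgt", "education", "education-num",
        "marital-status", "occupation", "relationship", "race", "sex",
        "capital-gain", "capital-loss", "hours-per-week",
        "native-country", "income"]
df = pd.read_csv("adult.data", names=cols, skipinitialspace=True)

# Score each categorical feature as a potential simple proxy for `sex`.
candidates = ["workclass", "education", "marital-status",
              "occupation", "relationship", "native-country"]
for col in candidates:
    score = normalized_mutual_info_score(df["sex"], df[col])
    print(f"{col:>16}: {score:.3f}")

# Inspect the most promising candidate with a contingency table.
print(pd.crosstab(df["sex"], df["relationship"]))
```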
Table 1: Mutual information between two potential protected characteristics (sex and race) and other categorical features in the Adult dataset.

        workclass  education  marital-status  occupation  relationship  native-country
sex     0.013      0.005      0.112           0.099       0.271         0.002
race    0.007      0.001      0.012           0.012       0.017         0.094

Table 2: A contingency table of the sex and relationship variables in the Adult dataset.

         Husband  Not-in-family  Other-relative  Own-child  Unmarried  Wife
Female   1        5870           689             3376       3928       2328
Male     19715    6713           817             4205       1197       3

Table 3: A contingency table of the race and native-country variables in the Adult dataset, including only the categories Laos and United-States.

                     Laos  United-States
Amer-Indian-Eskimo   0     452
Asian-Pac-Islander   23    429
Black                0     4269
Other                0     189
White                0     38493

4.1.2 Measuring Complex Proxy Capacity

While association measures can be used to flag simple potential proxies, they are less likely to identify complex proxies. Many association measures rely on assumptions, particularly regarding the linearity of the relationship between the proxy and the protected characteristic. However, even a relatively simple model such as a decision tree can capture non-linear relationships. Moreover, measures of association are usually only suitable for measuring the relationship between two variables. Complex machine learning models could (unintentionally) rely on complex proxies: a set of features that together are predictive of a protected characteristic. In some cases, we can work around this limitation by creating a new variable that represents the intersection of multiple features. For example, consider the above example of a school that transitioned from being single-sex to co-educational after a particular year. We could transform two features, school_attended and years_since_graduation, to devise a new variable attended_singlesex_school, which will be highly associated with sex (see the sketch below). However, it can be difficult to anticipate the correct transformation in advance – especially if the associations identified by the machine learning model become more complex and less intuitive.
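The hand-crafted transformation just described is a one-liner in pandas; the column names, the school name, and the cutoff year below are all hypothetical.

```python
import pandas as pd

CUTOFF_YEARS = 15  # hypothetical: the school became co-educational 15 years ago

def add_complex_proxy(df: pd.DataFrame) -> pd.DataFrame:
    # Neither feature alone pins down sex; their intersection can.
    df["attended_singlesex_school"] = (
        (df["school_attended"] == "St. Example's")  # hypothetical school name
        & (df["years_since_graduation"] > CUTOFF_YEARS)
    )
    return df
```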
One example would be geolocation data [38]. Various places in physical space may be inextricably linked with a protected characteristic; for instance, gender-segregated and disabled bathrooms, or places of religious worship. An algorithm for targeting adverts based on geolocation data, designed to optimise click-through rates, would naturally end up responding to any differences in responses based, e.g., on gender, religion, or pregnancy status where these correspond with location. While Google and Apple's location-based advertising networks only offer coarse-grained geo-targeting, other advertising technologies, including Beacons, enable targeting to the nearest metre – more than sufficient to distinguish between male and female bathrooms in the same building [49]. Assessing proxy capacity for such forms of high-dimensional data may require substantial auxiliary data, and be highly context-dependent.

An alternative measure of capacity that circumvents these limitations, at least to some extent, is to consider how well a protected variable can be predicted from a proxy feature set. A similar approach was proposed by Feldman et al. [31], who set out a test for disparate impact that measures the extent to which a protected variable can be predicted from other features in a data set. For example, we can build a decision tree that predicts sex=Female from school_attended and years_since_graduation. We can then use any appropriate measure of predictive performance as a measure of proxy capacity. If we use the same model class (linear regression, decision tree, random forest, neural network, etc.) for measuring capacity as the model that is deployed, the problem of the restrictive assumptions of statistical association measures mentioned above can be mitigated. In other words, the model used to check for proxy capacity should be capable of identifying whatever kinds of relationships – non-linear, or interactions between variables – may be present in the deployed model.
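A minimal sketch of this predictability-based capacity test on synthetic data for the single-sex-school example (all data and names are hypothetical, and a decision tree is assumed to be the deployed model class):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(1)
n = 4_000

# School "X" admitted only boys until 15 years ago, so the two features
# jointly proxy for sex even though neither does so on its own.
school = rng.choice(["X", "Y"], n)
years = rng.integers(0, 30, n)
male_only = (school == "X") & (years > 15)
sex = np.where(male_only, "Male", rng.choice(["Male", "Female"], n))

X = pd.get_dummies(pd.DataFrame({"school_attended": school,
                                 "years_since_graduation": years}))
X_tr, X_te, y_tr, y_te = train_test_split(X, sex, random_state=0)

# Use the same model class as the deployed model so the capacity test
# can represent the same kinds of (non-linear) relationships.
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print(balanced_accuracy_score(y_te, clf.predict(X_te)))
```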
4.1.3 Discovering Proxy Capacity

In addition to an adequate measurement of proxy capacity, our technical problem formulation also points to the need for a search procedure that allows us to identify potential proxies that score high on capacity. Depending on the context, there could be multiple protected characteristics and various potential proxy features. Moreover, our legal analysis has shown that a proxy does not need to have capacity for all members of a protected group: it is sufficient if the proxy has capacity for a subgroup within a protected group. This is a typical use case for local pattern mining approaches, such as subgroup discovery [37] and exceptional model mining [29]. These frameworks consist of several components, including the possible subgroup descriptors, the definition of a quality measure that defines the "interestingness" of a subgroup, and a search strategy. Considering potential proxy discovery, we could apply an exceptional model mining instance to search for subgroups in the dataset, defined by at least one protected characteristic, in which a potential proxy variable has a high capacity for a protected characteristic, possibly weighted by the coverage of the subgroup. As a complete search quickly becomes computationally expensive, heuristic search strategies such as beam search or evolutionary algorithms could be applied.

A challenge for computational analyses that attempt to discover potential proxy capacity is the problem of multiple comparisons. Considering multiple protected characteristics, multiple potential simple proxies, potential complex proxies, and subgroups within protected groups, the number of evaluations increases exponentially. As a result, there exists a substantial risk of a type I error: identifying potential proxy capacity in the data sample, due to chance, while the proxy does not have capacity in the population the sample was drawn from.

4.2 Measuring Proxy Use

The fact that a set of features could be a proxy for another feature does not directly imply that (1) the model uses them, and (2) the use results in the protected group being treated less favourably. For example, if the proxy variable is not predictive of the target variable, it is unlikely that the proxy will be used to make predictions. Additionally, if the input data contains other, more informative features, a model that is penalised for complexity (e.g. via L1-regularisation) may resort to these features over the potential proxy feature. Considering proxy use, the question we need to answer is: when does a (set of) variable(s) influence the predictions for a subgroup defined by at least one protected characteristic in such a way that we consider the subgroup to be treated less favourably?

Identifying a causal relationship between a proxy and predictions constitutes an interventional analysis, in which we test whether an intervention on the proxy affects the predictions. Considering simple proxies, an interventional analysis is relatively straightforward if one has access to an API of the machine learning model. For each instance that is suspected to be directly discriminated against, we can simply determine the effect of changing the proxy variable on the output of the predictive model. If the counterfactual results in a different score or even a different classification (e.g., getting hired), this could provide evidence of illegitimate proxy use. Considering continuous variables, such as age or income, the effect could be visualised using an individual conditional expectation (ICE) plot [34], in which the outcome of the model (e.g., predicted score) is plotted against the range of potential feature values. Determining whether the observed effect would be considered 'less favourable' remains highly contextual. Claimants are most likely to be successful if they can show that the use of the proxy resulted in a worse outcome for them. For example, the change in the predicted score may exceed the decision threshold such that the decision changes to a less favourable outcome, such as being denied a benefit (e.g., a job interview in resume selection) or being subjected to a burden (e.g., a more thorough manual inspection in fraud detection).
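For simple proxies, the interventional test described above amounts to setting the proxy to each value and counting decision changes; a generic sketch, where the fitted model, feature layout, and proxy values are assumptions:

```python
import numpy as np

def intervention_flip_rate(model, X, proxy_col, values=(0, 1)):
    """Set a binary proxy feature to each of its two values for every
    instance and report the fraction of model decisions that change --
    a simple interventional probe of proxy use. `model` is any fitted
    classifier exposing predict(); X is a 2-D feature array."""
    X0, X1 = X.copy(), X.copy()
    X0[:, proxy_col] = values[0]
    X1[:, proxy_col] = values[1]
    return float(np.mean(model.predict(X0) != model.predict(X1)))
```

Note that this flips the proxy in isolation; a causal analysis in the sense of Kilbertus et al. [41] would also propagate the intervention to the proxy's causal descendants, as in the resume sketch in Section 2.2.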
Interventional analyses become less straightforward when we consider complex proxies. For example, skin colour is strongly related to the colour of pixels in a photo. Specific combinations of subranges of pixel colour and position are therefore likely to have proxy capacity for a person's skin colour. However, this does not mean that the model uses that particular combination of pixels to make predictions. It is difficult to precisely determine the influence of pixels as a proxy of skin colour on the outcome of the model. Commonly used explanation techniques for deep neural networks, such as saliency maps [50], indicate which pixels, if altered slightly, would result in the largest change in predicted probability. A saliency map thus indicates the sensitivity of the prediction to the pixels in a person's face, but does not tell you whether that sensitivity is directly related to skin colour or perhaps to other aspects captured by the pixels that are unrelated to race. Instead, an interventional analysis of pixels as a proxy for skin colour would require a set of instances with differing 'skin colours' but otherwise identical features. In other words, to perform an interventional analysis, we must be able to specify which values of the complex proxy correspond to social categories. Such precise specifications are particularly difficult for unstructured data, though the difficulty will vary across applications. While meaningful counterfactuals of subtle indicators of an applicant's gender, such as writing style, are difficult to obtain, an interventional analysis would likely be able to identify resumes that are downgraded because they contain the word 'women's'. Apart from technical challenges, disentangling a complex proxy from other characteristics associated with protected group membership invites theoretical critiques similar to those of counterfactual tests of (non-proxy) discrimination.

5 DISCUSSION

Our work opens up several directions for future research. From a legal perspective, we limit ourselves to setting out the principles necessary to examine how the established proposition that algorithms can directly discriminate should be understood in practice. Our legal analysis adds two elements to the existing scholarship [7]. First, we propose that the assessment of inherent discrimination requires an examination of (i) the inextricably linked nature and (ii) the application of a criterion (or criteria) which is (or are) inextricably linked to a protected characteristic. This approach is widely accepted in EU discrimination law but has never been translated into a framework for technical analysis. Second, and more significantly, our legal analysis suggests that the courts require a deterministic relationship between a proxy and a protected characteristic in order to find that there is an inextricable link between them. Although this requirement emerges from an examination of the cases, it has never been explicitly spelled out in either the case law or (to our knowledge) the legal literature. Most scholarly work thus far instead (implicitly) assumes that the relationship is a statistical one, despite pointing out the unprincipled nature of that approach [26]. The apparent existence of a deterministic relationship between protected characteristics and inextricably linked proxies is a topic for further discussion in legal scholarship. On the technical side, we have only been able to touch upon some of the potential approaches to show proxy use and capacity. Future work is needed to further develop and test these measures and approaches. Finally, future work could focus on other types of direct discrimination, particularly subjective discrimination, in the algorithmic context." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.14716v1", |
| "title": "Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities", |
| "abstract": "Large language models (LLMs) can adapt to new tasks through in-context\nlearning (ICL) based on a few examples presented in dialogue history without\nany model parameter update. Despite such convenience, the performance of ICL\nheavily depends on the quality of the in-context examples presented, which\nmakes the in-context example selection approach a critical choice. This paper\nproposes a novel Bayesian in-Context example Selection method (ByCS) for ICL.\nExtending the inference probability conditioned on in-context examples based on\nBayes' theorem, ByCS focuses on the inverse inference conditioned on test\ninput. Following the assumption that accurate inverse inference probability\n(likelihood) will result in accurate inference probability (posterior),\nin-context examples are selected based on their inverse inference results.\nDiverse and extensive cross-tasking and cross-modality experiments are\nperformed with speech, text, and image examples. Experimental results show the\nefficacy and robustness of our ByCS method on various models, tasks and\nmodalities.", |
| "authors": "Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.CV", |
| "cs.SD", |
| "eess.AS" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Large language models (LLMs) can adapt to new tasks through in-context\nlearning (ICL) based on a few examples presented in dialogue history without\nany model parameter update. Despite such convenience, the performance of ICL\nheavily depends on the quality of the in-context examples presented, which\nmakes the in-context example selection approach a critical choice. This paper\nproposes a novel Bayesian in-Context example Selection method (ByCS) for ICL.\nExtending the inference probability conditioned on in-context examples based on\nBayes' theorem, ByCS focuses on the inverse inference conditioned on test\ninput. Following the assumption that accurate inverse inference probability\n(likelihood) will result in accurate inference probability (posterior),\nin-context examples are selected based on their inverse inference results.\nDiverse and extensive cross-tasking and cross-modality experiments are\nperformed with speech, text, and image examples. Experimental results show the\nefficacy and robustness of our ByCS method on various models, tasks and\nmodalities.", |
| "main_content": "Introduction Large language models (LLMs) (Touvron et al., 2023b; OpenAI, 2023a) have achieved great success on many text-based natural language processing (NLP) tasks. By connecting with extra visual and audio encoders (Sun et al., 2023b; Radford et al., 2023), the resulting multimodal LLMs can also achieve remarkable performance on imagetext and audio-text tasks (Li et al., 2023; OpenAI, 2023b; Tang et al., 2023). With the ability of incontext learning (ICL) (Brown et al., 2020), LLMs can adapt to new tasks easily and efficiently in a training-free manner, to generate output following the prompting paradigm based on a few input-label pairs pre-pended to the test input. The existence of ICL ability has also been verified on image-text and audio-text tasks (Tsimpoukelli et al., 2021; Wang et al., 2023c; Hsu et al., 2023; Pan et al., 2023). (i) Random Selected Example(s) (ii) Inverse Inference (iii) Bayesian Selected Example(s) text similarity score-based reranking estimated probabilities datastore (few-shot with k samples) (k samples in-context learning) Figure 1: A brief illustration of the proposed Bayesian in-context example selection includes: (i) first randomly selecting k examples; (ii) examining the examples in the datastore through \u201cinverse inference,\u201d where the test input-label pair serves as the in-context example; and (iii) selecting samples with correct label predictions as good examples (colored in blue), considered to have high mutual information interaction with the test input. Although ICL requires no gradient descent and thus does not suffer from the instability caused by stochastic optimisation compared to other testtime adaptation approaches, care still needs to be taken when selecting the in-context examples since they often lead to distinct ICL performance variations (Zhao et al., 2021; Min et al., 2022; Lu et al., 2022b). Prior work on in-context example selection trains an example retrieval module (Rubin et al., 2022; Zhang et al., 2022; Lu et al., 2022a; Wang et al., 2023b), selects close examples in embedding space (Liu et al., 2022; An et al., 2023; Qin et al., 2023), or leverages the feedback of LLMs to score the examples (Su et al., 2022; Nguyen and Wong, 2023; Iter et al., 2023; Mavromatis et al., 2023). While boosting ICL performance, most methods treat in-context examples and test input separately, overlooking their mutual interactions. This paper proposes ByCS (Bayesian in-Context example Selection), a novel in-context example selection approach focusing on mutual information interactions based on the Bayesian formula. Refer to the inference of test input conditioned on in-context examples as ICL inference, and the inference of in-context example\u2019s input based on the test input-label pair as the inverse inference. arXiv:2404.14716v1 [cs.CL] 23 Apr 2024 \fBy introducing inverse inference via Bayes\u2019 theorem, ByCS leverages the inverse inference result to evaluate the quality of each in-context example. Assuming the contextual information interaction is mutual, an accurate inverse inference is likely to result in an accurate inference. Examples with accurate inverse inference results are selected as optimal examples. Extensive experiments across audio, image, and text modalities are conducted to verify the effectiveness and robustness of ByCS, such as ASR, visual question answering (VQA), as well as NLP tasks (including topic classification, sentiment analysis, and text-to-SQL etc). 
Our main contributions are summarised as follows:

• ByCS, a novel in-context example selection method inspired by Bayes' theorem, is proposed. To improve efficiency, the use of a smaller model for fast inverse inference and a ranking-based pre-selection that reduces the number of candidate in-context examples are also proposed in this paper.

• The method is verified using both "decoder-only" ICL on NLP tasks and "encoder-decoder" ICL on ASR and VQA. To the best of our knowledge, this is the first in-context example selection method verified across text, audio, and visual modalities, as shown in Figure 2.

2 Related Work

Multimodal ICL. Inspired by decoder-only ICL in text-based NLP, efforts have been made to extend such few-shot learning ability to other modalities, in particular image and audio. Frozen (Tsimpoukelli et al., 2021) is the first attempt to exploit ICL ability in a vision-language model (VLM). By using a vision encoder to map the input image to textual tokens in the input embedding space of a frozen text language model, Frozen can handle interleaved image and text input and achieve image-text ICL. Other work manages to improve VLMs' ICL ability by using adapter blocks (Eichenberg et al., 2022), adding blockwise modality fusion structures (Alayrac et al., 2022), and scaling up the model size (Sun et al., 2023a). In the audio modality, Borsos et al. (2023) proposed AudioLM, a language model based on quantised audio tokens for audio generation tasks, which exhibits ICL ability for audio continuation. Similarly, Wang et al. (2023a) proposed VALL-E, a controllable text-to-speech synthesis system with ICL ability based on audio and text prompts. Wang et al. (2023c) presented the first ICL work for ASR based on paired speech-text examples, which adapted the Whisper (Radford et al., 2023) model to achieve considerable word error rate (WER) reductions on unseen Chinese dialects. Further explorations enabled recent speech-language models to perform ICL on more speech input tasks through warmup training (Hsu et al., 2023) or speech instruction-tuning (Pan et al., 2023).

[Figure 2: Multimodal ICL across (a) text ICL, (b) ASR ICL, and (c) VQA ICL. Although ICL on different modalities shares the same formula expression, the actual inputs and inference model architectures differ. For ASR ICL on Whisper, the speech is fed into the encoder while the text example labels are fed into the decoder, which is aware of the speech input through cross-attention with the encoder. For VQA ICL, images are first encoded to the same embedding space as the LM's input, then interleaved images and texts are fed into the decoder LM.]

In-Context Example Selection Methods. Rubin et al.
(2022) proposed a scoring LM to retrieve in-context examples using contrastive learning, which can also be trained with reinforcement learning algorithms, such as Q-learning (Zhang et al., 2022) and policy gradient (Lu et al., 2022a). Alternatively, examples that are semantically similar to the test input can be selected. Liu et al. (2022) proposed to select the k nearest neighbours (kNN) in the embedding space of the examples. When combining with chain-of-thought (Wei et al., 2022), Qin et al. (2023) proposed to select examples in the embedding space of the reasoning path. LLM feedback is often used in in-context example selection. Iter et al. (2023) selected in-context examples with cross-entropy differences of the fine-tuned model, based on the assumption that ICL may act as implicit gradient descent (Dai et al., 2022). Nguyen and Wong (2023) identified highly impactful examples according to their proposed influence score. Although ByCS also uses LLM feedback when evaluating the quality of in-context examples through inverse inference, it leverages the text similarity between the inverse inference results and the corresponding ground-truth labels, with no need for complete output probability distributions, which are often not available for commercial LLMs. Wang et al. (2023d) selected optimal in-context examples in a Bayesian framework by viewing LLMs as latent variable models and ICL as latent concept learning. In comparison, ByCS directly extends the ICL inference probability using Bayes' theorem. Xu and Zhang (2024) selected examples with a high discrepancy between the labels and the LLM's outputs when performing question answering. ByCS also selects examples from candidates in a datastore based on the LLM's outputs, but computes the mutual information interactions between the in-context examples and the test input.

[Figure 3: The detailed pipeline of the ByCS method: first, conduct the first-round inference to estimate the label of the test input; then, perform inverse inference on each example in the datastore, where the test input and the estimated label serve as the in-context example; finally, rank in-context examples by the text similarity between the inverse inference result and the true context label. Examples with high similarity scores are selected due to their high mutual information interaction.]

3 Methodology

As shown in Figure 3, given a test input X and paired in-context examples (C_input, C_label), LLMs predict the most probable answer Ŷ by maximising the inference probability P(Y | C_input, C_label, X):

Ŷ = arg max_Y P(Y | C_input, C_label, X), (1)

where C_input and C_label are the inputs and labels of different data types in different tasks.
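As a concrete (hypothetical) instantiation of Eqn. (1) for decoder-only text ICL, the in-context examples and test input are simply concatenated into one prompt; the Q/A template below is illustrative, not the paper's exact format, and the test input is made up.

```python
def build_icl_prompt(examples, test_input):
    """Concatenate (C_input, C_label) pairs and the test input X into a
    single decoder-only ICL prompt (illustrative Q/A template)."""
    parts = [f"Q: {c_input}\nA: {c_label}" for c_input, c_label in examples]
    parts.append(f"Q: {test_input}\nA:")
    return "\n\n".join(parts)

# Example: two in-context examples (from Figure 2a) plus a test input.
print(build_icl_prompt(
    [("Albert Einstein was", "German."), ("Marie Curie was", "Polish.")],
    "Isaac Newton was",
))
```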
Regarding text-based NLP tasks, C_input and C_label are text questions and the corresponding answers. Regarding ASR, C_input and C_label are speech audio and the corresponding text transcriptions. Regarding VQA, C_input are images together with text questions about those images, and C_label are the text answers. The inference probability can be extended using Bayes' theorem:

P(Y | C_input, C_label, X) = P(C_label | X, Y, C_input) · P(Y | X, C_input) / P(C_label | X, C_input). (2)

The likelihood P(C_label | X, Y, C_input) is termed the inverse inference probability, since it can be interpreted as the probability of the context label C_label when the test input-label pair (X, Y) is inversely treated as the in-context example. ByCS focuses on the inverse inference probability and assumes, for simplification, that the influence of the prior P(Y | X, C_input) is subordinate. In practice, since the ground-truth label Y_ref of the test input X is not available, the correct likelihood P(C_label | X, Y_ref, C_input) is approximated by P(C_label | X, Ŷ, C_input), where Ŷ is produced by a first-round inference. Specifically:

• First, the first-round inference is performed to produce a hypothesised label Ŷ for the test input X. This can be done without any in-context examples, via Ŷ = arg max P(Y | X). Better performance is achieved when the hypothesised label is obtained with in-context examples, via Ŷ = arg max P(Y | C̃_input, C̃_label, X) based on Eqn. (1), where (C̃_input, C̃_label) is a pair of first-round in-context examples selected either randomly or using another example selection method.

• Next, for each candidate in-context example in the datastore, the inverse inference result Ĉ_label is generated from the approximated inverse inference probability: Ĉ_label = arg max P(C_label | X, Ŷ, C_input).

• Last, compute Q = Similarity(C_label, Ĉ_label), the text similarity between C_label and Ĉ_label, and use Q as the metric for evaluating the quality of the inverse inference. Since a more accurate inverse inference probability often results in higher text similarity, ByCS selects the in-context examples with the highest Q. Note that Q is adopted because it does not require access to the model's output probability distribution, which is often unavailable for commercial LLMs.

To reduce the computation cost of inverse inference, two methods are used when the number of examples in the datastore is large:

• Conduct inverse inference using a model from the same model family as the inference model but with a smaller size.

• Apply ByCS to a small number N of pre-selected candidate examples. In pre-selection, all examples in the datastore are first ranked, and only the top N examples are retained as candidates. The pre-selection is performed using fast ranking-based algorithms such as kNN.
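The three steps above can be summarised in a short Python sketch. It assumes a generic infer(context_examples, x) wrapper around the LLM, and SequenceMatcher stands in for the text-similarity metric, whose exact choice is not specified here.

```python
from difflib import SequenceMatcher

def bycs_select(test_input, datastore, infer, k=1, n_pre=10):
    """Sketch of ByCS example selection. `datastore` holds
    (c_input, c_label) pairs; `infer(examples, x)` returns the model's
    decoded output for input x given in-context examples."""
    # Step 1: first-round inference to obtain a hypothesised label Y_hat
    # (zero-shot here; the paper reports better results when the first
    # round itself uses in-context examples).
    y_hat = infer([], test_input)

    # A ranking-based pre-selection (e.g. kNN in embedding space) is
    # assumed to have already narrowed the datastore to n_pre candidates.
    scored = []
    for c_input, c_label in datastore[:n_pre]:
        # Step 2: inverse inference -- the test pair becomes the
        # in-context example and the candidate's input is decoded.
        c_label_hat = infer([(test_input, y_hat)], c_input)
        # Step 3: score by text similarity to the true candidate label.
        q = SequenceMatcher(None, c_label, c_label_hat).ratio()
        scored.append((q, c_input, c_label))

    scored.sort(key=lambda t: t[0], reverse=True)
    return [(ci, cl) for _, ci, cl in scored[:k]]
```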
4 Experimental Setup 4.1 Models Experiments are performed on audio, text, and image modalities. For audio-text and image-text tasks, ASR and VQA are used to evaluate the ICL ability of encoder-decoder structured models. For text-only NLP tasks, topic classification, sentiment analysis, and text-to-SQL are used to evaluate the ICL performance of decoder-only models. Regarding the NLP tasks, experiments are conducted using GPT-3.5-Turbo and GPT-4 (OpenAI, 2023a). For the ASR task, the open-sourced Whisper model (Radford et al., 2023) is used, which is a series of speech models released by OpenAI. The Whisper model family uses a vanilla encoder-decoder Transformer (Vaswani et al., 2017) architecture, ranging from 39 million (M) parameters (tiny) to 1.55 billion (B) parameters (large). Specifically, the Whisper small (244M) and Whisper large-v2/-v3 (1.55B) models are used. For the VQA task, experiments are performed on Emu2 (Sun et al., 2023a) and GPT-4V (OpenAI, 2023b). Emu2 is a 37B vision-language model (VLM) which leverages the pretrained EVA-02-CLIP-E-plus (Sun et al., 2023b) and LLAMA-33B (Touvron et al., 2023a), and which has ICL ability when taking interleaved inputs of images and texts. For experiments on Emu2, the outputs are generated with a greedy decoding setting for fast evaluation. GPT-4V is a GPT-4 variant that can directly perceive image inputs, showing state-of-the-art image understanding performance. 4.2 Datasets Seven datasets covering NLP, ASR and VQA are used in this paper. For text-only ICL, four datasets are used across four task categories: the TREC dataset for topic classification (Voorhees and Tice, 2000), the SST2 dataset for sentiment analysis (Socher et al., 2013), the Spider dataset for text-to-SQL (Yu et al., 2018), and the CHiME4 (Vincent et al., 2017) split of the HyPoradise dataset (Chen et al., 2023) for generative language model re-scoring to correct pre-generated ASR transcriptions. For audio-text ICL, two datasets are used for ASR tasks, namely RASC863 (ChineseLDC.org, 2004) and CORAAL (Gunter et al., 2021). RASC863 is a commonly used Chinese dialect ASR dataset, and its dialectal-word splits of the Chongqing and Guangzhou dialects are used. CORAAL is an English corpus with speech recordings from regional African Americans. For image-text ICL, VQA experiments are conducted on OKVQA (Marino et al., 2019), a dataset that requires methods to draw upon external knowledge to answer the visual questions. 4.3 Baselines On all three modalities, random selection and improved KATE (Liu et al., 2022) are used as baseline approaches. For random selection, in-context examples are uniformly selected from the example datastore three times and the average results are reported. For KATE (Liu et al., 2022), the k neighbours nearest to the test input in the embedding space in terms of Euclidean distance are selected. For ASR ICL, the encoder of Whisper large-v2 acts as the embedding retrieval module on the Chinese dataset, while on the English dataset we use the encoder of Whisper large-v3. In text ICL, OpenAI text-embedding-ada-002 is used as the embedding retrieval model. For VQA ICL, KATE is based only on the embedding space of the query image, and EVA02-CLIP-bigE-14-plus (Sun et al., 2023b) serves as the embedding retrieval module. We use the term "KATE+" for this baseline in our paper, stressing that it is an improved KATE version enhanced with stronger embedding retrieval models, which results in better performance. For text ICL, bm25 (Robertson et al., 1995) and LLM-R (Wang et al., 2023b) are also compared as baselines. bm25 is a ranking metric originally designed for search engines to estimate the relevance of documents to a given query based on word-overlapping similarity. LLM-R is a recent and performant dense retriever distilled using a reward model trained on LLM feedback.

Table 1: %WERs on the RASC863 dialectal word dataset and CORAAL with different in-context example selection methods. For RASC863, the example datastore is the RASC863 dialectal word dataset of the corresponding dialect. For CORAAL, the size of the example datastore for ByCS is narrowed down to 10 using the kNN algorithm. In the "oracle ByCS" setting, the ground-truth label Y_ref is used in the inverse inference.
(a) Results with Whisper-large-v2
| Setting | Chongqing k=1 | k=2 | k=3 | k=4 | Guangzhou k=1 | k=2 | k=3 | k=4 | CORAAL <15s k=1 |
|---|---|---|---|---|---|---|---|---|---|
| random | 67.1 | 56.1 | 52.7 | 51.0 | 61.7 | 38.3 | 31.2 | 28.8 | 12.4 |
| KATE+ | 67.1 | 54.7 | 51.3 | 49.7 | 61.3 | 36.1 | 26.9 | 24.8 | 12.0 |
| ByCS | 62.4 | 53.4 | 50.6 | 48.6 | 49.5 | 31.9 | 27.1 | 26.6 | 11.7 |
| oracle ByCS | 62.4 | 52.4 | 49.5 | 47.2 | 49.4 | 30.7 | 25.8 | 24.7 | 11.7 |
(b) Results with Whisper-large-v3
| Setting | Chongqing k=1 | k=2 | k=3 | k=4 | Guangzhou k=1 | k=2 | k=3 | k=4 | CORAAL <15s k=1 |
|---|---|---|---|---|---|---|---|---|---|
| random | 68.9 | 60.3 | 57.0 | 55.7 | 67.1 | 42.8 | 38.3 | 35.2 | 11.6 |
| KATE+ | 68.1 | 58.2 | 54.8 | 54.1 | 67.7 | 41.3 | 34.3 | 31.6 | 11.4 |
| ByCS | 63.5 | 56.3 | 53.5 | 51.8 | 50.7 | 36.7 | 33.0 | 31.5 | 11.3 |
| oracle ByCS | 63.4 | 55.2 | 53.0 | 50.7 | 51.3 | 35.6 | 31.9 | 30.7 | 11.2 |

5 Results 5.1 ASR ICL Results in WER are reported for the ASR tasks in Table 1; here, Chinese WER is calculated based on Chinese characters, which is also termed character error rate. The ByCS method outperforms the KATE+ baseline in most cases, showing the robustness and effectiveness of our method. When the number of in-context examples k is small, ByCS surpasses the KATE+ baseline by a large margin, with a 10.25% relative WER reduction on average when k = 1. This performance advantage of ByCS shrinks as the number of in-context examples increases, which may be attributed to the fact that ByCS performs the inverse inference of each in-context example individually, applying an independence assumption that ignores the contextual interactions between different in-context examples. The use of Y_ref in "oracle ByCS" further boosts the performance gain, indicating the upper bound of our method for the same number of examples k. 5.2 Ablation study on ASR ICL 5.2.1 Inverse decoding option The influence of different decoding options for inverse inference is studied on the RASC863 dialectal word dataset. The results are shown in Table 2. For the setting notation, "noprompt" denotes decoding with the default decoding option, and "prompt" means decoding with a specially designed prompt "识别方言" (meaning "recognize dialect speech"). "LID" denotes decoding with the correct language identity of Chinese ("zh"). The results show that among the three inverse decoding options, "noprompt" obtains the best performance, "prompt" comes second, and "LID" is the worst. The WERs of inverse inference are reported in Table 3. The WERs under the "noprompt" setting are more than 100% due to the high insertion error rate. Repeated outputs are not removed when calculating the WERs of inverse inference or when calculating the text similarity, making a more obvious distinction between the examples with high mutual information interaction and those with low. Although it may be somewhat counter-intuitive that low inverse inference accuracy results in high ByCS selection performance, it is reasonable, since inverse inference in ByCS serves to separate good in-context examples from the rest, which can be better achieved by using worse decoding options during inverse inference: worse decoding options make the model produce more mistakes for worse in-context examples.
Table 2: %WERs of Whisper large-v2 on the RASC863 dialectal word dataset using the ByCS method with different inverse decoding options and text similarity measurements. The number of in-context examples is k = 1.
| Text similarity measurement | Inverse decoding option | RASC863 Chongqing | RASC863 Guangzhou |
|---|---|---|---|
| Jaccard coefficient | noprompt | 62.4 | 49.5 |
| Jaccard coefficient | prompt | 62.9 | 50.7 |
| Jaccard coefficient | LID | 64.1 | 52.3 |
| BERT wordvecs | noprompt | 62.4 | 51.5 |
| BERT wordvecs | prompt | 63.5 | 56.8 |
| BERT wordvecs | LID | 64.5 | 57.7 |

Table 3: Inverse inference %WERs of Whisper large-v2 on the RASC863 dialectal word dataset with different inverse decoding options.
| Inverse decoding option | RASC863 Chongqing | RASC863 Guangzhou |
|---|---|---|
| noprompt | 91.5 | 125.2 |
| prompt | 70.2 | 70.1 |
| LID | 54.6 | 61.7 |

5.2.2 Text similarity measurement The results of ByCS with different text similarity measurements are also reported in Table 2. For the setting notation, the "Jaccard coefficient" is a commonly used statistic to gauge similarity, defined as the intersection over the union of the token sets of two sentences. "BERT wordvecs" measures similarity based on the Euclidean distance in the embedding space of BERT-encoded word vectors; the embedding retrieval module is bert-base-chinese (https://huggingface.co/bert-base-chinese). ByCS with the Jaccard coefficient as the text similarity has lower WERs, which may be because the training data of the BERT model does not include sufficient dialectal Chinese words and expressions. It also indicates that ByCS can work well with even a simple rule-based text similarity measurement, further verifying its high robustness. The Jaccard coefficient is used as the text similarity measurement in later experiments unless explicitly specified, due to its performance and simplicity.
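As a concrete reference, here is a minimal sketch of the Jaccard-coefficient text similarity as defined above, computed over whitespace-separated tokens; for Chinese, character-level tokens would be the natural assumption:

```python
def jaccard_similarity(ref: str, hyp: str) -> float:
    """Jaccard coefficient: |intersection| / |union| of the two token sets."""
    a, b = set(ref.split()), set(hyp.split())
    if not a and not b:
        return 1.0  # two empty sentences are trivially identical
    return len(a & b) / len(a | b)

# Example: score an inverse-inference output against the true context label.
q = jaccard_similarity("they are no longer diabetic", "they are not diabetic")
```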
5.2.3 Inverse inference model Inverse inference with different models is also investigated, with the results displayed in Table 4. A smaller model is used for inverse inference to speed up ByCS, since it is expensive to perform inverse inference with the inference model for every candidate example in the datastore. Replacing Whisper-large-v2/v3 with Whisper-small speeds up inverse inference roughly six times (https://github.com/openai/whisper). For the notation, the subscript denotes the inverse inference model; for example, ByCS_small is the ByCS method with Whisper small as the inverse inference model.

Table 4: %WERs on the RASC863 Chongqing dialectal word dataset for ByCS with different inverse inference models. ByCS_largev2/ByCS_largev3 and ByCS_small use Whisper-large-v2/-v3 and Whisper-small as the inverse inference model, respectively.
(a) Results with Whisper large-v2
| Setting | k=1 | k=2 | k=3 | k=4 |
|---|---|---|---|---|
| KATE+ | 67.1 | 54.7 | 51.3 | 49.7 |
| ByCS_largev2 | 62.4 | 53.4 | 50.6 | 48.6 |
| ByCS_small | 64.2 | 53.3 | 50.5 | 48.7 |
(b) Results with Whisper large-v3
| Setting | k=1 | k=2 | k=3 | k=4 |
|---|---|---|---|---|
| KATE+ | 68.1 | 58.2 | 54.8 | 54.1 |
| ByCS_largev3 | 63.5 | 56.3 | 53.5 | 51.8 |
| ByCS_small | 64.4 | 56.5 | 54.1 | 51.7 |

ByCS_small has results similar to ByCS_largev2 and ByCS_largev3, verifying the effectiveness of using a smaller model from the same family for inverse inference. This is intuitive, since Whisper-small is trained with the same data and settings as the inference models Whisper-large-v2 and Whisper-large-v3; it therefore processes information similarly and can serve as a good alternative when evaluating the quality of the in-context examples. The smaller size of Whisper-small makes ByCS a more practical method in cost-sensitive scenarios. 5.3 Text ICL Text-only ICL results are shown in Table 5. As shown, ByCS outperforms all baselines in most dataset settings, demonstrating not only the effectiveness but also the robustness of ByCS. In particular, ByCS outperforms the best baseline on the generative ASR re-scoring dataset HyPoradise with a considerable 4.7% relative WER reduction with GPT-3.5-Turbo.

Table 5: Results of four text ICL tasks on two GPT-family models with different in-context example selection methods. TREC, SST2 and Spider report %Acc. (↑); HyPoradise CHiME-4 reports %WER (↓). The example datastore is narrowed down to a small size using kNN for ByCS. In the "default" setting, the answers are generated directly from the questions without ICL.
(a) Results using GPT-3.5-Turbo
| Setting | TREC k=1 | TREC k=2 | TREC k=4 | SST2 k=1 | SST2 k=2 | Spider k=1 | HyP. k=1 | HyP. k=2 | HyP. k=5 |
|---|---|---|---|---|---|---|---|---|---|
| default | 63.0 | – | – | 92.92 | – | 67.41 | 8.0 | – | – |
| random | 63.5 | 72.7 | 75.3 | 94.96 | 94.80 | 67.02 | 7.5 | 7.5 | 7.3 |
| KATE+ | 78.8 | 86.4 | 91.0 | 95.05 | 94.69 | 69.44 | 7.7 | 7.1 | 6.8 |
| bm25 | 74.6 | 89.4 | 89.8 | 95.27 | 95.40 | 67.41 | 7.4 | 7.5 | 8.1 |
| LLM-R | 78.0 | 88.8 | 90.4 | 95.05 | 94.02 | 67.82 | 7.4 | 6.9 | 7.0 |
| ByCS | 81.2 | 88.0 | 90.6 | 95.16 | 95.04 | 69.63 | 7.1 | 6.8 | 6.4 |
(b) Results using GPT-4
| Setting | TREC k=1 | TREC k=2 | TREC k=4 | SST2 k=1 | SST2 k=2 | Spider k=1 | HyP. k=1 | HyP. k=2 | HyP. k=5 |
|---|---|---|---|---|---|---|---|---|---|
| default | 75.2 | – | – | 95.01 | – | 69.63 | 11.6 | – | – |
| random | 81.3 | 82.5 | 84.6 | 96.38 | 96.11 | 70.66 | 6.9 | 6.8 | 6.5 |
| KATE+ | 88.2 | 91.6 | 93.4 | 96.43 | 95.85 | 71.95 | 7.0 | 6.3 | 5.8 |
| bm25 | 81.8 | 87.4 | 91.4 | 96.19 | 96.09 | 71.47 | 6.8 | 6.6 | 6.3 |
| LLM-R | 88.2 | 91.0 | 93.6 | 95.74 | 95.06 | 72.63 | 6.8 | 6.3 | 5.9 |
| ByCS | 88.6 | 92.4 | 93.6 | 96.55 | 96.31 | 72.82 | 6.7 | 6.3 | 5.9 |

On the TREC and SST2 datasets, ByCS does not always outperform the baselines. This indicates that ByCS, due to its reliance on text similarity, is more suitable for open-ended long-answer datasets, where answers are much more diverse and examples with rich information interactions can be better separated. In contrast, multi-choice classification datasets often offer only a few short answers, containing little contextual information. As the example in Figure 4 shows, the distribution of the text similarity scores used for ranking the examples is then often sharp, merging the optimal and the suboptimal examples. Furthermore, considering the hypothesized labels of the test inputs for inverse inference, the hypothesized answers in open-ended datasets (in the form of long sentences) are often more similar to their corresponding references than those in the multi-choice classification datasets (in the form of a word, a phrase, or just the index of a choice). It is observed that different in-context example selection methods perform differently with different models, even on the same dataset. The bm25 method outperforms the KATE+ method with GPT-3.5-Turbo on the SST2 dataset, but not with GPT-4. Compared to KATE+ and bm25, which are model-free in the actual selection step, the performance advantage of ByCS is more consistent, since it takes the influence of the model into account. The outputs of the inverse inference model are used, which can serve as a good approximation of the inference model, as verified in Section 5.2.3. Note that for ByCS on GPT-4, although the inverse inference procedure is conducted with GPT-3.5-Turbo, the performance of ByCS is still superior. This further verifies that smaller models from the same model family can serve as a good low-cost approximation for the inverse inference model.
[Figure 4: The distribution of text similarity scores on different datasets: (a) SST2, (b) HyPoradise. The text similarity score is the Jaccard coefficient. The entropy of each distribution is shown at the upper left. The distribution on the multi-choice classification dataset SST2 (blue) is much sharper than that on the open-ended dataset HyPoradise (red).] 5.4 VQA ICL ByCS is tested on VQA ICL, and the results are reported in Table 6. ByCS outperforms the KATE+ baseline on the VQA ICL task, demonstrating strong performance across modalities. The performance improvement from ByCS is not as obvious as on the audio and text tasks, since the answers in VQA are usually short (typically a word or phrase), lacking sufficient contextual information.

Table 6: Results of VQA ICL with different in-context example selection methods and numbers of examples on the OKVQA dataset.
(a) Results with Emu-2
| In-context example number | KATE+ | ByCS |
|---|---|---|
| k = 2 | 40.47 | 40.12 |
| k = 4 | 45.11 | 45.14 |
(b) Results with GPT-4V
| In-context example number | KATE+ | ByCS |
|---|---|---|
| k = 2 | 52.54 | 52.86 |
| k = 4 | 54.00 | 54.39 |

ByCS on the VQA dataset suffers from the problem of sharp text similarity score distributions, similar to the multi-choice classification datasets. For ByCS with GPT-4V, inverse inference results from Emu-2 are used to pre-select the candidate examples, and ByCS still outperforms the KATE+ baseline. The performance might be further improved if GPT-4V were also used for inverse inference. This demonstrates that ICL may perform similarly across models not only on speech and text, but also on images." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13855v1", |
| "title": "Understanding the role of FFNs in driving multilingual behaviour in LLMs", |
| "abstract": "Multilingualism in Large Language Models (LLMs) is an yet under-explored\narea. In this paper, we conduct an in-depth analysis of the multilingual\ncapabilities of a family of a Large Language Model, examining its architecture,\nactivation patterns, and processing mechanisms across languages. We introduce\nnovel metrics to probe the model's multilingual behaviour at different layers\nand shed light on the impact of architectural choices on multilingual\nprocessing.\n Our findings reveal different patterns of multilinugal processing in the\nsublayers of Feed-Forward Networks of the models. Furthermore, we uncover the\nphenomenon of \"over-layerization\" in certain model configurations, where\nincreasing layer depth without corresponding adjustments to other parameters\nmay degrade model performance. Through comparisons within and across languages,\nwe demonstrate the interplay between model architecture, layer depth, and\nmultilingual processing capabilities of LLMs trained on multiple languages.", |
| "authors": "Sunit Bhattacharya, Ond\u0159ej Bojar", |
| "published": "2024-04-22", |
| "updated": "2024-04-22", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Multilingualism in Large Language Models (LLMs) is an yet under-explored\narea. In this paper, we conduct an in-depth analysis of the multilingual\ncapabilities of a family of a Large Language Model, examining its architecture,\nactivation patterns, and processing mechanisms across languages. We introduce\nnovel metrics to probe the model's multilingual behaviour at different layers\nand shed light on the impact of architectural choices on multilingual\nprocessing.\n Our findings reveal different patterns of multilinugal processing in the\nsublayers of Feed-Forward Networks of the models. Furthermore, we uncover the\nphenomenon of \"over-layerization\" in certain model configurations, where\nincreasing layer depth without corresponding adjustments to other parameters\nmay degrade model performance. Through comparisons within and across languages,\nwe demonstrate the interplay between model architecture, layer depth, and\nmultilingual processing capabilities of LLMs trained on multiple languages.", |
| "main_content": "Introduction Large Language Models (LLMs) are consistently getting better at multilingual NLP, e.g. at machine translation (MT) or performing cross-lingual tasks. But is this enough to conclude that these models are learning representations or patterns that generalise across multiple languages? Such an expectation is an old one, dating back to the proposal of \u201cuniversals of language\u201d (Greenberg, 1966). Previous work (see Section 2) with word embeddings and intermediate representations of layers learnt by auto-encoding models such as BERT (Rogers et al., 2021) has shown some degree of alignment in internal representation among languages. However, similar analysis of decoder-only Transformer models and especially their Feed-Forward Network (FFN) layers is so far insufficient. In this work, we focus on the FFNs of large autoregressive language models. FFNs are characterised by their layer-by-layer processing of input representations and make up the bulk of the total parameters of all LLMs. They process individual tokens in parallel, with all cross-token information transfer happening in the self-attention layers. Still, despite their seemingly straightforward structure, their function in Transformer models has not been fully understood yet. Although recent work has explored how the FFNs process layerwise intermediate representations leading to the next word prediction in LLMs, there are still many unanswered questions. Multilingualism (how different languages are represented inside the language models) in FFNs is one such topic. Add and Norm Combinator (Synthesis) GELU (Selection) Detector (Conceptualization) Add and Norm Feed Forward Net Multihead Self Attention Input representation Output representation Feed-Forward Network Figure 1: Transformer block and the structure of FFN In this work, we build upon the view of the FFNs introduced by Bhattacharya and Bojar (2023). We consider the lower sublayer of the FFN as detectors and the higher sublayer as combinators (Figure 1) (Figure 1). Over the duration of training, the detectors at the different layers start getting triggered by specific patterns in the input data (Geva et al., 2021). The activation function \u201cselects\u201d the important aspects (selection) and the combinators \u201ccombine\u201d them to emit an output which can be interpreted as a prediction of the planned next token for that layer (Belrose et al., 2023; Geva et al., 2022; Dar et al., 2023). In fact, the final prediction in the Transformer models might result from a process of \u201cincremental prediction\u201d (based on the idea of iterative inference in Jastrz\u02db ebski et al., 2017) through the layers. Hence, during training, the model is trained to slowly convert the input representations close to the expected output, one layer (processing step) at a time. Thus, the arXiv:2404.13855v1 [cs.CL] 22 Apr 2024 \fintermediate activations during inference can reveal how input representations evolve within the model, mapping input prefixes to predicted output. In this paper: \u2022 Refine the analysis method of Bhattacharya and Bojar (2023) and propose a novel measure called activation flatness to identify and study activity patterns corresponding to multiple languages in LLMs. \u2022 Describe how this activity varies across model layers and across model sizes. \u2022 Identify distinct zones of multilingual and language-specific sub-layers in the FFNs. 
We study the working of detectors and combinators across the layers of four XGLM models (Lin et al., 2022) (564M, 1.7B, 2.9B, 7.5B), which are freely available, share the same architecture (decoder-only Transformer), and are trained on the same data comprising 500B tokens from 30 diverse languages. We select this particular model family because the training data curation involved up-sampling of under-resourced languages to achieve a balanced language representation. This makes the models perfect candidates for studying multilingual representations in LLMs. 2 Related Work Multilingual alignment in LLMs. There is no consensus yet on multilingual LLMs learning universal patterns across languages. However, there is clear evidence that they learn embeddings which have high overlap across languages, primarily between those of the same family (Doddapaneni et al., 2021). Muller et al. (2021) show that a multilingual BERT model can be seen as a stack of a language-specific encoder at the lower layers and a language-agnostic predictor in the upper layers. The language agnosticity of the upper layers was also confirmed by Liu et al. (2020) and Pires et al. (2019). Stańczak et al. (2022) have demonstrated significant cross-lingual overlap between neurons in multilingual auto-encoder language models. A probing-based analysis of the multilinguality of autoregressive models was done by Mueller et al. (2022). On the question of language models learning universal patterns, Gurnee et al. (2024) have recently demonstrated the existence of universal neurons in GPT2 models. Xie et al. (2021) attempted to identify language-specific neurons in encoder-decoder machine translation systems based on Transformers. Model Sparsity. In recent literature (Liu et al., 2023; Mirzadeh et al., 2023), sparsity has been thought of as a way to increase the efficiency of models during inference. To do that, analyses of models employing the ReLU activation function have been done. Some works have also explored the "ReLU-fication" (Song et al., 2024) of pretrained networks. Sparse models should also be easier to interpret (Tamkin et al., 2023). 3 Methodology 3.1 Model snapshots In order to better understand how language models arrive at their predictions during inference, we collect "model snapshots", i.e. intermediate representations of the layers of the different models of the XGLM family. To do that, we feed the language models with prefixes of sentences one word at a time and ask them to predict the next word. For an English sentence like "Elementary, my dear Watson.", we first remove all punctuation except the last, i.e. the period in this case. For this sentence, we obtain the subwords ['Elementar', 'y', 'my', 'de', 'ar', 'Watson', '.']. (Note that the subword model used in XGLM does not distinguish between word-internal and word-final subwords, increasing the conflation of tokens across languages, e.g. "a" being the determiner in English and a conjunction in Czech.) The prefixes (at the subword level) would thus be: ['Elementar', 'y'], ['Elementar', 'y', 'my'], ['Elementar', 'y', 'my', 'de', 'ar'], ['Elementar', 'y', 'my', 'de', 'ar', 'Watson'] and ['Elementar', 'y', 'my', 'de', 'ar', 'Watson', '.']. 3.2 Snapshots from parallel test-sets Corresponding to each prefix, we save the model snapshots of all the layers, focusing only on the FFN layers (i.e. the detector and combinator sublayers) and discarding everything else.
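The snapshot collection can be pictured with a short sketch. The following illustrative version uses Hugging Face Transformers forward hooks; the sublayer attribute names (`fc1` for the detector, `fc2` for the combinator) are our assumption about the XGLM checkpoint layout, not code from the paper:

```python
# Minimal sketch of collecting FFN "snapshots" with forward hooks.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M").eval()

snapshots = {}  # (layer_idx, sublayer_name) -> activation vector

def make_hook(layer_idx, name):
    def _store(module, inputs, output):
        # Keep only the representation of the last subword of the prefix.
        snapshots[(layer_idx, name)] = output[0, -1].detach()
    return _store

for i, layer in enumerate(model.model.layers):
    layer.fc1.register_forward_hook(make_hook(i, "detector"))
    layer.fc2.register_forward_hook(make_hook(i, "combinator"))

with torch.no_grad():
    ids = tok("Elementary, my dear", return_tensors="pt").input_ids
    model(input_ids=ids)  # populates `snapshots` for this prefix
```

Note that hooking `fc1` captures the detector outputs before the GELU; applying the activation afterwards, or hooking the activation module instead, are equally plausible readings of the setup.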
Since we are explicitly interested in the multilingual aspects of the model(s), we feed them with prefixes obtained from sentences across four different WMT test-sets (https://machinetranslate.org/wmt). Each test set contains parallel sentences in English and another language (Czech, French, German or Hindi). Hence, for each language pair, we end up with model snapshots for prefixes of both languages. Each prefix thus yields a detector and a combinator representation for each layer of the model in the form of a tensor of shape (l, d), where l indicates the number of subwords in the prefix and d indicates the dimension of the layer. So, for a layer of the model with 1024 neurons, a prefix with 4 subwords would yield a tensor of shape (4, 1024). This notation will be relevant later in the paper. For most of our analyses, we extract the representation of the last subword of the prefix, assuming that it contributes most closely to the prediction of the next word, leading us to a vector of shape (1, 1024). Later, in Section 4.2, we will make use of all the prefix representations. 3.3 Focus of Interest Our primary objective is to reveal and understand "concentrated activity" across the layers in LLMs in the context of multilingual language processing. We do this at three levels of granularity. We start by observing the patterns of sparsity in the models across languages. Then we dive deeper and focus on the nature of the distribution of the activations. Finally, we determine the degree of multilinguality of the neurons through the layers by looking at patterns of shared activation. 4 Observations 4.1 Sparsity patterns across languages We start with a very basic analysis of the activations of the detectors and combinators of the model. The XGLM models utilize the GELU activation function which, similarly to ReLU, allows the training to reach a sparse representation. Thus, we start by examining whether the processing of certain languages corresponds to sparser representations. In order to assess the level of sparsity across model sizes and layers, we use activation frequency. Given a set of model snapshots over a set of input prefixes, we define the activation frequency of a particular neuron as the ratio of activation counts (the number of instances where the activation value was non-zero) to the total number of prefixes. This gives an idea of the overall importance of this neuron for the given test inputs. An activation frequency close to one indicates that the neuron has fired for almost all prefixes. For each layer, we collect the activation frequencies of all the neurons and consider their average and, more importantly, their standard deviation. Highly varying activation frequency across the neurons of a layer indicates that some neurons are very important and some are very unimportant for the given set of prefixes, i.e. that the overall representation is sparse.
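A small sketch of the activation-frequency statistic as defined above, assuming the snapshots for one layer are stacked into a (num_prefixes, num_neurons) array:

```python
import numpy as np

def activation_frequency_stats(acts: np.ndarray, eps: float = 0.0):
    """acts: (num_prefixes, num_neurons) array of one layer's activations.

    Returns the mean and standard deviation, over neurons, of the per-neuron
    activation frequency (the fraction of prefixes on which a neuron fired).
    A large std signals a sparse layer: a few neurons fire for everything
    while many fire for almost nothing.
    """
    fired = np.abs(acts) > eps      # non-zero activations count as firing
    freq = fired.mean(axis=0)       # per-neuron activation frequency
    return freq.mean(), freq.std()
```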
We present the average and standard deviation of the activation frequency of detectors across the studied model sizes in Figure 2. [Figure 2: Activation frequency for detectors, along with its standard deviation, plotted for English, German and Hindi across the layers of XGLM 564M, 1.7B, 2.9B and 7.5B.] We observe that for the detectors, the average activation frequency drops near the output layers for all languages, but remains stable (and close to 1) for all other layers. Also, the standard deviation of the activation frequencies increases near the input and output layers. In other words, the input and output layers have far sparser detector representations than the middle layers. We also observe that the representations in the middle layers become increasingly dense (smaller standard deviations) with increasing model size. Finally, the greatest drop in activation frequency occurs for languages like English, French and German. In Figure 2, we show results for only three languages (i.e. English, German and Hindi) for readability. [Figure 3: Activation frequency for combinators, along with its standard deviation, plotted for English, German and Hindi.] For the combinators (Figure 3), we see a similar pattern, i.e. representations are sparser close to the input and output than in the middle layers. The authors of the XGLM model claim that the training data was curated to reflect a balanced representation of languages. Yet, it appears that lesser-represented languages (in terms of total tokens in the training data) like Czech and Hindi exhibit more combinator sparsity, while well-represented languages like English, French and German exhibit greater detector sparsity in the later layers. One possible interpretation of this observation is that, owing to their relative under-representation in the training data, languages like Czech and Hindi learn to utilise only a subset of the total neurons in the combinators for generation (greater sparsity). We therefore posit that over-sampling under-resourced languages does not improve the model's capabilities to generate tokens in those languages. 4.2 Activation flatness across layers Do neurons in FFNs show similar activation patterns across languages? How does the distribution of the activation values change across layers? Is it more peaked for certain languages in some layers? To answer questions like these, we make use of a novel metric called "activation flatness". We describe the metric and the observations from using it below.
Activation flatness We define the activation flatness for a particular layer as: $A = \sum_{i=1}^{n} \mathrm{flatness}(X_i)$, (1) where $\mathrm{flatness}(X) = -\frac{1}{m} \sum_{i=1}^{m} S(x_i) \log_2 S(x_i)$. (2) Here m corresponds to the number of neurons in the layer, and $S(x_i)$ is the neuron activation value scaled linearly to the range [0, 1] within the layer, formally defined as: $S(x_i) = \frac{x_i - \min(x_1, x_2, \ldots, x_m)}{\max(x_1, x_2, \ldots, x_m) - \min(x_1, x_2, \ldots, x_m)}$. (3) At a high level, activation flatness measures the entropy of the normalized neuron activations in a layer. If all neurons return similar values, the entropy will be high, making our measure of flatness high. If only a handful of neurons fire, the entropy of these activations will be low. Thus, activation flatness measures whether the activations of a particular layer are more peaked or more uniform: lower flatness indicates that the activations are more peaked at a few neurons than in layers with higher flatness.
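The following is a minimal NumPy sketch of equations (1)-(3); treating each row of `acts` as one prefix snapshot is our assumption about how the snapshots are batched:

```python
import numpy as np

def flatness(x: np.ndarray) -> float:
    """Eq. (2)-(3): entropy of min-max normalized activations of one layer."""
    s = (x - x.min()) / (x.max() - x.min() + 1e-12)   # Eq. (3), in [0, 1]
    s = s[s > 0]                                      # treat 0*log2(0) as 0
    return float(-(s * np.log2(s)).sum() / x.size)    # Eq. (2), divide by m

def activation_flatness(acts: np.ndarray) -> float:
    """Eq. (1): sum of per-prefix flatness; acts has shape (n_prefixes, m)."""
    return float(sum(flatness(x) for x in acts))
```

A uniform activation vector maximizes this quantity, while a vector with one dominant neuron drives it towards zero, matching the intended reading of "peakedness".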
[Figure 4: Activation flatness for detectors across the layers of the four XGLM models, for en, cs, de, fr and hi.] The activation flatness pattern for detectors (Figure 4) shows a consistent pattern of decreasing flatness through the layers across all models. In other words, the activations in the detectors get more and more peaked with increasing layer depth. We see this clearly in Figure 5, where we look at the activations across all prefixes for the detectors of selected layers. The activation patterns for layers 1 and 27 (uniform) are, for instance, very different from those of layers 45 and 48 (peaked at certain neurons). [Figure 5: Normalized activations for detectors: XGLM 2.9B for layers 1, 27, 45 and 48.] [Figure 6: Activation flatness for combinators.] The activation flatness pattern for combinators (Figure 6) shows that all models follow a common pattern. Regardless of the number of parameters and the total number of layers, there is a distinct decrease in activation flatness near layer 20 for all models. A visual inspection of Figure 7 shows that the activations for the prefixes indeed show a certain 'peakedness' in layers of low activation flatness (i.e. layers 1, 21 and 22 vs. 47). The drop in activation flatness also occurs across all languages for every model. From the detector-combinator perspective, the detector does pattern matching on the input representation; the combinator then uses the output of the detector (after it passes through the activation function) to make an intermediate prediction for that layer. Any drop in the activation flatness of combinators thus indicates more 'focused' firing along specific dimensions. [Figure 7: Normalized activations for combinators: XGLM 2.9B for layers 1, 21, 22, and 47.] [Figure 8: Combinator activations for all prefixes at layer 1: 2.9B (activation flatness value of ~1000) vs. 7.5B (activation flatness of ~2000).] To better illustrate how activation flatness works, we also compare the activation snapshots for layer 1 of the 2.9B and 7.5B parameter models in Figure 8. We observe that for the first layer of the 2.9B parameter model, where the combinator activation flatness value was around 1000 for all languages, there is a single distinct peak in the activation distribution, compared to the first layer of the 7.5B parameter model (with an activation flatness of around 2000 across languages). Upon closer inspection, we find that for the first layer of the 2.9B model, the maximum activation is always recorded at neuron 1849 for every prefix across all languages. Also, across all languages, the maximum standard deviation across prefixes was recorded at neuron 218. The single peak that we observe for the 2.9B model in Figure 8 corresponds to neuron 218. Thus, the aggregate value of our activation flatness reflects the intuitive differences in representations. For both detectors and combinators, we observe that the low values of activation flatness occur similarly across all languages. We therefore investigate the extent to which the prefixes across languages exhibit similarity in their layer-wise representations. Representational similarity would point to the presence of multilingual neurons in those layers. We posit that if a layer has more multilingual neurons, then the representations of two sentence prefixes with the same meaning should be similar in that layer. In other words, the representation of "I am a computer" should be very similar to that of "Ja jsem počítač" (in Czech). Thus, for every prefix in one language, we find the representational difference with all prefixes in the second language and select the minimum distance among them. The idea is to identify 'similar' prefixes across both languages. For simplicity, we assume monotonic translation, which more or less holds for our studied language pairs. We then aggregate the distances across all prefixes. The layer with the minimal aggregate distance should exhibit the greatest representational similarity for the two languages, as formally captured in Algorithm 1.
Algorithm 1: Representation Distance Score.
Input: layer snapshot L1 with shape (m, d); layer snapshot L2 with shape (n, d).
Output: aggregate minimum distance across all rows of L1, i.e. all prefixes of the source sentence and their most similar counterpart prefixes in the target language.
total_dist ← 0
for i ← 1 to m do
  min_dist ← +∞
  for j ← 1 to n do
    score ← dist(L1[i], L2[j])
    if score < min_dist then min_dist ← score
  end
  total_dist ← total_dist + min_dist
end
return total_dist
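A direct NumPy rendering of Algorithm 1, with Euclidean distance assumed for `dist` (the excerpt does not pin down the distance function), and with the double loop vectorized:

```python
import numpy as np

def representation_distance(l1: np.ndarray, l2: np.ndarray) -> float:
    """Algorithm 1: for each prefix row of l1 (shape (m, d)), find the
    closest prefix row of l2 (shape (n, d)) and sum those minimum distances."""
    diffs = l1[:, None, :] - l2[None, :, :]   # pairwise differences, (m, n, d)
    dists = np.linalg.norm(diffs, axis=-1)    # Euclidean distances, (m, n)
    return float(dists.min(axis=1).sum())     # best match per source prefix
```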
[Figure 9: Distance between representations for detectors (564M, 1.7B, 2.9B, 7.5B), shown for language pairs such as cs-Encs, fr-Enfr, de-Ende, de-cs, de-fr, de-hi, fr-cs, hi-Enhi, hi-cs and hi-fr.] From Figure 9, we see that the representational distance for detectors decreases with layer depth for all models except the 7.5B parameter model. [Figure 10: Distance between representations for combinators (564M, 1.7B, 2.9B, 7.5B).] Figure 10 shows that for the 2.9B parameter model, the representational distance for combinators decreases in the middle before increasing again. Interestingly, the pattern of detector representational distance for the 7.5B model seems similar to the pattern of combinator representational distance for the 2.9B model. We also observe that around layer 30 in the 7.5B parameter model, the representational distance drops significantly for the combinators before increasing abruptly through layers 31 and 32. We label the layers with the minimal distance between prefixes of different language pairs as relatively more multilingual. [Figure 11: Representational similarity between models (combinators): 1.7B vs 2.9B, 2.9B vs 7.5B, 1.7B vs 7.5B.] Finally, we also compare the "representational similarity" (Kriegeskorte et al., 2008) for German between the 1.7B, 2.9B and 7.5B parameter models in Figure 11. We calculate the representational similarity on the basis of the normalized activation values corresponding to each prefix for the detectors/combinators of all layers. We see that the similarity is greatest between the 2.9B and 7.5B models. In particular, some of the middle layers of the 7.5B parameter model show high correlation with the early and middle layers of the 2.9B parameter model. The representational distance for all language pairs at the last combinator layers is higher for the models with more parameters. In other words, it appears that the final layer of the larger models can distinguish between different languages. We posit that this might explain why the 7.5B model outperforms the smaller models of the same family, as reported by Lin et al. (2022). Thus, based on the observations of activation flatness and representational distances/similarity, we have a comprehensive view of the nature of multilinguality in detectors and combinators. For all models except the 7.5B one, the detectors keep getting more multilingual with layer depth.
From Figure 12 and Figure 13, we also find that the combinators show greater rank correlation than detectors across languages and across layers, aligning again with the observations made previously with representational similarity. For both detectors and combiantors, we observe high degrees of correlation in the early and middle layers for all models. For the combinator we observe an \u2018emerging\u2019 pattern where the final layers initially exhibit greater values of rank correlation (568M<1.7B) before decreasing again (1.7B>2.9B>7.5B). We also observe that the middle combinator layers of the 7.5B model exhibited the greatest rank correlation for the English-German-French pairs. Thus, the early detectors are more multilingual for all models. We also observe that the early and middle combinator layers become multilingual with increasing model size. Figure 14: Similarity of ranks (English) : detectors Figure 15: Similarity of ranks (English) : combinators Intra-language comparison Next, we compare English sentences across different test sets. Figure 16 \fshows the extent of overlap between the sentences across the 4 different English test-sets used for the analysis. The hypothesis is that the analysis should show if there were dedicated layers for language specific shallow processing in the detectors or combinators. de_en cs_en fr_en hi_en de_en cs_en fr_en hi_en 31 31 31 31 34 67 31 34 34 31 67 34 Overlap of English sentences across datasets 35 40 45 50 55 60 65 Figure 16: Overlap of English sentences We see from Figure 14 that detectors in the early and middle layers of all models exhibit high degrees of rank correlation. We had already posited that the early detectors are multilingual. We use these results to extend that argument and state that this indicates the presence of \u201cshallow\u201d detectors i.e. they do not deal with the \u2018semantic\u2019 content of the sentences. On the other hand, the combinators (Figure 15) exhibit the maximal rank correlation in the later layers (close to the output). We posit that this indicates the presence of greater language specific neurons close to the output. Figure 17: Similarity of ranks (parallel data) : detectors Parallel Data Comparison Finally, we compare languages from parallel test-sets. The idea is to observe if certain layers processed \u201csemantic\u201d content similarly for certain models. We observe that for the EnglishGerman and English-French data, the detectors exhibit high degree of correlation in the early layers reaffirming earlier observations about their multilinugal and shallow processing abilities. For the combinators we find high values of rank correlation across layers for smaller models. The correlation is in fact highest when comparing German, French and English sentences. The combinaFigure 18: Similarity of ranks (parallel data) : combinators tors of the 7.5B parameter model however exhibits rank correlation close to zero across all layers for all language pairs except German-English and French-English. Infact, for 7.5B, the similarity is the most at layer 15, which also recorded the maximal representational similarity in Figure 10. We posit that like detectors, the combinators are multilinugal at early layers. The fact that the last layers of the XGLM models are language specific is pretty interesting as it has already been observed that for BERT, the upper layers (close to the ouptut) are more language specific. We note that this is an interesting parallel between the two kinds of LLMs (i.e. 
We present the results below. [Figure 12: Rank correlation of neurons in detectors.] [Figure 13: Rank correlation of neurons in combinators.] We observe that the layers of detectors (Figure 12) exhibit decreasing rank correlation across languages with increasing model size. However, the rank correlation is consistently high for the German-English-French language pairs across models. From Figure 12 and Figure 13, we also find that the combinators show greater rank correlation than the detectors across languages and across layers, aligning again with the observations made previously with representational similarity. For both detectors and combinators, we observe high degrees of correlation in the early and middle layers for all models. For the combinators, we observe an 'emerging' pattern where the final layers initially exhibit greater values of rank correlation (564M < 1.7B) before decreasing again (1.7B > 2.9B > 7.5B). We also observe that the middle combinator layers of the 7.5B model exhibited the greatest rank correlation for the English-German-French pairs. Thus, the early detectors are more multilingual for all models. We also observe that the early and middle combinator layers become more multilingual with increasing model size. [Figure 14: Similarity of ranks (English): detectors.] [Figure 15: Similarity of ranks (English): combinators.] Intra-language comparison Next, we compare English sentences across the different test sets. Figure 16 shows the extent of overlap between the sentences across the four different English test-sets used for the analysis. The hypothesis is that this analysis should show whether there are dedicated layers for language-specific shallow processing in the detectors or combinators. [Figure 16: Overlap of English sentences across the four datasets (de_en, cs_en, fr_en, hi_en).] We see from Figure 14 that the detectors in the early and middle layers of all models exhibit high degrees of rank correlation. We had already posited that the early detectors are multilingual. We use these results to extend that argument and state that this indicates the presence of "shallow" detectors, i.e. they do not deal with the 'semantic' content of the sentences. On the other hand, the combinators (Figure 15) exhibit the maximal rank correlation in the later layers (close to the output). We posit that this indicates the presence of more language-specific neurons close to the output. [Figure 17: Similarity of ranks (parallel data): detectors.] Parallel data comparison Finally, we compare languages from the parallel test-sets. The idea is to observe whether certain layers process "semantic" content similarly for certain models. We observe that for the English-German and English-French data, the detectors exhibit a high degree of correlation in the early layers, reaffirming the earlier observations about their multilingual and shallow processing abilities. For the combinators, we find high values of rank correlation across layers for the smaller models. The correlation is in fact highest when comparing German, French and English sentences. [Figure 18: Similarity of ranks (parallel data): combinators.] The combinators of the 7.5B parameter model, however, exhibit rank correlation close to zero across all layers for all language pairs except German-English and French-English. In fact, for the 7.5B model, the similarity is greatest at layer 15, which also recorded the maximal representational similarity in Figure 10. We posit that, like the detectors, the combinators are multilingual at the early layers. The fact that the last layers of the XGLM models are language-specific is quite interesting, as it has already been observed that for BERT, the upper layers (close to the output) are more language-specific. We note that this is an interesting parallel between the two kinds of LLMs (i.e. autoregressive versus auto-encoder)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13299v1", |
| "title": "PCQA: A Strong Baseline for AIGC Quality Assessment Based on Prompt Condition", |
| "abstract": "The development of Large Language Models (LLM) and Diffusion Models brings\nthe boom of Artificial Intelligence Generated Content (AIGC). It is essential\nto build an effective quality assessment framework to provide a quantifiable\nevaluation of different images or videos based on the AIGC technologies. The\ncontent generated by AIGC methods is driven by the crafted prompts. Therefore,\nit is intuitive that the prompts can also serve as the foundation of the AIGC\nquality assessment. This study proposes an effective AIGC quality assessment\n(QA) framework. First, we propose a hybrid prompt encoding method based on a\ndual-source CLIP (Contrastive Language-Image Pre-Training) text encoder to\nunderstand and respond to the prompt conditions. Second, we propose an\nensemble-based feature mixer module to effectively blend the adapted prompt and\nvision features. The empirical study practices in two datasets: AIGIQA-20K\n(AI-Generated Image Quality Assessment database) and T2VQA-DB (Text-to-Video\nQuality Assessment DataBase), which validates the effectiveness of our proposed\nmethod: Prompt Condition Quality Assessment (PCQA). Our proposed simple and\nfeasible framework may promote research development in the multimodal\ngeneration field.", |
| "authors": "Xi Fang, Weigang Wang, Xiaoxin Lv, Jun Yan", |
| "published": "2024-04-20", |
| "updated": "2024-04-20", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "The development of Large Language Models (LLM) and Diffusion Models brings\nthe boom of Artificial Intelligence Generated Content (AIGC). It is essential\nto build an effective quality assessment framework to provide a quantifiable\nevaluation of different images or videos based on the AIGC technologies. The\ncontent generated by AIGC methods is driven by the crafted prompts. Therefore,\nit is intuitive that the prompts can also serve as the foundation of the AIGC\nquality assessment. This study proposes an effective AIGC quality assessment\n(QA) framework. First, we propose a hybrid prompt encoding method based on a\ndual-source CLIP (Contrastive Language-Image Pre-Training) text encoder to\nunderstand and respond to the prompt conditions. Second, we propose an\nensemble-based feature mixer module to effectively blend the adapted prompt and\nvision features. The empirical study practices in two datasets: AIGIQA-20K\n(AI-Generated Image Quality Assessment database) and T2VQA-DB (Text-to-Video\nQuality Assessment DataBase), which validates the effectiveness of our proposed\nmethod: Prompt Condition Quality Assessment (PCQA). Our proposed simple and\nfeasible framework may promote research development in the multimodal\ngeneration field.", |
| "main_content": "Introduction With the proliferation of Artificial Intelligence Generated Content (AIGC) technologies, AIGC images or videos have gradually appeared in people\u2019s view. The creation, sharing, and interaction of images and videos based on the AIGC technologies calls for their quality assessment. The Quality of Experience (QoE) guarantees that the AIGC works should align with the human aesthetic point of view and the lofty moral sense in artistic appreciation, like rejecting the vulgar stuff. *Xi Fang, Weigang Wang, and Xiaoxin Lv contributed equally to this work. \u2020Jun Yan is corresponding author. Figure 1. Overview of the Prompt Condition Quality Assessment (PCQA) model. The AIGC content and the corresponding prompt used to generate it are input separately. The information from the prompt will be encoded by the hybrid CLIP text encoder and used as a condition for visual quality assessment, with the trainable feature adapter to align the feature from different modals. The final MOS regression result is obtained through a feature mixer and an MLP regressor. Currently, the quality assessment of User Generated Content (UGC) via deep neural networks is matured. The local binary pattern features based on the distortion aggravation mechanism can measure the similarities between the distorted image and the multiple pseudo reference images (MPRIs) [1]. The renaissance of deep learning brings a new paradigm of UGC quality assessment. The deep bilinear convolutional neural network (BCNN) can help the users implement the blind image quality assessment (BIQA) [2]. The end-to-end spatial feature extraction networks can directly learn quality-aware spatial feature representations of video frame pixels. The hierarchical feature fusion and iterative mixed database training can boost the QoE [3]. The arXiv:2404.13299v1 [cs.CV] 20 Apr 2024 \fcontrastive language image pre-training (CLIP) method [4] builds the vision-language correspondence and forms a new multitask learning perspective in blind image quality assessment [5]. Meanwhile, the no-reference video quality assessment can be realized by the deep neural networks (DNNs) based on the content dependency and temporal-memory effects with the intuitions of human visual system [6], multiscale quality fusion strategy [7], or the disentangle of aesthetic perspectives and technical elements [8]. The CLIP method has also been scaled to the in-the-wild video quality assessment task [9]. In contrast to the assessment of User-Generated Content (UGC) quality, the evaluation of AI-Generated Content (AIGC) quality would place a greater emphasis on the high-level semantic information over the low-level details. The content produced by AI systems would exhibit a more profound alignment with the initial prompt, demonstrating enhanced coherence and relevance. This distinction underscores the AI\u2019s capability to synthesize and contextualize information, producing outputs that are not merely superficially related to the prompt but are intrinsically connected through higher-order semantic relationships. Benjamin was one of the earliest thinkers to focus on the impact of technological advances, particularly the development of mechanical reproduction, on works of art [10]. The proliferation of technologies precipitates the erosion of the \u201cAura\u201d, catalyzing the engagement of the wider public in both the creation and critique of art. 
As the societal value of art recedes, a growing chasm develops between the critical and the appreciative interactions of the audience. This divergence is becoming more pronounced in the epoch of AIGC, and machine evaluations inherit the same kind of subjectivity and bias that human evaluation has experienced. This raises a worthwhile scientific problem: what kind of machine-learning-based assessment model can evaluate the artistic caliber of AIGC content with relative objectivity and less bias? In this study, we propose a unified framework for AIGC quality assessment based on the specified prompt condition, illustrated in Figure 1. It employs a dual-source CLIP text encoder (OpenCLIP [4, 11] and EVA-CLIP [12]) to interpret the prompts for paired projection with the visual features extracted by Vision Transformers (ViTs) [13] and ConvNeXts [14]. Then, a feature mixer module blends the text and image features to construct the correlations between images/videos and quality assessment scores. Such a pipeline drives the evaluation paradigm to focus on the high-level features of AIGC works. We adopt moderate training-time data augmentation to realize the trade-off between data diversity and aesthetic standards. The erasing of the "Aura" has caused subjectivity and bias in aesthetic evaluation [10]. We design an ensemble method to mitigate the bias in the scoring process, so that the decisions from the different vision backbones are averaged. It mimics the scoring of multiple human reviewers at many art-judging events. The experimental results on AIGIQA-20K [15] (AI-Generated Image Quality Assessment database) and T2VQA-DB (Text-to-Video Quality Assessment DataBase) [16] validate the effectiveness of our proposed method. In summary, the major contributions of our work are as follows: • We propose a unified framework for AIGC image or video quality regression with the prompt condition, which focuses more on high-level semantic information. • We design a mechanism based on a feature adapter and a feature mixer to enable effective interaction between the prompt condition and the visual features. • We propose a novel ensemble method to mitigate the bias in the quality assessment scoring process. The organization of this manuscript is as follows. Section 2 describes the related work in this research field. Section 3 provides a general view of our proposed method. Section 4 demonstrates the experimental results of the proposed method. Section 5 concludes the study and gives some further perspectives. 2. Related Works This section reviews the research development in UGC quality assessment, AIGC technologies, and mainstream multimodal learning methods over the past few years. 2.1. UGC quality assessment The development of UGC quality assessment spans three eras: the era before the proliferation of deep learning, the era of deep learning technologies, and the era of multimodal learning. Classical blind image/video quality assessment depends on signal processing and classical machine learning methods, like the wavelet transform [17], the DCT transform [18], feature learning [19], rank learning [20], and multiple pseudo reference image distortion aggregation [1]. These methods depend on handcrafted feature engineering. The blooming of deep learning brought a new paradigm of UGC quality assessment.
The milestone study utilized shallow convolutional neural networks (CNNs) to implement no-reference image quality assessment [21]. The first image quality assessment network based on deep neural networks comprised ten convolutional layers and five pooling layers for feature extraction, plus two fully connected layers for score computation [22]. It was further demonstrated that distortion identification and quality prediction can be jointly optimized in an end-to-end CNN [23], or handled by a bilinear convolutional neural network [2]. The influence of feature learning based on pre-trained image classification has also been explored and exploited by the research community [24]. Previous studies proposed a hierarchical network that integrates the extracted features via an iterative mixed-database training strategy, addressing quality-aware feature representation and the shortage of training samples in terms of content and distortion diversity [3]. Recently, Wu et al. [25] proposed Assessor360, a multi-sequence network for blind omnidirectional image quality assessment; it achieves efficient assessment by designing Recursive Probabilistic Sampling (RPS) to generate viewport sequences, combining Multi-scale Feature Aggregation (MFA) and Distortion-Aware Blocks (DAB) for distortion and semantic features, and using a Temporal Sequence Modeling Module (TMM) to learn temporal variations of viewports. No-reference video quality assessment is likewise significant. Li et al. [6] investigated automating quality assessment for in-the-wild videos and proposed a unified framework that improves video quality assessment models by combining the content-dependency and temporal-memory effects of the human visual system with a mixed-dataset training strategy. Wang et al. [26] provide an in-depth analysis of the correlation between video quality assessment model performance and video content, technical quality, and compression level. Sun et al. [7] proposed a simple but effective deep learning-based reference-free quality assessment model that learns quality-aware spatial features of video frames through an end-to-end spatial feature extraction network, combines them with motion features, uses a multilayer perceptron (MLP) for quality regression, and employs a multiscale quality fusion strategy to handle videos of different spatial resolutions. Wu et al. [8] propose DOVER, which evaluates UGC video quality from both aesthetic and technical perspectives and predicts the overall video quality through a subjectively inspired fusion strategy. CLIP [4] is a multimodal pre-training technique trained on large-scale image and text datasets to achieve strong cross-modal comprehension and generalization; it has been applied to UGC image [5] and video [9] quality assessment and inspires further exploration of the AIGC quality assessment task. 2.2. Generative Models In the past decade, generative models have demonstrated surprising content-creation abilities, including Generative Adversarial Networks (GANs) [27] and Variational Autoencoders [28].
Diffusion models are neural network models based on a Markov process that realize content creation through a multi-step forward noising procedure and a learned reverse denoising operation [29, 30]. The Stable Diffusion model [31] improves image quality and computational efficiency over vanilla diffusion models [29] and achieves both variety and consistency in image generation quality. Recently, a variant of Stable Diffusion has realized image generation at 1024\u00d71024 resolution [32], and stable latent diffusion models have even been applied to video content creation [33]. Peebles et al. [34] explored a new class of diffusion models based on transformer architectures [13, 35], which boosts the feature extraction and representation ability of diffusion models; the diffusion transformer (DiT) model underpins the remarkable Sora Large Vision Model (LVM) [36]. DreamBooth [37] is an extension of text-to-image diffusion models that allows fine-tuning on specific prompts. The DALL-E models are likewise diffusion variants that generate high-quality images under the guidance of the CLIP method [38] or of Large Language Models (LLMs) [39]. With the proliferation of AIGC technologies, several benchmarks have been released: AGIQA-3K is an open database for generative image quality assessment [40], and two larger datasets for image [15] and video [16] quality assessment have recently followed, laying a cornerstone for subsequent research on evaluation methods. 2.3. Contrastive Language-Image Pre-training One long-standing wish of computer vision is that machine vision could operate like human vision, and the research community has made substantial efforts to learn visual representations that correspond with semantic information. The CLIP method [4] is trained on large-scale image-text pairs, where the text can be natural language descriptions, labels, or other forms of annotation. During training, the model is optimized so that a language signal lies close to its corresponding image in the feature space, while mismatched image-text pairs are pushed apart. The SLIP (Self-supervised Language-Image Pre-training) method [41] adds auxiliary self-supervised learning to enhance the feature representations. The BLIP (Bootstrapping Language-Image Pre-training) methods [42, 43] focus on learning the complex relationships between images and text through bidirectional reconstruction. Recently, Yang et al. [44] proposed an attentive token removal approach based on random masking to accelerate CLIP training. Overall, this field is still young and will exert a methodological influence on the AIGC quality assessment task.
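To make the contrastive objective concrete, the following minimal sketch (our illustration, not code from any of the cited papers; it assumes PyTorch and batches of pre-computed, L2-normalized embeddings) computes the symmetric CLIP-style loss that pulls matched image-text pairs together and pushes mismatched pairs apart:

    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
        # image_emb, text_emb: (batch, dim) tensors, assumed L2-normalized.
        logits = image_emb @ text_emb.t() / temperature  # pairwise cosine similarities
        targets = torch.arange(image_emb.size(0), device=image_emb.device)
        loss_i2t = F.cross_entropy(logits, targets)      # match image i to text i
        loss_t2i = F.cross_entropy(logits.t(), targets)  # and text i to image i
        return (loss_i2t + loss_t2i) / 2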
3. Proposed Approach We propose a prompt-conditional quality assessment method that can be applied to both AIGC image and video quality assessment tasks. Our method first encodes the image features and the prompt text features separately, using a trainable image encoder and a frozen CLIP text encoder; the text features are then employed as conditions that interact with the image features, culminating in the regression of the Mean Opinion Score (MOS). 3.1. Quality Assessment with Prompt Condition Traditional approaches separate image and video QA into distinct tasks and take only a single image or a single video as input. For image QA, the score is determined solely by the quality of the image input x, as in Eq. (1): \\hat{y}_{\\mathrm{mos}} = f_{\\mathrm{IQA}}(x), (1) where \\hat{y}_{\\mathrm{mos}} denotes the image quality assessment (IQA) score for the input x. Similarly, video QA tasks use a separate scoring function, defined in Eq. (2): \\hat{y}_{\\mathrm{mos}} = f_{\\mathrm{VQA}}(x). (2) AIGC content, however, is accompanied by the prompt text that generated it, and assessing its quality must account for the alignment between the content and the corresponding prompt, which traditional methods overlook. We therefore propose the Prompt Conditional Quality Assessment (PCQA) method, which uses the prompt as a condition for assessing AIGC quality, as in Eq. (3): \\hat{y}_{\\mathrm{mos}} = f_{\\mathrm{PCQA}}(x | t), (3) where x is the AIGC video input and t is the prompt text input serving as the condition; when the video has only one frame, the method degenerates to image quality assessment. The overall framework of the proposed PCQA method is shown in Figure 1. The network architecture comprises a trainable visual encoder, a frozen hybrid text encoder, and trainable feature adapters, feature mixers, and regression heads. We input AIGC images or videos together with the prompt texts used to generate them, treating the prompt texts as conditions for the MOS regression. 3.2. Hybrid Text Encoder The CLIP [4] model is pretrained on a large number of image-text pairs, enabling cross-modal understanding between language and images. In particular, it can guide image or video generation and editing through natural language, known as the \u201cprompt\u201d, which opens up possibilities for creating and understanding new works of art; this inspires us to encode the prompt information in the AIGC quality assessment task with the CLIP mechanism. Figure 2. Feature mixer and regression head. Concatenation or dot product is used as the feature mixer, enabling the visual features and the textual features of the prompt to interact. The CLIP text encoder is, however, typically challenging to fine-tune: too many trainable parameters make training more hardware-demanding and hyperparameter tuning more difficult. We therefore freeze the parameters of the CLIP text encoder during training and add a trainable feature adapter that lets the output features better adapt to the task; freezing the text encoder makes the entire model more amenable to training, as confirmed experimentally (for details, refer to Section 4.4). The text encoders used in AIGC methods originate from various sources, so we integrate multiple CLIP text encoders and concatenate their outputs to enrich the extracted textual information. Using frozen CLIP text encoders, we encode prompts with two distinct open-source implementations, Open-CLIP [11] and EVA-CLIP [12], which are pretrained on diverse datasets such as DFN-5B [45], LAION-5B [46], DataComp-1B [47], and WebLI [48]. This design enriches the information drawn from the text condition.
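As a rough sketch of how such a conditional scorer can be wired together, anticipating the feature adapter and mixer of Section 3.3 below, consider the following illustrative PyTorch pseudocode (our own sketch, not the authors' released implementation; vision_encoder and text_encoders stand in for the trainable backbone and the frozen CLIP text encoders, and all dimensions are assumptions):

    import torch
    import torch.nn as nn

    class PromptConditionalScorer(nn.Module):
        # Illustrative PCQA-style scorer: frozen text encoders, trainable vision
        # backbone, linear feature adapters, a mixer, and an MLP regression head.
        def __init__(self, vision_encoder, text_encoders, vis_dim, txt_dim, hidden=1024):
            super().__init__()
            self.vision_encoder = vision_encoder               # trainable backbone, e.g. a ConvNeXt
            self.text_encoders = nn.ModuleList(text_encoders)  # e.g. Open-CLIP and EVA-CLIP
            for enc in self.text_encoders:                     # freeze the CLIP text encoders
                for p in enc.parameters():
                    p.requires_grad = False
            self.vision_adapter = nn.Linear(vis_dim, hidden)   # trainable feature adapters
            self.text_adapter = nn.Linear(txt_dim, hidden)     # txt_dim = sum of encoder output dims
            self.head = nn.Sequential(                         # two-layer MLP regression head
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, image, prompt_tokens):
            v = self.vision_adapter(self.vision_encoder(image))     # (batch, hidden)
            t = torch.cat([enc(prompt_tokens) for enc in self.text_encoders], dim=-1)
            t = self.text_adapter(t)                                # (batch, hidden)
            mixed = torch.cat([v, t], dim=-1)  # concatenation mixer; an element-wise
                                               # (dot-product-style) mixer is the alternative
            return self.head(mixed).squeeze(-1)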
3.3. Feature Adapter and Mixer We introduce a trainable dense layer that functions as a prompt adapter to enhance the synergy between textual and visual elements. For the visual side, we utilize a trainable vision backbone with ImageNet pre-trained weights to extract the visual features, with ConvNeXt-Small [14] as the standard choice. This backbone, characterized by its modern architecture and extensive receptive field, is adept at extracting high-level semantic information from images. For video input, we extract visual features from at most 16 frames and integrate them with a 1D convolutional neural network followed by mean pooling. After visual feature extraction, a trainable dense layer is employed as the vision adapter; similarly, the concatenated textual features from CLIP are processed by a trainable dense layer serving as the feature adapter for the prompt information. We then integrate a feature mixer module, shown in Figure 2, which employs both dot-product and concatenation techniques to foster a compelling interplay between the adapted prompt and vision features, akin to cross-attention in transformers. The dot-product mixer excels at capturing the correlation between the generated images and the prompts, while concatenation treats the prompt as a conditional factor. These mixers are applied across the different experts and contribute to model blending. Finally, the merged features are fused by a two-layer MLP to predict the final quality score. This approach ensures a nuanced and comprehensive quality assessment that is sensitive to the alignment between the AIGC image or video and its generating prompt. 3.4. Ensemble Method in Quality Assessment We ensemble three models with different vision backbones: ConvNeXt-S, EfficientVit-L, and EVA02-Transformer-B. First, we normalize the predicted scores of each model on the test dataset so that the different models have the same prediction mean and variance. After normalization, we blend all the models by averaging the MOS predictions; Figure 3 shows the pipeline of our ensemble method. By combining multiple models, this ensemble reduces the prediction variance and thus the overall generalization error. It also prevents biases in the MOS predictions of the individual models, ensuring that each model contributes equally to the final predicted score. Through normalization we balance the influence of each model, making their roles in the ensemble prediction fairer and more effective; this not only enhances prediction accuracy but also strengthens the robustness of the ensemble. More imaginatively, the ensemble can be likened to a panel of experts judging human art: individual evaluations may be subjective, but integrating multiple experts removes the variance. We compute the mean of the normalized predicted values from the multiple models as the final MOS prediction, as formulated in Eq. (4), where x denotes the image or video input and t denotes the prompt text input; for each model f_i in the normalized average blending, \\mu_i and \\sigma_i are the mean and standard deviation of its predictions on the testing dataset: \\hat{y}_{\\mathrm{mos}} = \\mathrm{E}_i[ (f_i(x | t) - \\mu_i) / \\sigma_i ]. (4)
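A minimal NumPy sketch of this normalized average blending (illustrative only; the model names in the usage comment are hypothetical):

    import numpy as np

    def blend_predictions(per_model_scores):
        # per_model_scores: list of 1-D arrays, each holding one model's raw MOS
        # predictions over the same test set.
        normalized = []
        for scores in per_model_scores:
            mu, sigma = scores.mean(), scores.std()
            normalized.append((scores - mu) / sigma)   # zero mean, unit variance per model
        return np.mean(normalized, axis=0)             # equal-weight average blend

    # e.g. blend_predictions([convnext_preds, efficientvit_preds, eva02_preds])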
Figure 3. Overview of the final quality score computation strategy by model ensemble. The final score is the average blend of three models with different vision backbones. 4. Experimental Results 4.1. Datasets During the NTIRE 2024 competition, two novel datasets were introduced for the assessment of AI-generated content (AIGC) quality. Track 1 of the NTIRE Quality Assessment for AI-Generated Content competition presents AIGIQA-20K [15], a benchmark dataset for evaluating image quality, while Track 2 delivers T2VQA-DB [16], a benchmark dataset tailored to the quality assessment of video content. These datasets contribute significantly to the accurate prediction of quality in AI-generated images and videos, laying a crucial benchmark for advances in multimodal learning methodologies. AIGIQA-20K [15] features a broad array of art-creation content. It encompasses a training corpus of 14,000 images generated from textual prompts, with the mean opinion score (MOS) as the primary predictive label. For evaluation, the dataset provides a validation set of 2,000 samples used for Leaderboard-A rankings and a test set of 4,000 samples with their corresponding prompts designated for Leaderboard-B. T2VQA-DB [16] offers an extensive collection for text-to-video generation research: 7,000 training videos, each accompanied by a textual prompt and evaluated with MOS; 1,000 validation videos with prompts for preliminary assessment on Leaderboard-A; and 2,000 test videos with prompts for evaluation on Leaderboard-B. 4.2. Implementation Details The backbone of the proposed framework is ConvNeXt-Small [14], which serves as the feature extractor of our vision encoder. To enhance performance, we incorporate the EfficientVit-Large [49] and EVA-02 [50] models to form a hybrid network; these models contribute to a strong ensemble effect. For the hybrid text encoder, we employ the \u201cViT-H-14-quickgelu-dfn5b\u201d parameters from the Open-CLIP implementation [11], pre-trained on the DFN-5B dataset [45], together with the EVA-CLIP [12] weights. These weights play a crucial role in encoding the prompt features and remain unchanged during training to preserve the integrity of their pre-learned representations. To integrate visual and textual data into a unified space, the vision and prompt adapters use a dense layer with a 1024-dimensional latent space. For video inputs, we apply a two-layer convolutional neural network with a kernel size of three, followed by a pooling layer, to extract the relevant features. A multi-layer perceptron (MLP) serves as the regression head. Training runs for 50 epochs with the AdamW optimizer [51], using a weight decay of 1 \u00d7 10^-2 and a learning rate of 2 \u00d7 10^-5; we also employ cosine learning rate decay, a warm-up strategy, automatic mixed-precision training, and gradient clipping with a norm value of 1.0 to ensure stable and efficient optimization. All experiments are run on a single NVIDIA V100 card.
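The optimization recipe can be condensed into a short PyTorch-style sketch (illustrative; model, loader, and train_steps are placeholders, and the warm-up phase is omitted for brevity):

    import torch

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=train_steps)
    scaler = torch.cuda.amp.GradScaler()                 # automatic mixed precision

    for images, prompts, mos in loader:                  # 50 epochs in the paper's setup
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            pred = model(images, prompts)
            loss = torch.nn.functional.mse_loss(pred, mos)  # MSE on normalized MOS
        scaler.scale(loss).backward()
        scaler.unscale_(optimizer)                       # unscale before clipping
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        scaler.step(optimizer)
        scaler.update()
        scheduler.step()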
Throughout training, we vary the input resolution within the range of 448 to 640 pixels, striking a balance between computational demand and model efficacy, with a consistent batch size of 16. To enhance robustness, we apply data augmentation techniques such as random horizontal flips, slight random resized crops, and subtle brightness and contrast adjustments, all designed to be non-intrusive and to maintain the images' perceptual quality. The training objective is the reduction of the mean squared error (MSE) between the predicted outputs and the normalized Mean Opinion Score (MOS), eschewing the use of ancillary external datasets; employing the normalized MOS as our regression target substantially bolsters the stability of the model throughout training. 4.3. Main Results SRCC (Spearman's Rank Correlation Coefficient) and PLCC (Pearson Linear Correlation Coefficient) are the performance metrics on the validation dataset. The Val Score is the mean of SRCC and PLCC on the validation set (Leaderboard-A). The Test Score is the competition's final score on the test dataset, computed as the mean of SRCC and PLCC on the testing set (Leaderboard-B). Table 1. Competition results on AIGIQA-20K (SRCC / PLCC / Val Score / Test Score): StairIQA [3] 0.61 / 0.65 / 0.63 / 0.62; Ours 0.90 / 0.93 / 0.92 / 0.92. Table 2. Competition results on T2VQA-DB (SRCC / PLCC / Val Score / Test Score): SimpleVQA [7] 0.65 / 0.67 / 0.66 / 0.65; Ours 0.82 / 0.84 / 0.83 / 0.82. Tables 1 and 2 demonstrate the considerable performance of the PCQA method on AIGC image and video assessment; notably, the proposed method significantly surpasses the baseline methods [3, 7]. 4.4. Ablation Studies To validate the selection strategy for the image encoders and text encoders, the design of the feature mixers, and the model integration, we devise a series of ablation experiments. The score denotes the mean of PLCC and SRCC over five-fold cross-validation. Vision Encoder: We explore various vision backbones and input resolution strategies empirically. As shown in Figure 4, compared to a traditional visual backbone such as ResNet-50 [52], novel network architectures such as ConvNeXt [14] and ViT variants [12, 49, 53] achieve significantly better performance. At similar model sizes and inference latencies, networks designed with larger receptive fields and stronger high-level feature extraction yield better performance. Figure 4. Ablation study on vision encoder choice (ResNet-50: 0.862, ConvNeXt-Small: 0.883, EfficientViT-L: 0.891, EVA-02-B: 0.875); the score is the average of SRCC and PLCC obtained through cross-validation on the AIGIQA-20K dataset. We also explore the impact of input resolution and model size; the results are shown in Figure 5, where a medium-sized model achieves relatively better results at higher input resolutions. Figure 5. Ablation study on input resolution and model size (ConvNeXt-nano, ConvNeXt-small, and ConvNeXt-large at resolutions from 384 to 768). Input resolutions between 448 and 640 lead to better results, and medium or larger models are more likely to achieve better results.
Table 3. Ablation study on the text encoder, scored on AIGIQA-20K: EVA-CLIP [12], trainable: 0.680; EVA-CLIP [12], frozen: 0.902; Open-CLIP [11], frozen: 0.903; Long-CLIP [54], frozen: 0.901; hybrid of 2 text encoders, frozen: 0.905; hybrid of 3 text encoders, frozen: 0.905. Text Encoder: We compare pretrained text encoders from different CLIP implementations and find that a frozen pair of text encoders yields the best results; using more than two text encoders brings no additional improvement. Additionally, we observe that many prompt texts are lengthy, which inspires us to explore a text encoder based on Long-CLIP [54]; however, it does not improve performance. The details are displayed in Table 3. Feature Mixer: We experiment with two distinct feature-mixing approaches, namely concatenation and dot product, and observe no significant performance disparity between them. Consequently, we randomly select one of them when constructing each model and exploit the variation during model fusion to enhance diversity. Table 4. Ablation study on the feature mixer (AIGIQA-20K / T2VQA-DB): concatenation 0.898 / 0.799; dot product 0.903 / 0.795. Model Ensemble: We explore the efficacy of ensemble methods in enhancing model robustness. Specifically, we experiment with horizontal flipping of test-set images as a form of test-time augmentation, and we explore the ensemble mechanism over different visual backbones. Our ensemble method uses mean blending, i.e., averaging the outputs of the models; in the image track we ensemble three models, and in the video track two. Table 5. Ablation study on the model ensemble on AIGIQA-20K (Val Score): ConvNeXt-Small 0.898; + TTA 0.911 (+0.013); + ensemble with EfficientViT-L 0.915 (+0.017); + ensemble with EVA-02 0.916 (+0.018). Table 6. Ablation study on the model ensemble on T2VQA-DB (Val Score): ConvNeXt-Small 0.799; EfficientViT-L 0.803; ensemble of both 0.815. Mean blending leads to improvements in both track tasks (AIGC image QA and AIGC video QA), indicating that ensemble learning is an extremely effective strategy for the AIGC-QA task. 4.5. Discussion 4.5.1 Aspect Ratio Preservation Our approach directly resizes the original images for computational efficiency. This mechanism compromises the aesthetic information inherent in the native aspect ratio and may lose critical visual cues that contribute to the overall quality assessment. Future work should explore alternative preprocessing techniques that preserve the aspect ratio while maintaining computational efficiency, such as adaptive resizing or aspect-ratio-aware cropping, to ensure a more faithful representation of the image's visual content. 4.5.2 Spatial Information Preservation To adapt our model to both AIGC image and video quality assessment tasks, we utilize the embeddings extracted at stage 4 of the visual backbone (after global average pooling).
While streamlining the model and significantly improving inference speed, this decision leads to a loss of spatial information that could diminish performance in video quality evaluation. Future research should consider incorporating additional modules or techniques that capture spatial-temporal information to enhance the model's capability in video quality assessment. 4.5.3 Length Extrapolation for AIGC-Video QA The T2VQA-DB dataset [16] is composed of videos uniformly sampled to 16 frames, whereas the AIGIQA-20K dataset [15] can be regarded as a variant of T2VQA-DB restricted to a single frame. Our methodology has not been evaluated on video samples with more extensive frame sequences. This restriction could impede the model's generalizability to real-world contexts, where video lengths and frame rates vary significantly. Future research should assess the model's performance on datasets with a wide array of frame counts and temporal resolutions, thereby confirming its suitability for a more expansive spectrum of video content. 4.5.4 Computational Costs in Ensemble Method The ensemble methodology implemented in our study augments the robustness of the model's predictions, at the cost of higher inference latency and escalated training costs. To mitigate the increase in inference latency, we explore self-distillation techniques [55] that distill the predictions of an ensemble of models into a more compact model; however, this strategy incurs a slight diminution in performance. Future investigations should delve into more sophisticated ensembling techniques that reconcile computational efficiency with model precision, for example through model compression methods or the optimization of ensemble learning algorithms. 4.5.5 Competition Results We construct a universal strong baseline for the AIGC image and video quality assessment tasks through the design of a hybrid text encoder and feature adapter, an ensemble method utilizing multiple visual backbones, and a mild data augmentation strategy. This framework achieved notable success, occupying a top-three position in the image track and a top-four ranking in the video track of the NTIRE competition [56]." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2405.01573v1", |
| "title": "Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository", |
| "abstract": "LLMs have demonstrated significant potential in code generation tasks,\nachieving promising results at the function or statement level in various\nbenchmarks. However, the complexities associated with creating code artifacts\nlike classes, particularly within the context of real-world software\nrepositories, remain underexplored. Existing research often treats class-level\ngeneration as an isolated task, neglecting the intricate dependencies and\ninteractions that characterize real-world software development environments. To\naddress this gap, we introduce RepoClassBench, a benchmark designed to\nrigorously evaluate LLMs in generating complex, class-level code within\nreal-world repositories. RepoClassBench includes natural language to class\ngeneration tasks across Java and Python, from a selection of public\nrepositories. We ensure that each class in our dataset not only has cross-file\ndependencies within the repository but also includes corresponding test cases\nto verify its functionality. We find that current models struggle with the\nrealistic challenges posed by our benchmark, primarily due to their limited\nexposure to relevant repository contexts. To address this shortcoming, we\nintroduce Retrieve-Repotools-Reflect (RRR), a novel approach that equips LLMs\nwith static analysis tools to iteratively navigate & reason about\nrepository-level context in an agent-based framework. Our experiments\ndemonstrate that RRR significantly outperforms existing baselines on\nRepoClassBench, showcasing its effectiveness across programming languages and\nin various settings. Our findings emphasize the need for benchmarks that\nincorporate repository-level dependencies to more accurately reflect the\ncomplexities of software development. Our work illustrates the benefits of\nleveraging specialized tools to enhance LLMs understanding of repository\ncontext. We plan to make our dataset and evaluation harness public.", |
| "authors": "Ajinkya Deshpande, Anmol Agarwal, Shashank Shet, Arun Iyer, Aditya Kanade, Ramakrishna Bairi, Suresh Parthasarathy", |
| "published": "2024-04-22", |
| "updated": "2024-04-22", |
| "primary_cat": "cs.SE", |
| "cats": [ |
| "cs.SE", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "LLMs have demonstrated significant potential in code generation tasks,\nachieving promising results at the function or statement level in various\nbenchmarks. However, the complexities associated with creating code artifacts\nlike classes, particularly within the context of real-world software\nrepositories, remain underexplored. Existing research often treats class-level\ngeneration as an isolated task, neglecting the intricate dependencies and\ninteractions that characterize real-world software development environments. To\naddress this gap, we introduce RepoClassBench, a benchmark designed to\nrigorously evaluate LLMs in generating complex, class-level code within\nreal-world repositories. RepoClassBench includes natural language to class\ngeneration tasks across Java and Python, from a selection of public\nrepositories. We ensure that each class in our dataset not only has cross-file\ndependencies within the repository but also includes corresponding test cases\nto verify its functionality. We find that current models struggle with the\nrealistic challenges posed by our benchmark, primarily due to their limited\nexposure to relevant repository contexts. To address this shortcoming, we\nintroduce Retrieve-Repotools-Reflect (RRR), a novel approach that equips LLMs\nwith static analysis tools to iteratively navigate & reason about\nrepository-level context in an agent-based framework. Our experiments\ndemonstrate that RRR significantly outperforms existing baselines on\nRepoClassBench, showcasing its effectiveness across programming languages and\nin various settings. Our findings emphasize the need for benchmarks that\nincorporate repository-level dependencies to more accurately reflect the\ncomplexities of software development. Our work illustrates the benefits of\nleveraging specialized tools to enhance LLMs understanding of repository\ncontext. We plan to make our dataset and evaluation harness public.", |
| "main_content": "Introduction Using Large Language Models (LLMs) to generate code has garnered significant attention in recent years for its potential to streamline software development processes by automatically translating natural language descriptions into executable code snippets. Several code-specific models, like CodeGen (Nijkamp et al., 2023), WizardCoder (Luo et al., 2023), CodeLlama (Rozi` ere et al., 2024), StarCoder (Li et al., 2023), DeepSeekCoder (Guo et al., 2024) have been proposed to this end. While much of the focus in this domain has been on generating code units such as functions or statements, the specific task of generating classes has received comparatively less attention. Two of the most popular benchmarks HumanEval (Chen et al., 2021) and MBPP (Odena et al., 2021), for instance, focus on function generation. While useful, the problems in these datasets are short and standalone, and existing works have been able to show good 1 arXiv:2405.01573v1 [cs.SE] 22 Apr 2024 \fperformance on these benchmarks. LATS (Zhou et al., 2023) for instance reports a 94.4% accuracy on HumanEval, and 81.1% accuracy on MBPP. To address both of these issues, ClassEval (Du et al., 2023) proposes a benchmark for class generation. The 100 classes in the ClassEval dataset were handcrafted such that they contain inter-method dependencies, i.e. a method could reference another method in the same class. Using this dataset, they showed that, LLMs have a harder time generating code with these kind of dependencies than standalone functions of the kind present in HumanEval or MBPP. While an important contribution, the problems proposed in ClassEval are still standalone when taking the class as a single unit. The only dependencies from outside the class are from well known libraries that the LLM is likely to have memorized. This narrow focus overlooks the complex dependencies that classes may have on other components within a codebase, presenting a gap in our understanding of code generation techniques\u2019 practical applicability. A much more useful problem is to consider the generation of a new class that depends on code from across a repository. To address this gap, we make an attempt at creating a dataset to explore the task of generating classes within the context of code repositories, where classes may interact with other code entities within a larger codebase. Specifically, we collect 130 Java classes from 10 repositories and 97 Python classes from 10 repositories to create RepoClassBench. Each class is present in the context of a real-world repository and has dependencies from the repository. Additionally, we make sure that each class has corresponding test cases that pass on the ground truth, and ensure sufficient coverage. To be able to solve the problems in this dataset, the model has to both, understand the functionality required from each method in the class and reason about how to use repositorydependencies to achieve the same. We provide an evaluation of existing code-generation techniques in this setting, and demonstrate their poor performance. Specifically, BASICPROMPTING either hallucinates identifiers or avoids the dependencies, REFLEXION is able to reason about the error, but does not have enough context to fix it, and RAG-based approaches are able to find similar snippets from across the repo but fail to bring in other kinds of dependencies that are required by the class. 
Taking a step forward, we address the shortcomings of these methods by proposing a novel method called RRR, and show significant gains. Specifically, RRR leverages existing programming-language tools to retrieve precise information from across the repository. With pointed repository context injected through these tools, the model is able to fix the errors observed during the feedback-reflection stage. By bridging these gaps, our study contributes to a deeper understanding of LLMs' potential for generating classes in real-world coding scenarios, with implications for the development of more effective code generation techniques. Our contributions are three-fold: \u2022 We contribute RepoClassBench, the first benchmark for class-level code generation in the realistic environment of an existing repository, with 130 Java classes spanning 10 repositories and 97 Python classes spanning 10 repositories. \u2022 We propose a novel method called RRR that equips LLMs with static analysis tools to iteratively navigate and reason about repository-level context in an agent-based framework, and we compare it with existing methods. \u2022 We contribute 6 repository tools, based on our observations of common errors experienced by code agents in this setting. 2 Related Work Large Language Models have seen wide success on various coding tasks, and many benchmarks have been created to assess their performance. CoNaLa (Yin et al., 2018), consisting of 500 examples, is a statement-level benchmark where the target of each example contains one statement. HumanEval (Chen et al., 2021) and MBPP (Odena et al., 2021) are two widely used datasets for function-level code generation, consisting of 164 and 974 tasks respectively.
Figure 1: Flowchart illustrating the procedural framework of RRR. RRR utilizes the natural language description of the class and the outputs of independent tools to create an initial attempt. This attempt is evaluated by an oracle that pinpoints specific errors. Subsequently, RRR uses repository tools to gather information to rectify the errors, then reflects on the feedback and tool insights to refine the attempt. This iterative cycle persists until all test cases pass or the maximum allowed number of oracle calls is reached. (The figure walks through an example in which the initial generation references a non-existent NumberUtils class; a get_relevant_code query surfaces ExcelNumberUtils from the repository, and the reflection step rewrites the code to use it.) At the class level, ClassEval (Du et al., 2023) has been proposed, with 100 class-generation problems where the input is the class skeleton. However, these are all independent code-generation problems: although ClassEval includes inter-method dependencies, they are all present within the same class, and the external references come from well-known libraries that the LLM is likely to have memorized. In real-world repositories, code includes complex inter-dependencies on other files in the repository. RepoBench (Liu et al., 2023), CoderEval (Zhang et al., 2024), and MGD (Agrawal et al., 2023) are attempts to move closer to this setting, and they show that existing models perform much better in the standalone setting than in the non-standalone setting. However, they explore line- and function-level tasks in the context of a repository, whereas RepoClassBench explores the generation of non-standalone classes within the context of a repository. There are two aspects to solving our dataset: retrieving the right context, and reasoning to generate the code. Reasoning: To improve the generation of LLMs, various iterative refinement techniques have been proposed. Self-refine (Madaan et al., 2023) uses the LLM as its own critic to produce successively better outputs. Reflexion (Shinn et al., 2023) incorporates test-case feedback while generating the reflection on its output.
LATS (Zhou et al., 2023) uses the LLM as an agent to explore a tree of solutions, using compiler and test feedback as observations. Retrieval: While reasoning-enhanced methods may suffice for standalone generation, they are not sufficient when external context is needed, especially when that context consists of private data unseen during pretraining. Under this paradigm, Retrieval-Augmented Generation methods such as REALM (Guu et al., 2020), ATLAS (Izacard et al., 2022), RetGen (Zhang et al., 2021), and FLARE (Jiang et al., 2023) retrieve relevant context, usually by selecting the snippets with the highest similarity score to the query. Similarly, in the code setting, RLPG (Shrivastava et al., 2023) trains a model to predict the relevant context source, but it relies on there being a \u201chole\u201d in the code, whereas there is no such hole in the NL-to-new-class setting. Additionally, the RLPG model was trained for Java; for other languages, new models would need to be trained, adding the cost of constructing new training data and of the actual training of new models. RepoCoder (Zhang et al., 2023) has been proposed to perform iterative retrieval and generation. While such similarity-based RAG methods can retrieve \u201csimilar\u201d context, they fail to effectively retrieve \u201cdependency\u201d context; further discussion can be found in RQ2. Figure 2: The dataset creation pipeline involved shortlisting candidate repositories, noting passing test cases, finding classes covered by passing test cases (which make external references), and finally mitigating memorization issues, if necessary, using paraphrasing. In our method, we leverage repository-level tools that let the LLM explore the repository as an alternative retrieval mechanism, in addition to using test-case feedback. This is along the lines of several works that equip the LLM with tools, such as ReACT (Yao et al., 2023) and ToolFormer (Schick et al., 2023); to our knowledge, however, this is the first work that curates tools specifically for repository exploration. Hence, we propose a benchmark that addresses the problem of class generation in the context of a repository, filling a gap in the span of existing benchmarks, and we also propose a novel method that integrates retrieval and reasoning, mitigating the shortcomings of existing methods. 3 Dataset: RepoClassBench RepoClassBench is a benchmark featuring GitHub repositories across two languages: Java and Python. The task is to synthesize a complete class within a repository from a natural language description, utilizing context from other files in the same repository. Current benchmarks face two primary limitations: (1) they (Du et al., 2023) typically focus on generating small, localized code snippets, which do not accurately represent the complex tasks software engineers encounter, often requiring a comprehensive understanding of the entire codebase; and (2) they (Liu et al., 2023) rely on metrics such as exact match or cosine similarity to the ground truth, rather than assessing the functionality of the generated code through test cases. We mitigate these issues by designing a benchmark in which every task is a class-generation problem where the LLM must synthesize the class from its natural language specification. We ensure that every class in our benchmark makes use of external references in the repository and is covered by test cases.
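For concreteness, one way to represent such a task is sketched below (our own illustrative Python schema, not the benchmark's released format, which may differ):

    from dataclasses import dataclass, field

    @dataclass
    class ClassGenTask:
        # Hypothetical record for one RepoClassBench-style task.
        repo_path: str                 # root of the repository the class belongs to
        class_name: str                # name of the class to synthesize
        nl_description: str            # natural language specification of the class
        test_commands: list = field(default_factory=list)  # tests covering the class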
3.1 Benchmark Construction Stage 1, Shortlisting repositories: Our benchmark includes repositories from both before and after the training cutoff date of the models we evaluate. For Java, we start with the repositories considered in the MGD (Agrawal et al., 2023) dataset. For Python, we adapt the popular benchmark SWEBench (Jimenez et al., 2024) and also shortlist popular repositories first created on GitHub after September 2021. We filter out repositories that we are unable to build and run (details in E.1.1). Stage 2, Shortlisting classes: Within each repository, we identify all classes that pass the existing test cases. We retain only those classes that (a) reference other parts of the repository within their body, and (b) have methods covered by test cases. To accommodate the context length limitations of large language models (LLMs), we exclude classes whose implementations exceed 3,000 tokens (excluding docstrings); additionally, we limit our selection to classes defined in the global namespace (details in E.1.2). Stage 3, Dataset paraphrasing: For repositories available before the LLMs' training data cutoff, we undertake a paraphrasing initiative, altering the names of most symbols to prevent models from completing tasks through mere memorization (details in E.1.3). Stage 4, Generating natural language specifications: We break the information within each class into varying levels of granularity and record it as metadata; the complete metadata fields are listed in Table E.1.3. Methods are categorized by three information levels: (1) Signature, detailing input and output types; (2) Docstring, providing a high-level description of the function; and (3) Body, outlining the full implementation and logic, including external references. We prompt GPT-4 to generate the natural language description of each class by providing it varying granularities of information extracted as subsets of the metadata (refer to Table E.1.3). The two types of natural language description in our dataset are therefore: 1. DETAILED, which includes details from the entire class body (excluding imports); and 2. SKETCHY, which omits the method bodies, leading GPT-4 to generate an NL description without low-level implementation specifics or explicit external references. Since GPT-4 does not receive the method bodies in the SKETCHY setting, the resulting descriptions lack detailed implementation specifics and explicit mentions of the external references used in the methods' development; consequently, SKETCHY descriptions present a higher level of difficulty than the DETAILED versions. To foster community engagement and further research, we make the metadata used to construct these prompts publicly available, allowing others to create NL descriptions with varying degrees of specificity and ambiguity to challenge models' capabilities. An example of the difference in the prompts to GPT-4 can be found in Prompt 1. Statistics about our dataset can be found in Table 1, and the distribution of tasks across repositories in Figures 3 and 4.
Table 1: Dataset high-level statistics. Each row is an average over all tasks in the dataset; cells with a / give <number of characters> / <number of tokens using the GPT-3.5 tokenizer>. (TC = test cases, funcs = functions, Ext. Refs = references to other files in the repository.) Num. of tasks: Java 130, Python 97. Length of DETAILED NL description: Java 1475.98 / 286.89, Python 3245.23 / 771.77. Length of SKETCHY NL description: Java 1481.69 / 269.81, Python 2633.20 / 607.64. Length of classes: Java 2080 / 452.69, Python 4663.76 / 1070.49. Num. of TCs directly covering the classes: Java 5.48, Python 42.94. Num. of unique Ext. Refs: Java 3.51, Python 7.06. Num. of funcs in the class: Java 3.1, Python 9.29. Num. of funcs covered in at least one TC: Java 2.85, Python 4.84. Num. of funcs making at least one Ext. Ref: Java 2.28, Python 4.84. 4 Method To address the challenges presented by our benchmark, we propose Retrieve-Repotools-Reflect (RRR), an innovative method that enhances Large Language Models (LLMs) with static analysis tools. This approach enables the LLMs to iteratively explore and understand the context of a code repository through an agent-based framework. RRR leverages repository navigation and reasoning capabilities to effectively synthesize code that aligns with the broader structure and dependencies of the repository. 4.1 Phases of RRR The procedural framework of RRR is illustrated visually in Figure 1 and outlined algorithmically in Algorithm 1. During the initial generation phase, the LLM M makes an initial \u201cguess\u201d y1 based on the class description x and the output of invoking the independent tool t0: y1 = M(x, t0). Given the limited information available at this stage, the LLM may resort to hallucinating identifiers and other code structures (prompt in G). The oracle call passes the generated code yi to the oracle O, which returns the oracle feedback fbi = O(yi). If the attempt exceeds the maximum number of oracle calls or successfully passes all test cases, the loop terminates and returns yi; otherwise, the oracle feedback errors fbi are utilized by the LLM agent in subsequent phases to refine its generation. While the oracle feedback identifies problems in the code, it lacks guidance on error resolution. To address this, the LLM requires repository context, provided through carefully curated tools that allow the LLM to explore the repository and retrieve relevant information. Based on the class description x, the current generation yi, and the feedback fbi, the model generates a set of tool calls Ti = M(x, yi, fbi), and the executor E runs these tool calls to produce the outputs ti = E(Ti) (prompt in G). Based on the oracle feedback fbi and the tool outputs ti, the LLM generates a reflection ri = M(x, yi, fbi, ti) on the encountered errors and the actions necessary to rectify them, using hints from the dependent tools' outputs; this reflection serves as a hint for the subsequent stage (prompt in G). In the improved generation phase, leveraging the last attempt yi, the oracle feedback fbi, the tool outputs ti, and the reflection ri, the LLM makes another attempt at code generation: yi+1 = M(x, yi, fbi, ti, ri) (prompt in G). After the improved generation, the attempt is passed back to the oracle-call phase, and the loop continues. 4.2 Tools In RRR, tools are categorized as either independent or dependent based on their need for reasoning. Independent tools operate without considering the current state of the RRR loop and are invoked automatically during the initial generation phase; our suite includes a single independent tool, get_related_snippets. Tools that require reasoning over the current state of the RRR loop are classified as dependent tools; our dependent toolset contains get_imports, get_class_info, get_signature, get_method_body, and get_relevant_code. More information about the tools can be found in Table 2.
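Putting the phases together, the RRR loop of Section 4.1 can be summarized in illustrative Python pseudocode (our sketch of Algorithm 1; llm, oracle, and execute_tools are hypothetical stand-ins for the model M, the oracle O, and the executor E):

    def rrr_loop(x, llm, oracle, execute_tools, independent_tool_calls, max_oracle_calls=10):
        # Initial generation from the class description x and the independent tools.
        t0 = execute_tools(independent_tool_calls)      # e.g. get_related_snippets
        y = llm.generate_class(x, t0)                   # y_1 = M(x, t_0)
        for _ in range(max_oracle_calls):
            fb = oracle(y)                              # build errors / failing test cases
            if fb.all_tests_pass:
                return y
            tool_calls = llm.pick_tools(x, y, fb)       # T_i = M(x, y_i, fb_i)
            t = execute_tools(tool_calls)               # t_i = E(T_i)
            r = llm.reflect(x, y, fb, t)                # r_i = M(x, y_i, fb_i, t_i)
            y = llm.improve(x, y, fb, t, r)             # y_{i+1} = M(x, y_i, fb_i, t_i, r_i)
        return y                                        # best attempt within the call budget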
5 Experimental Results 5.1 Baselines Apart from RRR, we test other important baselines (summarized in Table 8) on our newly constructed benchmark. In BASICPROMPTING, the LLM is expected to generate code solely from the natural language description. In NAIVERAG, the inputs include the natural language description and the top snippets retrieved from the repository when queried with that description. REFLEXION incorporates oracle feedback to iteratively improve the generation. We also use REPOCODER, where the initial generation uses snippets retrieved with the natural language description as the query, and subsequent iterations use snippets retrieved with the previous code generation as the query. Table 2: Descriptions of the tools used in RRR; the type indicates whether reasoning is required (dependent) or not (independent) for the invocation. get_related_snippets (independent): segments the repository into snippets and returns the top 5 snippets by cosine similarity with the class description. get_imports (dependent): suggests imports for all the undefined symbols in the current generation, scanning the repository for potential source files defining each symbol and recommending import statements; no input arguments. get_class_info (dependent): locates the class definition in the repository and gathers detailed information about its members, including inherited members; input: class name. get_signature (dependent): returns the signature of the requested method, displaying the signatures of all same-named methods if several exist in the same class; inputs: class name, method name. get_method_body (dependent): returns the definition of the requested method, truncating the output if it is too large and showing the definition of each same-named method if several exist; inputs: class name, method name (where class name is the class of which the method is a member, and is left as None for global methods). get_relevant_code (dependent): retrieves code structures for a specific query via embedding similarity, returning the top 3 structures by cosine similarity using UnixCoder embeddings; input: natural language query. 5.2 Metrics For each task in our benchmark, we use three metrics to measure performance. Pass@K measures the percentage of tasks for which there is at least one correct solution (passing all test cases) among the top K samples generated by the LLM (Chen et al., 2021). For our experiments, we simply set the total number of samples n generated by the LLM to 1 and calculate Pass@1; for completeness, in RQ7 we also measure Pass@1, 2, 3, setting n=6 for the Java dataset. We also use TR (test rate), which measures the mean, over all tasks, of the fraction of test cases passed by the generations. Finally, for Java, since we have access to a compiler, we also measure CR, the compilation rate, i.e., the percentage of tasks for which the LLM generated code that compiled successfully.
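For reference, the unbiased pass@k estimator of Chen et al. (2021), which Pass@K follows, and the TR metric can be computed as follows (a sketch; the argument names are our own):

    from math import comb

    def pass_at_k(n, c, k):
        # n samples per task, c of them correct; probability that at least one
        # of k drawn samples is correct (Chen et al., 2021).
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    def test_rate(passed_per_task, total_per_task):
        # Mean fraction of passing test cases across tasks (the TR metric).
        fractions = [p / t for p, t in zip(passed_per_task, total_per_task)]
        return sum(fractions) / len(fractions)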
5.3 Research Questions Through our experiments we aim to answer the following RQs (RQs 5-8 are in the Appendix): RQ1: How does RRR perform compared to the baselines under the DETAILED and SKETCHY settings? RQ2: Where do similarity-based retrieval methods fail? RQ3: What is the impact of test feedback on performance? RQ4: What are the challenges faced by RRR? RQ5: How important is each tool for our method? RQ6: How does the number of iterations in RRR and the baselines impact their performance? RQ7: How does increased sampling impact the performance of RRR and the baselines? RQ8: Does performance depend on whether the repository might have been included in the training data of the LLM? 5.3.1 RQ1: Comparative analysis of RRR and baselines Tables 3 and 4 compare RRR's performance with the baselines; to explore the use of different LLMs, GPT-3.5 was used for Java and GPT-4 for Python. Table 3: Performance (in percent) of the baselines and RRR on the DETAILED version of the dataset; P@1 is the Pass@(1,1) metric, TR the mean test-pass rate across tasks, and CR the mean compilation rate across tasks. Java (P@1 / TR / CR) and Python (P@1 / TR): BASICPROMPTING 1.54 / 1.54 / 2.31 and 1.03 / 2.40; REFLEXION 3.85 / 5.04 / 5.38 and 7.22 / 14.36; NAIVERAG 11.54 / 12.15 / 14.62 and 13.40 / 14.08; REPOCODER 40.77 / 43.38 / 46.92 and 22.68 / 25.59; RRR 54.62 / 63.22 / 70.77 and 27.84 / 36.92. Table 4: Performance (in percent) on the SKETCHY version of the dataset, same layout as Table 3: BASICPROMPTING 1.54 / 1.54 / 2.31 and 0.00 / 1.43; REFLEXION 2.31 / 3.04 / 5.38 and 0.00 / 0.24; NAIVERAG 8.46 / 8.46 / 10.00 and 0.00 / 13.38; REPOCODER 34.62 / 39.17 / 44.62 and 7.14 / 10.06; RRR 48.46 / 54.72 / 64.62 and 7.14 / 21.89. RRR consistently outperforms the baselines across all metrics. BASICPROMPTING, with neither feedback nor context, performs worst, with hardly any generations passing test cases. REFLEXION improves slightly with oracle feedback but lacks repository context, resorting to hallucinated identifiers and limited repository utilization. To add repository context, one might consider dumping the entire repository into the prompt; however, the token count of the Java and Python repositories can exceed 50k, surpassing LLM context windows, so this is impractical. Methods that employ retrieval tackle this issue: there is a noticeable performance jump from REFLEXION to NAIVERAG, improved further by REPOCODER thanks to more relevant retrieved snippets. While REPOCODER is the best-performing baseline, it has two major drawbacks: first, oracle feedback is not used; second, REPOCODER's snippets retrieve \u201csimilar\u201d lines of code from the repository rather than dependencies, thereby missing crucial information (RQ2 explores this second point in greater detail). Conversely, RRR retrieves dependency context, combining repository context and oracle feedback intelligently: it queries for specific repository information to address the oracle feedback, and it consistently outperforms the baselines across languages and metrics. Still, there are cases where RRR fails test cases, which we analyze in RQ4. 5.3.2 RQ2: The contribution of similarity-based RAG In this benchmark, the repositories, typical of those on GitHub, contain numerous highly similar classes, and RAG-based techniques excel over BASICPROMPTING or REFLEXION because they leverage these similarities. However, there is a crucial distinction between \u201cdependency context\u201d and \u201csimilarity context\u201d: dependency context is information from the repository about the code structures a class actually uses, while similarity context merely seeks similar code, which may not always be present.
To illustrate that REPOCODER's gains largely stem from "similar" snippets, we remove all relatives of each class to be generated, where the relatives of a class are defined as the descendants of its grandparent, excluding the class itself and its immediate parent. These relatives, often similar to the target class, are pulled in through REPOCODER snippets. Upon re-comparison with the baselines (see Table 5), REPOCODER's performance declines notably in both the DETAILED and SKETCHY settings, whereas RRR suffers far less, indicating its reliance on dependency context to complete generations. A sketch of the relatives-removal procedure follows the table.

Method           JAVA DETAILED           JAVA SKETCHY
                 P@1    TR     CR        P@1    TR     CR
BASICPROMPTING   0.77   0.77   0.77      1.54   1.54   3.85
REFLEXION        2.31   2.88   3.85      1.54   2.36   4.62
NAIVERAG         8.46   8.46   10.00     4.62   6.60   8.46
REPOCODER        23.85  24.42  26.15     16.92  23.92  31.54
RRR              46.92  53.23  60.00     36.92  43.86  51.54

Table 5: Performance numbers, expressed in percentages, for the baselines and RRR after removing the "relatives" from the DETAILED and SKETCHY versions of the Java dataset. While all retrieval-based methods suffer, RRR does not suffer as much as REPOCODER.
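The relatives-removal procedure can be stated directly over the class hierarchy. The following is a minimal sketch of the definition above (descendants of the grandparent, excluding the class itself and its immediate parent); the parent map and the example hierarchy are hypothetical.

```python
def descendants(cls: str, children: dict[str, list[str]]) -> set[str]:
    """All classes transitively derived from cls (excluding cls itself)."""
    out, stack = set(), list(children.get(cls, []))
    while stack:
        c = stack.pop()
        if c not in out:
            out.add(c)
            stack.extend(children.get(c, []))
    return out

def relatives(target: str, parent: dict[str, str]) -> set[str]:
    """Relatives of target: descendants of its grandparent, excluding
    the target itself and its immediate parent."""
    children: dict[str, list[str]] = {}
    for child, par in parent.items():
        children.setdefault(par, []).append(child)
    grandparent = parent.get(parent.get(target, ""))
    if grandparent is None:
        return set()
    return descendants(grandparent, children) - {target, parent[target]}

# Example hierarchy: Base -> {A, B}; A -> {Target, Sibling}; B -> {Cousin}
parent = {"A": "Base", "B": "Base", "Target": "A",
          "Sibling": "A", "Cousin": "B"}
print(relatives("Target", parent))  # {'B', 'Sibling', 'Cousin'} (order varies)
```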
5.3.3 RQ3: Importance of test feedback

To examine the role of test feedback, we restrict the oracle to compiler feedback, which is applicable only for JAVA. Baselines such as BASICPROMPTING, NAIVERAG, and REPOCODER do not use oracle feedback and therefore remain unchanged. Methods that use test feedback show a slight decrease in performance (Table 6) but still perform adequately: code that compiles and aligns with the functional description tends to pass the test cases, as these typically assess functional requirements. While test feedback helps in ambiguous cases, the LLM generally performs well with compiler feedback alone.

Method           JAVA DETAILED           JAVA SKETCHY
                 P@1    TR     CR        P@1    TR     CR
BASICPROMPTING   1.54   1.54   1.67      1.67   1.73   2.69
REFLEXION        2.69   3.36   5.38      3.08   3.78   6.92
NAIVERAG         11.41  11.92  13.33     8.97   9.61   11.28
REPOCODER        37.05  40.12  45.00     29.74  36.77  44.62
RRR              56.15  62.32  71.92     41.67  51.76  63.46

Table 6: Performance numbers, expressed in percentages, for the baselines and RRR when generation terminates immediately after compilation succeeds, on the DETAILED and SKETCHY versions of the Java dataset. There is only a marginal decrease in performance, indicating that most functional requirements can be met simply by using the compiler as the oracle.

5.3.4 RQ4: Success and failure case analysis

This section investigates instances where the LLM failed to pass test cases and identifies the contributing factors. Notably, errors were not due to limited information access through the tools; there was always a tool available to retrieve the needed repository information. Our analysis therefore focuses on categorizing error types to guide future mitigation strategies. Two distinct error patterns emerged: reasoning errors and functional-ambiguity errors. Reasoning errors occur during tool retrieval or code generation, when the LLM fails to interpret or apply information correctly. Functional-ambiguity errors arise when the LLM misinterprets terse natural language descriptions that admit multiple interpretations or omit information.

Language         Reasoning Errors   Functional Ambiguity
JAVA DETAILED    70%                30%
JAVA SKETCHY     50%                50%

Table 7: Failure causes across a sample of 20 tasks from the Java dataset. Errors are categorized as reasoning-related (in tool retrieval or code generation) or related to functional ambiguity; the table shows the percentage contribution of each error type. In the DETAILED dataset reasoning errors dominate, while in the SKETCHY version functional-ambiguity errors increase.

Table 7, a qualitative analysis of 20 failure cases, shows that reasoning errors dominate in the DETAILED setting, while functional ambiguity increases in the SKETCHY setting. Additionally, the LLM struggles with lengthy textual inputs: over the DETAILED JAVA dataset, test performance and class length had a Spearman correlation of -0.66, highlighting the challenge of reasoning over extensive texts. Identifying these failure cases sheds light on the dataset's role in understanding LLM capabilities and limitations. By pinpointing error patterns and correlating them with variables such as class length, our analysis sets the stage for future research on enhancing language model robustness and efficacy.

6 Discussion

RepoClassBench provides a previously underexplored setting, with unique challenges that require reasoning over the repository. We further showed that previous methods relying on similarity-based retrieval have drawbacks in both applicability and effectiveness. To address this, we proposed using tools to retrieve repository information, combining traditional embedding-based retrieval (through the get related snippets and get relevant code tools) with static-analysis tools. Through an iterative paradigm of refinement based on tool outputs and oracle feedback, we showed that RRR performs well; a schematic sketch of this loop follows.
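The sketch below is a schematic reconstruction of that iterative paradigm, not the actual implementation: draft, run_oracle, and choose_tools are hypothetical stand-ins for the LLM call, the compiler/test oracle, and the LLM's tool-selection step, and only the tool names mirror Table 2.

```python
from typing import Callable

def rrr_loop(description: str,
             draft: Callable[[str, str, str], str],
             run_oracle: Callable[[str], tuple[bool, str]],
             choose_tools: Callable[[str], list[tuple[str, tuple]]],
             tools: dict[str, Callable],
             max_iters: int = 5) -> str:
    """Schematic RRR-style refinement loop: draft code, check it
    against the oracle, and on failure let the LLM pick tools that
    fetch the missing repository context."""
    # The independent tool needs no reasoning and seeds the first prompt.
    context = tools["get_related_snippets"](description)
    code = draft(description, context, "")
    for _ in range(max_iters):
        ok, feedback = run_oracle(code)  # compile and/or run test cases
        if ok:
            break
        # Dependent tools take arguments the LLM derives from feedback,
        # e.g. ("get_signature", ("Account", "deposit")).
        for name, args in choose_tools(feedback):
            context += "\n" + tools[name](*args)
        code = draft(description, context, feedback)
    return code
```
" |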
| } |
| ] |
| } |