How does Alignment Enhance LLMs' Multilingual Capabilities? A Language Neurons Perspective

Shimao Zhang1†*, Zhejian Lai1*, Xiang Liu1*, Shuaijie She1, Xiao Liu2, Yeyun Gong2, Shujian Huang1‡, Jiajun Chen1
1National Key Laboratory for Novel Software Technology, Nanjing University
2Microsoft Research Asia
{smzhang,laizj,liuxiang,shesj}@smail.nju.edu.cn
{xiao.liu.msrasia,yegong}@microsoft.com
{huangsj,chenjj}@nju.edu.cn

Abstract

Multilingual alignment is an effective and representative paradigm for enhancing LLMs' multilingual capabilities, which transfers capabilities from high-resource languages to low-resource languages. Meanwhile, research on language-specific neurons reveals that there are neurons that are selectively activated in LLMs when processing different languages. This provides a new perspective for analyzing and understanding LLMs' mechanisms more specifically in multilingual scenarios. In this work, we propose a new, finer-grained neuron identification algorithm, which detects language neurons (including language-specific neurons and language-related neurons) and language-agnostic neurons. Furthermore, based on the distributional characteristics of different types of neurons, we divide LLMs' internal process for multilingual inference into four parts: (1) multilingual understanding, (2) shared semantic space reasoning, (3) multilingual output space transformation, and (4) vocabulary space outputting. Additionally, we systematically analyze the models before and after alignment with a focus on different types of neurons. We also analyze the phenomenon of "Spontaneous Multilingual Alignment".
Overall, our work conducts a comprehensive investigation based on different types of neurons, providing empirical results and valuable insights for better understanding multilingual alignment and the multilingual capabilities of LLMs.1

1 Introduction

By training on extensive corpora, large language models (LLMs) demonstrate outstanding language capabilities [Grattafiori et al., 2024, Yang et al., 2024, Liu et al., 2024, Zhang et al., 2025]. However, due to the unbalanced pretraining corpus across different languages, LLMs perform very unevenly on high-resource and low-resource languages [Huang et al., 2023, Zhu et al., 2023, Zhang et al., 2024]. Therefore, researchers have conducted comprehensive explorations to further enhance the multilingual performance of LLMs. A straightforward approach is to increase the proportion of non-English text during pretraining [Ni et al., 2021, Yang et al., 2024] or to perform continual pretraining with multilingual text [Liu et al., 2021, Ji et al., 2024]. But these approaches often entail high computational costs and substantial amounts of multilingual data.

*Equal contribution. †Work done during his internship at MSRA. ‡Corresponding author.
1The project will be available at https://github.com/NJUNLP/Language-Neurons-Alignment.
Preprint. Under review. arXiv:2505.21505v1 [cs.CL] 27 May 2025

Considering LLMs' strong performance on high-resource languages, multilingual alignment has emerged as a representative paradigm for enhancing multilingual capabilities by transferring knowledge from high-resource to low-resource languages [Zhao et al., 2024a, She et al., 2024]. A representative example is MAPO [She et al., 2024], which improves multilingual alignment by utilizing a well-trained multilingual translation model to compute alignment scores based on the conditional generation probability of translating non-English responses into English.
[Figure 1: A neuron's activation probability across different languages. A neuron that exhibits high activation probabilities across multiple, but not all, languages cannot be correctly categorized under the existing methodology.]

Many studies conduct systematic mechanism analyses of multilingual alignment and LLMs' multilingual capabilities. Zhao et al. [2024b] split the multilingual processing workflow
into three parts: multilingual understanding, resolving tasks, and generating outputs in the target language. This three-stage inference workflow clearly demonstrates how LLMs leverage English as a pivot language to handle multilingualism using a unified pattern. Inspired by the neurobiological underpinnings of human language faculties, Tang et al. [2024] conduct a fine-grained identification by detecting language-specific neurons. Their results indicate that LLMs predominantly utilize a small subset of neurons to process a particular target language. Furthermore, these language-specific neurons are primarily situated in the model's top and bottom layers [Tang et al., 2024], which is consistent with the three-stage multilingual workflow of Zhao et al. [2024b]. However, we notice a key limitation in the existing language-specific neuron identification methodology: some neurons are shared across multiple languages but are not entirely language-agnostic. Such neurons are incorrectly categorized as either language-specific or language-agnostic neurons under the existing framework. We present a case study in Figure 1.

Furthermore, we aim to systematically explore an important question: Can we better analyze and understand how multilingual alignment enhances LLMs' multilingual capabilities from the perspective of language neurons?

In this work, we comprehensively investigate the multilingual alignment of LLMs from the perspective of language neurons, deploying MAPO as a representative multilingual alignment algorithm. Considering the above limitation of the existing language-specific neuron identification methodology, we define language neurons as the union of language-specific neurons and language-related neurons, as opposed to language-agnostic neurons. We separately distinguish language-related neurons from both language-specific and language-agnostic neurons, which allows for more precise analyses.
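The taxonomy above can be made concrete with a small sketch. Here a neuron is classed by how many of the l languages activate it above a threshold; the threshold value and function name are illustrative assumptions, not the paper's implementation:

```python
def classify_neuron(activation_probs, tau=0.5):
    """Classify one neuron by the number of languages N that activate it
    above threshold tau: N == 1 -> language-specific,
    1 < N < l -> language-related, N == l -> language-agnostic."""
    l = len(activation_probs)
    n = sum(p > tau for p in activation_probs)  # N: languages activating it
    if n == l:
        return "language-agnostic"
    if n == 1:
        return "language-specific"
    if 1 < n < l:
        return "language-related"
    return "inactive"  # activated by no language above tau

print(classify_neuron([0.9, 0.1, 0.1, 0.1]))  # active for one language only
print(classify_neuron([0.8, 0.7, 0.1, 0.1]))  # shared by some languages
print(classify_neuron([0.9, 0.8, 0.9, 0.7]))  # shared by all languages
```

Under this view, "language neurons" are exactly the union of the first two classes, which is the distinction the existing two-way categorization cannot express.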
Furthermore, we propose a new language neuron identification algorithm, which is able to identify language-related neurons that are in fact shared across multiple languages. Then we analyze the models before and after alignment, focusing on the changes in different types of neurons. Based on the distributional characteristics of these neurons, we divide LLMs' internal process for multilingual inference into four parts: (1) multilingual understanding, (2) shared semantic space reasoning, (3) multilingual output space transformation, and (4) vocabulary space outputting. Our findings reveal that different parts exhibit distinct dependencies on different types of neurons, and that multilingual alignment significantly enhances the activation of the corresponding types of neurons across the relevant layers. Additionally, we analyze the important "Spontaneous Multilingual Alignment" [Zhang et al., 2024] phenomenon in LLMs, providing insights into the roles of language-agnostic neurons and language-related neurons shared across languages. For further analysis, we also provide observations about the uniqueness of English and the neuron distributions. Overall, based on different types of neurons, we present empirical results and valuable insights that contribute to a deeper understanding of multilingual alignment and the multilingual capabilities of LLMs.

2 Related Work

2.1 Multilingual Alignment

Conducting pretraining or continual pretraining on a multilingual corpus is a straightforward and effective method to enhance LLMs' multilingual capabilities [Ni et al., 2021, Ji et al., 2024]. However, these methods typically require substantial investments in time, data, and computational resources. Thus, many researchers perform multilingual alignment to improve
LLMs' multilingual performance by transferring capabilities from high-resource languages to low-resource languages [Eronen et al., 2023, Zhao et al., 2024c,a, She et al., 2024], which efficiently and effectively improves model performance in low-resource language scenarios. Furthermore, Zhang et al. [2024] first identify the "Spontaneous Multilingual Alignment" phenomenon in LLMs: conducting multilingual alignment on a small number of languages effectively improves the alignment even between English and many languages unseen during alignment.

2.2 Mechanistic Interpretability

Beyond enhancing LLMs' multilingual performance, research on the underlying mechanisms of multilingual capabilities in LLMs is ongoing; such research is crucial for understanding and explaining LLMs and related methods explicitly. Typically, existing approaches perform mechanistic interpretability analyses by observing the internal states of the model [Nostalgebraist, 2020, Zhang et al., 2024, Zhao et al., 2024b, Mousi et al., 2024]. Overall, neuron states and latent intermediate logits are both important objects of observation. For latent logits, Wendler et al. [2024] utilize the logit lens [Nostalgebraist, 2020] to directly project the logits in the intermediate layers to the vocabulary space, which reveals the latent participation of English in the intermediate layers. For neuron states, Hu et al. [2024] analyze neuron activation overlap to measure the extent of shared neuron activation across different languages.

2.3 Language-Specific Neurons

Many studies have revealed the language-related and language-agnostic components in LLMs. At the layer level, the multilingual processing of LLMs is considered to involve three stages [Zhao et al., 2024b, Wendler et al., 2024]: converting multilingual inputs into a shared semantic space, intermediate-layer reasoning, and outputting in the target language.
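The logit-lens technique mentioned in §2.2 can be illustrated with a toy sketch: an intermediate hidden state is projected directly through the unembedding matrix to obtain vocabulary-space logits. All sizes and matrices below are made-up assumptions, not taken from any real model:

```python
import numpy as np

# Toy "logit lens": peek at vocabulary-space logits mid-network by applying
# the unembedding matrix to an intermediate hidden state.
rng = np.random.default_rng(0)
d_model, vocab_size = 8, 5

W_U = rng.normal(size=(d_model, vocab_size))  # unembedding matrix (made up)
hidden = rng.normal(size=(d_model,))          # hidden state at some layer

logits = hidden @ W_U                          # project to vocabulary space
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax over the vocabulary

top_token = int(np.argmax(probs))              # most likely token at this layer
print(top_token, probs.shape)
```

Applying this projection at every layer of a real model is what reveals, e.g., English tokens surfacing in intermediate layers even for non-English prompts.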
The top and bottom layers of the model handle multilingual processing, while the intermediate layers perform inference in similar patterns across different languages. This demonstrates a distinct division of labor within the model at the layer level regarding language specificity.

Furthermore, many studies investigate finer-grained methods for language-specific neuron identification [Kojima et al., 2024, Tang et al., 2024]. Tang et al. [2024] categorize activated neurons into language-specific neurons and language-agnostic neurons. They detect language-specific neurons by calculating language activation probability entropy on massive text. However, we find that some neurons are activated by multiple languages (i.e., not language-specific), yet are not universally activated across all languages (i.e., not language-agnostic). Simply categorizing activated neurons into two classes blurs this distinction. Thus, we propose a new method that categorizes activated neurons into three types: language-specific, language-related, and language-agnostic.

3 Methodology

In this section, we introduce our overall analysis pipeline. First, we review the existing multilingual alignment algorithm and neuron analysis techniques as a preliminary study in §3.1. Then we introduce the multilingual alignment algorithm we utilize in §3.2. Finally, for mechanistic interpretability analysis, we introduce our new method for detecting language-specific, language-related, and language-agnostic neurons (§3.3).

3.1 Preliminary Study

Most LLMs are pretrained mainly on high-resource language corpora, which leads to unstable and unbalanced performance in multilingual scenarios. As a representative multilingual alignment algorithm, Multilingual-Alignment-as-Preference Optimization (MAPO) [She
et al., 2024] effectively and efficiently improves LLMs' multilingual performance. Additionally, it is also important to understand and analyze the mechanism behind LLMs' multilingual capabilities and multilingual alignment. Moreover, some studies focus on the identification of language-specific and language-agnostic neurons in LLMs [Tang et al., 2024, Kojima et al., 2024]. It has been found that LLMs' capability to process a particular language mainly comes from a small subset of neurons [Tang et al., 2024]. However, many important questions still await further investigation. On the one hand, many methods overlook neurons that are activated by multiple languages yet not language-agnostic, namely language-related neurons that lie between the language-specific and language-agnostic categories. On the other hand, research from the perspective of language neurons on the underlying mechanisms of LLMs' multilingual alignment and multilingual capabilities remains quite limited, although such research is essential for better understanding and improving the multilingual performance of LLMs.

3.2 Multilingual Alignment

MAPO is a typical multilingual alignment algorithm that aligns the reasoning capabilities of non-English responses with those of English, which serves as the pivot language. Specifically, for a given query X in a target (non-English) language and its corresponding English variant X_Eng, we collect their respective responses Y and Y_Eng. An off-the-shelf translation model, parameterized by θ, is deployed to estimate the conditional generation probability P(Y | Y_Eng; θ) by force-decoding Y conditioned on Y_Eng. A higher conditional probability is interpreted as stronger alignment between the target-language response and its English counterpart. This probability is then used as an alignment score, denoted r_θ(X, Y). This alignment score can be integrated into preference optimization algorithms.
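One simple way to turn such alignment scores into preference data is to rank sampled responses and pair them so that the higher-scored response is preferred. The helper below is a hypothetical sketch, not the paper's code; the responses and scores are made up:

```python
from itertools import combinations

def build_preference_pairs(responses, scores):
    """Form (Y_w, Y_l) pairs from n responses: the response with the higher
    alignment score is treated as the preferred one. Ties are skipped."""
    pairs = []
    for (ya, ra), (yb, rb) in combinations(zip(responses, scores), 2):
        if ra == rb:
            continue  # no preference signal for equal scores
        winner, loser = (ya, yb) if ra > rb else (yb, ya)
        pairs.append((winner, loser))
    return pairs

# 3 sampled responses with hypothetical alignment scores
pairs = build_preference_pairs(["y1", "y2", "y3"], [0.9, 0.4, 0.7])
print(pairs)
```

Each of the n = 3 responses is compared with every other, yielding 3-choose-2 = 3 ordered pairs, which is the shape of data a DPO-style objective consumes.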
For instance, in PPO [Schulman et al., 2017], r_θ(X, Y) can be directly employed as the reward score. In DPO [Rafailov et al., 2023], for each target language, n distinct outputs are generated. Based on the alignment scores, these n outputs are used to form \binom{n}{2} preference pairs (Y_w, Y_l), where Y_w is deemed superior to Y_l due to a higher alignment score. The model is then optimized by Eq. 1:

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(X, Y_w, Y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(Y_w \mid X)}{\pi_{\mathrm{ref}}(Y_w \mid X)} - \beta \log \frac{\pi_\theta(Y_l \mid X)}{\pi_{\mathrm{ref}}(Y_l \mid X)} \right) \right]  (1)

3.3 Language Neurons Identification

Following Tang et al. [2024], a neuron in our work is defined as a linear transformation of a single column in a weight matrix W followed by a non-linear activation, SiLU [Shazeer, 2020]. For the j-th neuron in the i-th layer, its activation probability when processing responses in language k is computed as:

p^{k}_{i,j} = \mathbb{E}\left[ \mathbb{I}\left( \mathrm{SiLU}(x_i W_i)_j > 0 \right) \mid \text{language } k \right]  (2)

We define language neurons as those exhibiting higher activation probabilities for some languages while discriminating against others. To select neurons that, despite exhibiting relatively higher entropy, still demonstrate high activation probabilities for some languages, we additionally incorporate the maximum activation probability, as formulated in Eq. 3:

\mathrm{score}_{i,j} = -\sum_{k=1}^{l} p'^{k}_{i,j} \log p'^{k}_{i,j} - \lambda \max_{1 \le k \le l} p^{k}_{i,j},  (3)

where p'_{i,j} represents the probability distribution p_{i,j} after normalization and λ is a balancing coefficient. Neurons with scores falling in the lowest percentile, specifically the bottom 5%, are selected. Furthermore, to identify how many languages each selected neuron is related to, we introduce a threshold τ and compute:

N_{i,j} = \sum_{k=1}^{l} \mathbb{I}\left( p^{k}_{i,j} > \tau \right).  (4)

A neuron is considered a language-specific neuron
if N_{i,j} = 1, and a language-related neuron if 1 < N_{i,j} < l. Both belong to the language neurons. In contrast, a neuron is considered a language-agnostic neuron if it exhibits high activation probabilities across all l languages. Finally, given our focus on multilingual reasoning tasks, we select neurons exclusively based on responses from multilingual reasoning datasets, rather than relying on multilingual plain text [Tang et al., 2024].

4 Experiments

4.1 Experimental Setup

Following She et al. [2024], we conduct our experiments and analyses on mathematical reasoning tasks across different languages. In this section, we introduce our experimental settings in detail.

Models We include two different models in our experiments and analyses. Following She et al. [2024], we conduct our experiments on MistralMathOctopus-7B2 and MetaMathOctopus-7B3. MistralMathOctopus is obtained by fine-tuning MetaMath-Mistral [Yu et al., 2023] with MGSM8KInstruct [Chen et al., 2023]. MetaMathOctopus is obtained by fine-tuning MetaMath [Yu et al., 2023] with MGSM8KInstruct. Considering limited computational resources and reproducibility, we directly utilize the publicly released base models. Our analyses are mainly based on MistralMathOctopus in the main text, and we report more results in the Appendix.

Datasets We conduct experiments on two representative mathematical reasoning benchmarks, MGSM [Shi et al., 2022] and MSVAMP [Chen et al., 2023]. MGSM is a widely used benchmark for multilingual mathematical reasoning evaluation. MSVAMP is an out-of-domain test set in contrast to MGSM, which evaluates robustness and generalization [Zhu et al., 2024, She et al., 2024].

Languages Following She et al. [2024], we choose the following 10 languages for analysis in our work. As the pivot language, English (en) is used as the alignment target.
We also choose Chinese (zh), Russian (ru), German (de), French (fr), Spanish (es), Japanese (ja), Swahili (sw), Thai (th), and Bengali (bn) as 9 representative non-English languages.

Implementations Due to limited computational resources, our exploration focuses on the most effective DPO variant of MAPO [She et al., 2024]. We select 1, 4, and 8 tasks from NumGLUE [Mishra et al., 2022], an arithmetic reasoning benchmark, and translate the questions into the 9 languages, consistent with MGSM, thereby creating a multilingual seed dataset. To construct preference pairs, we sample responses using the corresponding base models and employ NLLB-200-distilled-600M4 as the translation model to obtain alignment scores. Finally, for each model and each target language (excluding English), we obtain 10,000 preference pairs. Training is conducted using LoRA [Hu et al., 2022]. During the neuron selection stage, we perform force-decoding on the responses of the MGSM or MSVAMP dataset to obtain the activation probabilities of neurons for each language. Based on empirical results on development sets, we set the balancing coefficient λ = 0.04 and the threshold τ = 0.5. Additional implementation details are provided in Appendix B.

4.2 Language Neurons Identification

Based on the neuron identification algorithm introduced in §3.3, we identify the language-specific neurons, language-related neurons, and language-agnostic neurons in the model. To further validate the effectiveness of our algorithm, we follow Tang et al. [2024] by examining changes in the perplexity of LLMs when deactivating the identified language neurons across different
languages. Experiments are conducted on both the base model and the aligned model, with results presented in Figure 2. We report results for both language-specific neurons and language neurons. It can be found that whether we deactivate language-specific neurons or all language neurons, the results consistently exhibit the same pattern: the diagonal elements in each row show the highest values.

2https://huggingface.co/kevinpro/MistralMathOctopus-7B
3https://huggingface.co/kevinpro/MetaMathOctopus-7B
4https://huggingface.co/facebook/nllb-200-distilled-600M

[Figure 2: PPL changes of MistralMathOctopus on MGSM after deactivating language-specific neurons or language neurons, shown as deactivated-language × target-language matrices in four panels: (a) Base - Language-Specific Neurons, (b) Aligned - Language-Specific Neurons, (c) Base - Language Neurons, (d) Aligned - Language Neurons. "Base" indicates the results of the base model. "Aligned" indicates the results of the aligned model. For comparison, the results of Tang et al. [2024] are provided in Appendix D.]

Notably, deactivating language neurons leads to a more pronounced effect than deactivating only language-specific neurons. These observations support the following findings: (1) our algorithm effectively identifies language-specific and language-related neurons; (2) for a given language, in addition to its language-specific neurons, a substantial number of shared language-related neurons contribute to its performance; (3) deactivating all the language-related neurons of one language doesn't cause significant impacts on the model's performance in other languages. The above findings confirm the validity of the language neurons identified by our method and further provide insights into the characteristics of language neurons.

4.3 Layer-wise Functionality Analysis

Based on the identified neurons, we perform layer-wise functional analyses of all layers in the LLMs. We begin by analyzing the distributions of different types of neurons in the base model, and we report the results in Figure 3.
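A layer-wise distribution like the one in Figure 3 boils down to counting, per layer, how many identified neurons of each type there are. The sketch below uses made-up (layer, type) labels; in the paper these labels come from the identification algorithm of §3.3:

```python
from collections import Counter

# Hypothetical (layer, type) labels for identified neurons (illustrative only).
neurons = [
    (0, "language-specific"), (0, "language-related"), (1, "language-agnostic"),
    (1, "language-agnostic"), (2, "language-related"), (2, "language-agnostic"),
]

counts = Counter(neurons)  # (layer, type) -> number of neurons of that type
layers = sorted({layer for layer, _ in neurons})
types = ("language-specific", "language-related", "language-agnostic")

# Per-layer counts in the order: specific, related, agnostic.
by_layer = {layer: [counts[(layer, t)] for t in types] for layer in layers}
print(by_layer)
```

Plotting these per-layer counts across all layers yields the curves from which the four-part division below is read off.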
Through the analysis of the distribution of different types of neurons, we can further divide LLMs' internal process for multilingual inference into four parts, rather than the three-stage division discussed in some works [Wendler et al., 2024, Zhao et al., 2024b]:

1. Multilingual Understanding: In the initial layers, the number of language neurons (language-specific and language-related neurons) peaks, while the number of language-agnostic neurons is relatively low. The model maps multilingual inputs into a unified semantic space at this stage.

2. Shared Semantic Space Reasoning: In the intermediate layers, the model engages in reasoning within a shared semantic space across different languages. During this stage, language neurons are largely absent, whereas language-agnostic neurons become dominant.

3. Multilingual Output Space Transformation: The model transfers features into the multilingual output space in this stage in preparation for generating the final output. In this part, the number of language neurons reaches a peak again, while the number of language-agnostic neurons drops to its lowest point.

4. Vocabulary Space Outputting: In the last layer, the model maps vectors of different languages into a shared vocabulary space to generate outputs. The number of both language-related and language-agnostic neurons rises sharply, while language-specific neurons are fewer than in several previous layers.

[Figure 3: Layer-wise distribution of the different types of neurons of MistralMathOctopus on MGSM.]

Meanwhile, the distribution of different types of neurons aligns with the conclusions of the existing studies mentioned above. Overall, we find that the number of neurons varies correspondingly with the different inference stages of LLMs.

4.4 Layer-wise Neuron Changes Analysis

In §4.3, we investigate the fundamental distribution
of neurons and the basic partitioning of the model. We further analyze the changes in different types of neurons before and after multilingual alignment. Based on the four functional stages in LLMs, we quantify the layer-wise changes (∆) in the number of different types of neurons. Figure 4 presents the results for language-specific neurons, language-related neurons, and language-agnostic neurons.

[Figure 4: Layer-wise changes in the number of different types of neurons of MistralMathOctopus on MGSM.]

During the multilingual understanding stage, the number of language neurons increases, while language-agnostic neurons decrease. In the subsequent shared semantic space reasoning stage, language-agnostic neurons increase substantially, whereas language neurons remain stable and nearly absent. Then, in the third stage, as language-agnostic neurons decrease, language neurons increase overall. Additionally, we notice that the number of language-related neurons shows an upward trend. Finally, in the last stage, the number of language-agnostic neurons increases significantly in the aligned model, accompanied by a reduction in language neurons. We also report the results of different checkpoints during the alignment process in Appendix F.

Overall, we find that language neurons and language-agnostic neurons exhibit generally opposite trends across different layers, which corresponds to the characteristics of LLMs at different stages of inference. Especially at the last stage, language-agnostic neurons play a more important role than language neurons. Multilingual alignment facilitates more effective activation of the appropriate neurons at each stage, thereby improving the model's capability to handle multilingual tasks.

[Figure 5: Changes in the number of neurons shared by N languages after alignment.]

Table 1: Accuracy of the MistralMathOctopus base model and aligned model on MGSM. "X/Y⇒T" indicates that languages X and Y are used for multilingual alignment.

MGSM       bn    th    sw    ja    zh    ru    de    es    fr    en    Avg.
base       43.6  53.2  50.4  55.6  59.6  59.2  61.2  62.8  56.8  75.6  57.8
zh/de⇒en   46.4  55.6  59.2  56.8  64.0  71.2  66.8  71.2  69.2  75.2  63.6
sw/th⇒en   48.8  58.8  59.2  56.4  68.4  68.4  69.2  69.6  70.4  77.6  64.7

4.5 Macroscopic Analysis of Different Types of Neurons

We further conduct a macroscopic analysis of the different types of neurons. In our neuron identification algorithm introduced in §3.3, the number of languages that share a specific neuron is an attribute characterizing all types of activated neurons. Since our study involves 10 languages, the valid range of N is from 1 to 10. Among these, values of N from 2 to 9 correspond to language-related neurons. As special cases in our work, N = 1 represents language-specific neurons, while N = 10 corresponds to language-agnostic neurons.

We report the changes in the number of neurons after multilingual alignment for each value of N from 1 to 10 in Figure 5. The results show a decrease in the number of language-specific neurons and an increase in the number of language-related neurons, which are shared across multiple languages. This indicates that multilingual alignment encourages LLMs to develop and utilize more shared language-related neurons, rather than language-specific
neurons, which are applicable to only a single language. Meanwhile, during the alignment process, the model improves its understanding of task-relevant common knowledge. Therefore, the overall number of language- agnostic neurons also increases significantly. 4.6 Spontaneous Multilingual Alignment Analysis The important “Spontaneous Multilingual Alignment” phenomenon is first revealed and discussed by Zhang et al. [2024], which means that conducting alignment in a small number of languages significantly improves multilingual alignment even between English and many languages unseen during the alignment process. We further analyze this phenomenon in our experiments. As shown in Table 1, spontaneous multilingual alignment also emerges under the multilingual alignment strategy employed in our study. Except for the languages used for alignment, LLMs exhibit notable performance gain in other unaligned languages. 8 Table 3: Average number of different types of neurons for English and non-English languages of MistralMathOctopus on MGSM. We round the results to the nearest integer. Language Language-Specific Language-Related English 46 603 non-English 613 2006Table 4: Overlap ratio of different types of neu- rons across different domains andbefore and after alignment . Following She et al. [2024], MSV AMP is regarded as an out-of-domain dataset. The results of MistralMathOctopus on MGSM are used as the fiducial value. Variable (%) Language-Specific Language-Related Domain 80.7 92.3 Alignment 95.6 92.1 To understand how multilingual alignment generalizes to other languages, we analyze the changes in different types of neurons before and after multilingual alignment based on our method. Table 2: Average results of neuron count changes across multiple languages. “Trained” indicates the trained languages in the spontaneous multilingual alignment experiment. “Others” indicates other lan- guages except the trained languages. We round the results to the nearest integer. 
Language   Language-Specific   Language-Related
Trained    -37                 +232
Others     -36                 +205

Taking the case of "zh/de ⇒ en" as a representative example, we report the average results in Table 2. For the trained languages, the number of language-specific neurons decreases, while the number of language-related neurons increases. This indicates that the aligned languages tend to utilize more language-related neurons shared with other languages rather than exclusive language-specific neurons. Moreover, we extend this analysis to languages other than the trained languages and observe a similar phenomenon. These findings indicate that multilingual alignment facilitates the use of language-related neurons while reducing the reliance on language-specific neurons, in both trained and other unseen languages. We hypothesize that the new language-related neurons shared with the trained languages contribute to the performance improvement on other unseen languages.

4.7 Further Analysis

Uniqueness of English. Since current LLMs are primarily pretrained on English data, English is often regarded as playing a special role within LLMs [Wendler et al., 2024]. In our experiments, we also observe that English exhibits markedly different characteristics compared to other non-English languages. Based on the neurons identified in our work, Figure 2 shows that deactivating the language neurons of English has a negligible impact on the model's performance in English, which is entirely different from the behavior observed in other languages. This is also consistent with the results of Tang et al. [2024].
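The three-way taxonomy used above (language-specific, language-related, language-agnostic) can be sketched as a toy classifier over per-language activation probabilities. This is a simplified stand-in for the paper's entropy-and-probability identification algorithm; the threshold rule and all names are illustrative assumptions.

```python
import numpy as np

def classify_neurons(act_prob, threshold=0.5):
    """Toy neuron classification from per-language activation probabilities.

    act_prob: array of shape (num_neurons, num_languages), the probability
    that each neuron activates on text of each language. The paper's exact
    criterion additionally uses entropy; the simple threshold here is only
    a stand-in for illustration.
    """
    active = act_prob > threshold            # (neurons, languages) boolean mask
    n_langs = active.sum(axis=1)             # how many languages each neuron serves
    total = act_prob.shape[1]
    specific = np.where(n_langs == 1)[0]                      # exactly one language
    related = np.where((n_langs > 1) & (n_langs < total))[0]  # several, not all
    agnostic = np.where(n_langs == total)[0]                  # active for all languages
    return specific, related, agnostic
```

Under this sketch, a trained language "moving" neurons from the specific to the related bucket corresponds exactly to the count changes reported in Table 2.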
Furthermore, based on this finding, we quantify the number of language neurons for English and non-English languages on the MistralMathOctopus base model (Table 3). Our analysis reveals that English has significantly fewer neurons than other languages, in terms of both language-specific and language-related neurons. We hypothesize that this is because English actually possesses numerous language-related neurons: since English serves as the pivot language, these neurons are likely shared with almost all other languages, which causes them to be confounded with language-agnostic neurons.

Stability of Neuron Distributions. We discuss the stability of neuron distributions across different data domains, as well as before and after alignment. To quantify this stability, we compute the neuron overlap ratio in both settings, with the results summarized in Table 4. We find that although the exact positions of a few language neurons may vary across settings, the positional distribution of most language neurons remains stable. This also indicates good reliability and generalization of the language neurons identified under fixed hyperparameters.

5 Conclusion

In this work, we systematically investigate multilingual alignment from the perspective of language neurons. We propose a new language neuron identification algorithm based on entropy and probability values, which detects language-specific neurons, language-related neurons, and language-agnostic neurons in LLMs. The validity of the identified neurons is confirmed through deactivation ablation experiments. Furthermore, we examine the multilingual alignment mechanism by analyzing the roles of different types of neurons. Based on their distributional characteristics, we categorize LLMs' internal process into four functional parts.
Our analysis reveals that multilingual alignment enhances the model's utilization of the corresponding types of neurons across different functional parts. Meanwhile, we find that alignment promotes a greater reliance on shared language-related neurons across languages, rather than on language-specific neurons. We also explore the phenomenon of spontaneous multilingual alignment. Additionally, we provide further analysis and more empirical results based on the preceding findings.

References

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.

Shimao Zhang, Xiao Liu, Xin Zhang, Junxiao Liu, Zheheng Luo, Shujian Huang, and Yeyun Gong. Process-based self-rewarding language models. arXiv preprint arXiv:2503.03746, 2025.

Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, and Furu Wei. Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting. arXiv preprint arXiv:2305.07004, 2023.

Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. Multilingual machine translation with large language models: Empirical results and analysis. arXiv
preprint arXiv:2304.04675, 2023.

Shimao Zhang, Changjiang Gao, Wenhao Zhu, Jiajun Chen, Xin Huang, Xue Han, Junlan Feng, Chao Deng, and Shujian Huang. Getting more from less: Large language models are good spontaneous multilingual learners. arXiv preprint arXiv:2405.13816, 2024.

Minheng Ni, Haoyang Huang, Lin Su, Edward Cui, Taroon Bharti, Lijuan Wang, Dongdong Zhang, and Nan Duan. M3P: Learning universal representations via multitask multilingual multimodal pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3977–3986, 2021.

Zihan Liu, Genta Indra Winata, and Pascale Fung. Continual mixed-language pre-training for extremely low-resource neural machine translation. arXiv preprint arXiv:2105.03953, 2021.

Shaoxiong Ji, Zihao Li, Indraneil Paul, Jaakko Paavola, Peiqin Lin, Pinzhen Chen, Dayyán O'Brien, Hengyu Luo, Hinrich Schütze, Jörg Tiedemann, et al. Emma-500: Enhancing massively multilingual adaptation of large language models. arXiv preprint arXiv:2409.17892, 2024.

Jun Zhao, Zhihao Zhang, Luhui Gao, Qi Zhang, Tao Gui, and Xuanjing Huang. Llama beyond English: An empirical study on language capability transfer. arXiv preprint arXiv:2401.01055, 2024a.

Shuaijie She, Wei Zou, Shujian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, and Jiajun Chen. MAPO: Advancing multilingual reasoning through multilingual alignment-as-preference optimization. arXiv preprint arXiv:2401.06838, 2024.

Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, and Lidong Bing. How do large language models handle multilingualism? arXiv preprint arXiv:2402.18815, 2024b.

Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, and Ji-Rong Wen. Language-specific neurons: The key to multilingual capabilities in large language models. arXiv preprint arXiv:2402.16438, 2024.

Juuso Eronen, Michal Ptaszynski, and Fumito Masui.
Zero-shot cross-lingual transfer language selection using linguistic similarity. Information Processing & Management, 60(3):103250, 2023.

Yiran Zhao, Wenxuan Zhang, Huiming Wang, Kenji Kawaguchi, and Lidong Bing. AdaMergeX: Cross-lingual transfer with large language models via adaptive adapter merging. arXiv preprint arXiv:2402.18913, 2024c.

Nostalgebraist. interpreting gpt: the logit lens. https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens, 2020.

Basel Mousi, Nadir Durrani, Fahim Dalvi, Majd Hawasly, and Ahmed Abdelali. Exploring alignment in shared cross-lingual spaces. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6326–6348, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.344. URL https://aclanthology.org/2024.acl-long.344/.

Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. Do llamas work in English? On the latent language of multilingual transformers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15366–15394, 2024.

Peng Hu, Sizhe Liu, Changjiang Gao, Xin Huang, Xue Han, Junlan Feng, Chao Deng, and Shujian Huang. Large language models are cross-lingual knowledge-free reasoners. arXiv preprint arXiv:2406.16655, 2024.

Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, and Yutaka Matsuo. On the multilingual ability of decoder-based pre-trained language models: Finding and controlling language-specific neurons. arXiv preprint arXiv:2404.02431, 2024.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.

Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.

Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Dongmei Zhang, and Jia Li. Breaking language barriers in multilingual mathematical reasoning: Insights and observations. arXiv preprint arXiv:2310.20246, 2023.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057, 2022.

Wenhao Zhu, Shujian Huang, Fei Yuan, Shuaijie She, Jiajun Chen, and Alexandra Birch. Question translation training for better multilingual reasoning. arXiv preprint arXiv:2401.07817, 2024.

Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3505–3523, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.246. URL https://aclanthology.org/2022.acl-long.246/.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl, 2020.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505–3506, 2020.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

A Limitations

We provide insights and analysis results for the multilingual alignment and multilingual capabilities of LLMs, which allows for a better understanding of multilingualism. Although we have conducted a systematic investigation in our work, some limitations remain for future research. Due to limited resources, we only conduct experiments on two different models and two different datasets. We would perform a more comprehensive analysis across further scenarios if more resources become available in the future. Additionally, we do not perform a finer-grained analysis of neurons within the same layer in this work. We would like to explore this in our future work.

B Implementation Details

Our experiments are conducted on 4 NVIDIA RTX A6000 GPUs or 4 NVIDIA GeForce RTX 3090 GPUs. We use the TRL [von Werra et al., 2020] and DeepSpeed [Rasley et al., 2020] frameworks
for preference alignment, and the vLLM engine [Kwon et al., 2023] for inference.

B.1 MAPO

We employ the officially released scripts⁵ to generate the preference data. For the alignment process, we set the learning rate to 1e-6 and the batch size to 16. LoRA is utilized to fine-tune the model with a LoRA rank of 64, a LoRA alpha of 128, and a LoRA dropout rate of 0.05. The total number of training steps is set to 1000. One alignment run takes 7.5 hours.

B.2 Language Neurons Identification

Calculating the activation probability of each neuron takes 25 minutes with the MGSM dataset and 2 hours with the MSVAMP dataset, each on a single NVIDIA GeForce RTX 3090 GPU. After obtaining the activation probabilities, it takes 3 minutes to identify how many languages each selected neuron is related to.

C Licenses for Used Assets

We list the licenses for each asset utilized in our work:

• MGSM: CC-BY-SA-4.0
• MSVAMP: Apache-2.0
• GSM8KInstruct: Apache-2.0
• NumGLUE: Apache-2.0
• MistralMathOctopus: Apache-2.0
• MetaMathOctopus: Apache-2.0
• NLLB-200-distilled-600M: CC-BY-NC-4.0
• TRL: Apache-2.0
• DeepSpeed: Apache-2.0
• vLLM: Apache-2.0

D Deactivation Ablation Experiments

For the MistralMathOctopus model, Figure 6 presents the results of the deactivation ablation experiments conducted on MGSM using the neuron identification algorithm proposed by Tang et al. [2024]. Additionally, Figure 7 presents the results of the deactivation ablation experiments on the out-of-domain test set, MSVAMP. To verify the generalization of our identification algorithm, we also deploy MetaMathOctopus to conduct the deactivation ablation experiment, with the result shown in Figure 8.
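The per-neuron activation statistic from Appendix B.2, together with the overlap ratio reported in Table 4, can be sketched as follows. This is a minimal sketch: the positive-activation convention and all names are assumptions, not the paper's exact criterion.

```python
import numpy as np

def activation_probability(acts):
    """Estimate each neuron's activation probability over a corpus.

    acts: array of shape (num_tokens, num_neurons) holding post-nonlinearity
    FFN activations recorded while running the model on one language's data.
    A neuron counts as "activated" on a token when its activation is positive
    (a common convention; the paper's full criterion also uses entropy).
    """
    return (acts > 0).mean(axis=0)  # shape: (num_neurons,)

def overlap_ratio(fiducial, other):
    """Fraction of the fiducial neuron set that reappears in another setting,
    mirroring how Table 4 compares neuron positions across domains and
    across alignment, with one setting fixed as the fiducial value."""
    fiducial, other = set(fiducial), set(other)
    return len(fiducial & other) / len(fiducial)
```

Running `activation_probability` once per language yields exactly the per-language probability matrix that a neuron classifier can then be applied to.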
⁵ https://github.com/NJUNLP/MAPO

E Layer-wise Distribution of the Different Types of Neurons

In Figures 9a and 9b, we separately display the layer-wise distribution of the different types of neurons of MistralMathOctopus on MSVAMP and of MetaMathOctopus on MGSM, to validate the generalization of our finding discussed in §4.3.

F Layer-wise Changes in the Number of Different Types of Neurons During Alignment

Figure 10 shows the same pattern discussed in §4.4, thereby supporting its generalization. Furthermore, for each dataset, we examine the layer-wise changes in the number of different types of neurons during alignment in Figures 11, 12, and 13. The notation "ckpt-x" denotes the model checkpoint obtained after x training steps.

G Spontaneous Multilingual Alignment

We report the accuracy of the base model and the aligned models on the MGSM and MSVAMP benchmarks in Table 5. The expected accuracy of random guessing is 50.0%. "X/Y ⇒ T" indicates that languages X and Y are used for alignment. The best results for each language are highlighted. It is evident that models trained on multilingual translation data substantially outperform the original model across a wide range of languages, indicating that multilingual training significantly enhances the model's multilingual capabilities.

Figure 6: PPL changes of MistralMathOctopus on MGSM when deploying the algorithm proposed by Tang et al. [2024]. (a) Base - Language-Specific Neurons; (b) Aligned - Language-Specific Neurons.

Figure 7: PPL changes of MistralMathOctopus on MSVAMP after deactivating language-specific neurons or language neurons. "Base" indicates the results of the base model; "Aligned" indicates the results of the aligned model. (a) Base - Language-Specific Neurons; (b) Aligned - Language-Specific Neurons; (c) Base - Language Neurons; (d) Aligned - Language Neurons.

Figure 8: PPL changes of MetaMathOctopus on MGSM after deactivating language neurons. "Base" indicates the results of the base model; "Aligned" indicates the results of the aligned model. (a) Base - Language Neurons on MGSM; (b) Aligned - Language Neurons on MGSM.

Figure 9: Layer-wise distribution of the different types of neurons. (a) MistralMathOctopus on MSVAMP; (b) MetaMathOctopus on MGSM.

Figure 10: Layer-wise changes in the number of different types of neurons of MetaMathOctopus.

Figure 11: Layer-wise changes in the number of language-specific neurons of MistralMathOctopus during the alignment. (a) MGSM; (b) MSVAMP.

Figure 12: Layer-wise changes in the number of language-related neurons of MistralMathOctopus during the alignment. (a) MGSM; (b) MSVAMP.

Figure 13: Layer-wise changes in the number of language-agnostic neurons of MistralMathOctopus during the alignment. (a) MGSM; (b) MSVAMP.

Table 5: Accuracy of the base model and aligned variants on benchmarks.

Tested on MGSM
                                bn    th    sw    ja    zh    ru    de    es    fr    en    Avg.
base                            43.6  53.2  50.4  55.6  59.6  59.2  61.2  62.8  56.8  75.6  57.8
zh⇒en                           49.6  58.4  54.8  56.4  65.2  70.0  66.4  72.8  68.8  78.4  64.1
zh/de⇒en                        46.4  55.6  59.2  56.8  64.0  71.2  66.8  71.2  69.2  75.2  63.6
sw/th⇒en                        48.8  58.8  59.2  56.4  68.4  68.4  69.2  69.6  70.4  77.6  64.7
zh/es/ru⇒en                     46.0  56.4  58.8  54.8  63.2  70.8  68.8  71.6  69.6  76.8  63.7
zh/es/fr/ja/de/sw/ru/th/bn⇒en   49.6  60.0  56.4  56.4  64.4  64.8  65.2  65.2  61.2  76.4  62.0

Tested on MSVAMP
                                bn    th    sw    ja    zh    ru    de    es    fr    en    Avg.
base                            49.3  62.5  60.6  60.9  67.4  64.9  66.5  67.6  67.2  77.0  64.4
zh⇒en                           52.8  62.5  60.9  63.7  66.6  67.5  69.4  69.8  69.7  76.5  65.9
zh/de⇒en                        53.2  62.0  62.9  65.3  66.1  68.7  69.9  69.0  70.1  77.0  66.4
sw/th⇒en                        53.8  67.4  65.1  67.7  71.9  71.4  71.9  72.0  72.8  78.9  69.3
zh/es/ru⇒en                     52.9  63.2  61.5  63.0  67.8  68.1  69.8  68.7  70.6  78.3  66.4
zh/es/fr/ja/de/sw/ru/th/bn⇒en   54.9  65.9  65.6  68.6  69.7  70.2  72.3  71.6  71.6  77.3  68.8
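As a quick sanity check, the Avg. column of Table 5 is the arithmetic mean of the ten per-language scores, e.g. for the MGSM "base" row:

```python
# Recompute the Avg. entry of the "base" row of Table 5 (MGSM block)
# from its ten per-language accuracies.
base_mgsm = [43.6, 53.2, 50.4, 55.6, 59.6, 59.2, 61.2, 62.8, 56.8, 75.6]
avg = round(sum(base_mgsm) / len(base_mgsm), 1)  # matches the reported 57.8
```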
AITEE - Agentic Tutor for Electrical Engineering

Christopher Knievel, Alexander Bernhardt, Christian Bernhardt

Intelligent tutoring systems combined with large language models offer a promising approach to address students' diverse needs and promote self-efficacious learning. While large language models possess good foundational knowledge of electrical engineering basics, they remain insufficiently capable of addressing specific questions about electrical circuits. In this paper, we present AITEE, an agent-based tutoring system for electrical engineering designed to accompany students throughout their learning process, offer individualized support, and promote self-directed learning. AITEE supports both hand-drawn and digital circuits through an adapted circuit reconstruction process, enabling natural interaction with students. Our novel graph-based similarity measure identifies relevant context from lecture materials through a retrieval augmented generation approach, while parallel Spice simulation further enhances accuracy in applying solution methodologies. The system implements a Socratic dialogue to foster learner autonomy through guided questioning. Experimental evaluations demonstrate that AITEE significantly outperforms baseline approaches in domain-specific knowledge application, with even medium-sized LLM models showing acceptable performance. Our results highlight the potential of agentic tutors to deliver scalable, personalized, and effective learning environments for electrical engineering education.

Index Terms — Intelligent tutoring systems, electrical engineering education, graph neural networks, large language models

I. INTRODUCTION

The field of educational technology has seen remarkable advancements, with the emergence of transformative tools such as Learning Management Systems, Massive Open Online Courses, and Intelligent Tutoring Systems.
These technologies have enabled a shift towards distance learning models, allowing students to learn at their own pace and providing teachers with the ability to scale up effective teaching practices [1]. However, despite these innovations, many educational technologies do not substantially change the traditional role of teachers. Typical teaching activities, such as providing feedback, motivation, and content adaptation, are still primarily entrusted to human instructors, leading to the "teacher-bandwidth problem": a shortage of teaching staff able to provide highly informative and competence-oriented feedback at large scale [2]. The advent of ChatGPT, an application based on state-of-the-art GPT language models for natural language processing (NLP), has further expanded the potential of Intelligent Tutoring Systems (ITS). Tracing their origins to the pioneering ELIZA chatbot developed in 1966, modern chatbots have become increasingly sophisticated, with the ability to engage in human-like conversations and provide personalized learning experiences [3]. Intelligent Tutoring Systems promise to address the limitations of traditional educational technologies by incorporating computational models to provide individualized learning, formative feedback, and personalized learning paths [4]. Chatbots, as a subtype of dialog systems, have emerged as a particularly promising approach, with the ability to simulate conversational partners and provide feedback through natural language [1, 5]. Despite their potential, deploying chatbots as Intelligent Tutoring Systems involves several complications. Due to their susceptibility to hallucinations and limited robustness, unsupervised chatbot usage may enable students to extract incorrect solutions from the system, which is particularly a problem for weaker students [6–9]. Additionally, there is a risk that

C. Knievel, A. Bernhardt, and C. Bernhardt are with the Department of Electrical Engineering
https://arxiv.org/abs/2505.21582v1
and Information Technology, HTWG Hochschule Konstanz, University of Applied Sciences, Germany (email: {cknievel,abernhard,cbernhard}@htwg-konstanz.de).

students lose their sense of self-efficacy when solving tasks independently due to excessive support and instead develop a dependency on the tutor [10, 11]. The application of intelligent tutoring systems to electrical engineering is very limited [12] and is restricted to static knowledge representation, lacking dynamic inference and the application of knowledge to solve questions related to electrical circuits. In this paper, we develop an agentic tutor for electrical engineering (AITEE), which provides students with an interactive platform for asking questions about electrical circuits while ensuring reliability and accuracy of information, leveraging domain-specific contextual knowledge, and preventing excessive trust in and dependence on technology. To support students' self-efficacy, AITEE employs a Socratic dialogue that fosters learner autonomy through systematic questioning, guiding students toward logical conclusions [13, 14]. Furthermore, AITEE has to address the typical challenges faced by first-semester electrical engineering students when analyzing DC circuits, who need to apply both mathematical foundations, such as linear algebra, and electrical engineering principles, such as Kirchhoff's laws, to a given circuit. This involves identifying and applying the solution approaches discussed in the lecture. An exemplary circuit is shown in Fig. 1, with the task of calculating the current I3 through the ohmic resistance. The challenge for AITEE is to identify the relevant context within the knowledge base given only the image of the circuit and the fragmented question "How do I calculate the current I3?" as input.

Fig. 1: Exemplary electrical circuit with current and voltage source as well as an ohmic resistor.

arXiv:2505.21582v1 [cs.CY] 27 May 2025

Fig. 2: Overview of the required components of AITEE: detection of circuit components and connections, conversion into a graph/netlist, simulation with Spice scripts, a retriever (RAG) over relevant context in a vector database, and the LLM combining the student's prompt with the LLM instructions to produce the output.

We develop a deep learning-based approach to detect the electrical components and their connections. Different representations of the query and the circuits were examined for their suitability for retrieval augmented generation. Due to the poor performance of naive and advanced RAG methods, we adapted so-called passage retrieval [15] to use a representation of electrical circuits as indexes, termed index-circuits, and thereby identify the relevant passages in the script. In order to find the relevant index-circuit for a given query-circuit, a similarity measure between the two circuits must be calculated. For this purpose, the circuit is transformed into a latent vector representation using a graph neural network, which captures, among other things, the structure of the circuit. A similarity measure is calculated based on the cosine distance between the vectors of different circuits. Given the relevant context, several language models, both open-source and closed-source, were evaluated regarding their understanding of electrical circuits and their ability to correctly solve first-semester electrical engineering problems. Additionally, the models' robustness against erroneous information in multi-turn dialogues with students was investigated.

The remainder of this paper is organized as follows: Chapter II introduces the architecture of the agentic system. In Chapter III, the
identification of the electrical circuit as well as the graph representation and the subsequent similarity measure are discussed. Four different large language models (LLMs) are evaluated in Chapter IV concerning their capabilities of understanding electrical circuits. Furthermore, the performance of all four LLMs is evaluated with various prompting and retrieval strategies. Finally, Chapter V concludes the paper.

II. SYSTEM ARCHITECTURE

Chatbots in education have the potential to increase students' motivation to learn and strengthen their self-perception and self-efficacy [5]. For an Intelligent Tutoring System (ITS) in electrical engineering to achieve these goals, it must be able to understand electrical circuits and solve tasks by applying the correct methods. However, "AI hallucinations" - convincingly formulated but factually incorrect responses - remain an unsolved problem [16]. This is particularly concerning when students receive these false answers, as they often lack the ability to verify their correctness. In order to enhance accessibility and provide seamless learning support, AITEE is designed to process both digitally created as well as hand-drawn circuit diagrams. This capability allows students to interact naturally with the system, whether they are working with computer-generated schematics or sketching circuits during problem-solving sessions.

AITEE combines several key technologies: circuit image processing to create netlists (a textual representation of an electrical circuit), a graph neural network-based similarity measure for context retrieval, and an LLM supported by Retrieval-Augmented Generation (RAG). Guided by its system prompt, the tutoring agent engages students in a Socratic dialogue, promoting active learning and self-efficacy by leading them towards solutions rather than providing immediate answers.
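The information flow sketched in Fig. 2 could be orchestrated roughly as follows. This is a minimal sketch, not the authors' implementation: every helper below is a hypothetical stub standing in for the corresponding component (YOLO detection, GNN embedding, vector-database retrieval, SPICE simulation, the LLM call).

```python
# Sketch of AITEE's information flow (cf. Fig. 2). All helpers are
# hypothetical stubs standing in for the real components.

def detect_circuit(image):            # stands in for YOLO detection + node recognition
    return ["R1 N001 N002", "U1 N002 N001"]

def embed_circuit(netlist):           # stands in for the GNN graph embedding
    return [float(len(netlist)), 1.0]

def retrieve_context(embedding, question):  # stands in for vector-DB retrieval
    return "Unit: series circuits and Ohm's law"

def simulate(netlist):                # stands in for the SPICE simulation
    return {"I1": 0.1}

def build_socratic_prompt(question, context, values):
    return (f"Context: {context}\nSimulated values: {values}\n"
            f"Student question: {question}\n"
            "Guide the student with questions; do not give the answer directly.")

def answer_question(image, question, llm=lambda prompt: prompt):
    netlist = detect_circuit(image)
    embedding = embed_circuit(netlist)
    context = retrieve_context(embedding, question)
    values = simulate(netlist)
    return llm(build_socratic_prompt(question, context, values))
```

The key design point carried over from the paper is the ordering: the circuit is grounded (detection, simulation) before the LLM is prompted, so the Socratic dialogue is anchored to verified values rather than generated ones.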
To ensure accuracy and prevent hallucinations, a SPICE simulation of the circuit is used to provide precise voltage and current values in case specific values are given in the task description. These components work together to create a reliable and effective tutoring system. The overall architecture of AITEE is shown in Fig. 2, visualizing the flow of information.

III. REPRESENTATION & SIMILARITY OF ELECTRICAL CIRCUITS

The transformation of hand-drawn circuit diagrams into a machine-readable format begins with the detection of components and their interconnections. While research in electrical circuit recognition is extensive, studies specifically addressing hand-drawn circuits remain limited [17–19]. Hand-drawn circuit recognition presents unique challenges, primarily requiring robust detection algorithms that can handle the inherent imprecisions in sketches. Notable approaches using YOLO models for component detection have demonstrated promising results, achieving AP0.5 scores of 98.2% and 91.6%, respectively [17, 18]. Uzair et al. further refined this approach by developing a two-stage detector specifically optimized for smaller component detection [19]. The established method for connection detection in hand-drawn electrical circuits involves a multi-step process: first removing identified components from the image, then applying Canny edge detection followed by Hough transformation. The resulting nodes are then grouped using k-means clustering, with cluster centers serving as connection endpoints. While this approach has proven effective for conventional circuits, it faces limitations when applied to educational contexts. In educational settings, circuit layouts often follow specific didactic principles. For instance, star or delta (∆) circuits may intentionally incorporate diagonal connections
or components to emphasize particular circuit characteristics. These pedagogically motivated layouts present unique challenges that existing connection recognition methods cannot easily address. Although Connected Component Analysis [19, 20] could be a potential solution, it is not well suited for processing hand-drawn circuits due to its susceptibility to the inherent inaccuracies of the given circuits. Therefore, we have developed a novel approach that better serves these educational requirements.

The following sections describe the technical components of the circuit analysis system. First, we introduce the netlist as a generic circuit representation format and the graph neural network for determining circuit similarities. Next, we present the methods for object detection and node recognition. The final section details the calculation of graph embeddings and the similarity measure.

A. Generic Representation

In electrical engineering, the circuit provides the context for a student's question, with explicit references to specific circuit elements. To identify relevant solution methods from lecture materials, AITEE must search for approaches applied to circuits with similar characteristics, as it cannot be expected that all possible circuit variations are comprehensively documented. However, LLMs face challenges in interpreting graphical representations of electrical circuits [21]. Netlists, which provide a textual description of a circuit topology, offer a machine-readable alternative. The netlist of the circuit shown in Fig. 3 is given as an example in Table I. The netlist of a circuit contains a list of all components and the corresponding nodes they are connected to, i.e., in the given example, N001 to N006. It is important to note that subtle changes in the circuit configuration can significantly alter the solution strategy.
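As a minimal illustration of the netlist format just described (a component name followed by its two node labels, optionally with a value), a parser might look like this. This is a sketch under the stated format assumption, not the system's actual parser:

```python
def parse_netlist(text):
    """Parse netlist lines of the form 'NAME NODE_A NODE_B [VALUE]'
    into (component, node_a, node_b) tuples; blank lines are skipped."""
    entries = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) >= 3:
            entries.append((parts[0], parts[1], parts[2]))
    return entries

netlist = parse_netlist("""
R1 N003 N006
R2 N002 N001
U1 N001 N003
""")
# netlist[0] is ("R1", "N003", "N006")
```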
For instance, replacing resistor R6 with a second voltage source U2 requires the use of, for example, the superposition principle, which is a significant change for a first-semester student. In the netlist, however, only two characters are changed. A measure of similarity between two circuits on the basis of netlists is therefore challenging. Nevertheless, a netlist is used as input for the supporting SPICE simulation. A more promising solution compared to the netlist representation is given by graph neural networks [22–24]. A central idea in this paper is to use the cosine distance between two feature vectors of a GNN as a measure of similarity between two electrical circuits.

Initially, all components listed in the netlist were stored as graph nodes. Additionally, connection nodes appearing more than twice in the netlist were also created as graph nodes, thereby enabling the representation of parallel structures within the graph. Subsequently, all graph nodes are connected by edges using the connection nodes from the netlist. The result is a graph that captures the complete structure of an electrical circuit. Each graph node stores specific features: node type, number of neighbors, and centrality, which serve as node embeddings.

Fig. 3: Image of a circuit with netlist nodes.

R1 N003 N006
R2 N002 N001
R3 N004 N002
R4 N006 N004
R5 N005 N002
R6 N005 N005
U1 N001 N003
TABLE I: Netlist of the circuit shown in Fig. 3.

The resulting graph of the circuit in Fig. 3 is shown in
Fig. 4.

Fig. 4: Graph representation of the exemplary circuit.

For the calculation of a graph similarity, the graph neural network $\Phi$, parameterized by the weights $\theta$, maps each circuit $c_i$ into an embedding space of $d$ dimensions [25]:

$$f_i = \Phi(c_i; \theta) \qquad (1)$$

where $f_i \in \mathbb{R}^d$ is referred to as the feature representation of the circuit $c_i$. The similarity between two circuit representations can be calculated by the cosine similarity [25]:

$$S(f_i, f_j) = \frac{f_i^T f_j}{\|f_i\| \cdot \|f_j\|}. \qquad (2)$$

Cosine similarity provides a measure of vector alignment in space. A value of 1 means the vectors point in identical directions (0° angle). A value of 0 indicates perpendicular vectors (90° angle). A value of −1 shows vectors pointing in opposite directions (180° angle) [26]. For circuit embeddings, this similarity metric captures structural relationships. Similar circuits have embeddings that point in nearly the same direction in latent space, with cosine similarity approaching 1. As circuits become more dissimilar, their embeddings become increasingly orthogonal, with cosine similarity nearing 0.

B. Object Detection & Node Recognition

Similarly to [22, 23], we use a one-stage YOLO detector to detect all circuit components, namely the YOLOv8 version from Ultralytics [27], which improves the detection of small objects [28]. Due to the lack of a public dataset containing electrical circuits with European symbols, first- and second-semester students studying electrical engineering at the HTWG Hochschule Konstanz drew 831 resistor circuits comprising linear and parallel circuits, voltage dividers, Wheatstone bridges, and delta- and star-circuits. The selection of circuits is based on the syllabus of Electrical Engineering 1. The labeled dataset can be accessed here: [29]. In addition to the passive and active two-pole circuits, the identifiers of the two-pole circuits as well as the corner and intersection points in the circuit have also been labeled.
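The netlist-to-graph construction and the cosine similarity of Eq. (2) can be sketched in a few lines. This is a simplified sketch: the real system feeds node features through a trained GNN to obtain the vectors, whereas here the feature vectors are taken as given.

```python
import math
from collections import defaultdict

def netlist_to_graph(netlist):
    """Build an undirected graph from (component, node_a, node_b) tuples:
    every component becomes a graph node, and every netlist node referenced
    more than twice becomes an explicit graph node (representing parallel
    structures); components sharing a simple junction are linked directly."""
    refs = defaultdict(list)
    for comp, a, b in netlist:
        refs[a].append(comp)
        refs[b].append(comp)
    adj = defaultdict(set)
    for node, comps in refs.items():
        if len(comps) > 2:                      # hub node kept explicitly
            for c in comps:
                adj[node].add(c); adj[c].add(node)
        else:                                   # simple two-terminal junction
            for i, c in enumerate(comps):
                for d in comps[i + 1:]:
                    adj[c].add(d); adj[d].add(c)
    return adj

def cosine_similarity(f_i, f_j):
    """Eq. (2): dot product of the feature vectors over their norms."""
    dot = sum(x * y for x, y in zip(f_i, f_j))
    return dot / (math.sqrt(sum(x * x for x in f_i)) *
                  math.sqrt(sum(x * x for x in f_j)))
```

For a simple loop such as R1, R2, U1 in series, every netlist node is referenced exactly twice, so no hub nodes are created and the graph is just the cycle R1-R2-U1.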
Four variants of the YOLOv8 model were trained (nano, small, medium, large) and their runtime and mean average precision were measured at an IoU of 0.5 on an Intel i7-4790K CPU. The results are given in Table II.

Model     Runtime in ms   mAP0.5
YOLOv8n   120             0.965
YOLOv8s   211             0.971
YOLOv8m   392             0.971
YOLOv8l   632             0.973
TABLE II: Precision and runtime results for the YOLOv8-based detection.

Based on these results, we chose the YOLOv8s model, providing the best trade-off between precision and runtime. The output of the object detection is shown for the example circuit in Fig. 5. Given the detection results, we can subsequently proceed to reconstruct the connections between the detected components. In contrast to previous publications, we also detect the corner and intersection points in a circuit. This facilitates a simple approach to also detect diagonal connections. The process of the connection recognition is shown in Fig. 6. In a first step, all detected components and their identifiers are removed from the image. Then, $N_d$ contour points are created on the remaining connections (see c1 in Fig. 6). In parallel, all corner and intersection points are connected to each other, building so-called inter-node connections (see c2 in Fig. 6). The validation of inter-node connections is performed using a
line-loss metric that quantifies the geometric proximity between candidate connections and actual circuit paths.

Fig. 5: Output of the object detection for the example circuit with the YOLOv8s model.

The line-loss computation consists of two steps: Initially, each inter-node connection $k$ is discretized with $N_{b,k}$ equidistant interval points $(x_{k,i}, y_{k,i})$. Subsequently, the Euclidean distance is calculated from each interval point to its nearest circuit contour point. The set of contour points is defined as:

$$N_d = \{(x_j, y_j) \in \mathbb{R}^2 \mid j = 1, \dots, n\}. \qquad (3)$$

The line-loss metric for connection $k$ is computed as:

$$d_k = \sum_{i=1}^{N^-_{b,k}} \min_{c_j \in N_d} \sqrt{(x_{k,i} - x_j)^2 + (y_{k,i} - y_j)^2} \qquad (4)$$

where $(x_{k,i}, y_{k,i})$ denotes the coordinates of the $i$-th interval point of connection $k$, for $i \in 1, \dots, N_{b,k}$. The term $N^-_{b,k} \leq N_{b,k}$ accounts for the exclusion from the line-loss metric of interval points which are located within a bounding box of a detected component. The validation step establishes a heuristically determined linear threshold value to differentiate valid from invalid inter-node connections. The result of the analysis is shown next to c3 in Fig. 6, where green lines represent valid inter-node connections and red lines belong to invalid inter-node connections. The final integration step, indicated by c4 in Fig. 6, compares the bounding boxes of the detected components with the valid inter-node connections. The resulting intersections are used to incorporate the components into the electrical circuit structure. Additionally, each valid inter-node connection corresponds to a netlist node. Together with the class of the detected component, this allows both the netlist to be generated and graph-based processing to be enabled.

C. Graph Embedding & Similarity Measure

After reconstructing an electrical circuit diagram, it becomes necessary to identify its corresponding context within the lecture materials.
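The line-loss of Eqs. (3)-(4) can be sketched directly. This is a minimal sketch; interval points falling inside a component's bounding box are skipped, which corresponds to the reduced count $N^-_{b,k}$ in the sum.

```python
import math

def line_loss(interval_points, contour_points, component_boxes=()):
    """d_k from Eq. (4): sum, over interval points of a candidate
    inter-node connection, of the distance to the nearest contour
    point in N_d. Boxes are (x0, y0, x1, y1) rectangles."""
    def in_box(p, box):
        x, y = p
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    total = 0.0
    for p in interval_points:
        if any(in_box(p, b) for b in component_boxes):
            continue  # excluded interval point (this is why N^-_{b,k} <= N_{b,k})
        total += min(math.dist(p, q) for q in contour_points)
    return total
```

A candidate connection lying on top of actual circuit contours accumulates a loss near zero; a spurious diagonal far from any drawn wire accumulates a large loss, and the heuristic linear threshold then separates the two cases.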
Electrical engineering fundamentals are typically taught using basic circuit configurations, including series circuits, parallel circuits, and combinations thereof. Students face a primary challenge in applying learned principles across different circuit configurations. For AITEE, this presents a specific challenge since the input circuit may not exactly match those presented in lecture materials. Therefore, the objective is to identify the most analogous circuit and derive the applicable methodologies. In this paper, we propose to model the electrical circuit as an undirected graph and to use the global graph embeddings to calculate a circuit-similarity measure. Due to the application within an educational setting, the similarity between electrical circuits is primarily defined by their shared methodological approaches to problem-solving. Two key characteristics determine the calculation methodology:

Fig. 6: Illustration of the process for recognizing the connection nodes in an electrical circuit.

1) Circuit Type: This describes the interconnection pattern of components within the electrical circuit. Each circuit type (series, parallel, mixed, and bridge circuits) typically requires specific formulas and procedures for problem-solving.

2) Special Cases: These arise when specific conditions, unusual components, or particular connection types are present. Even a single connection or component can trigger a special case, potentially requiring a completely different calculation methodology. The superposition principle is one such special case, used to analyze circuits with multiple independent sources by evaluating each source's
effect individually before combining the results. This definition of circuit similarity forms the foundation for developing feature representations that can effectively capture these characteristics for comparison purposes. In order to develop an effective feature representation, we formulate a classification problem with eight distinct circuit classes. These classes are derived from combining four basic circuit types (parallel, series, mixed, and bridge circuits) with two source configurations (single and multiple sources). The circuit classifications are summarized in Table III.

Circuit Class      Single source   Multiple sources
Parallel Circuit   Class 1         Class 2
Series Circuit     Class 3         Class 4
Mixed Circuit      Class 5         Class 6
Bridge Circuit     Class 7         Class 8
TABLE III: The different circuit classes in the GNN classification.

The computation of graph embeddings follows the process illustrated in Fig. 7 and is explained in detail in Sec. III-C1. In the evaluation of suitable architectures for the graph neural network (GNN) component, several established approaches were examined: Graph Convolutional Networks (GCNs) [30], Graph Attention Networks (GATs) [31], GraphSAGE [32], and the Graph Isomorphism Network (GIN) [33]. The evaluation involved training the different GNNs with 150 netlists from various classes and validating them using 30 netlists. Based on this evaluation, GraphSAGE was chosen, showing slightly better performance.

1) Graph Embedding

The graph embedding generation integrates two primary inputs: the netlist graph and its associated metadata, processed through distinct pathways as depicted in Fig. 7. The netlist graph encodes component interconnections and their topological relationships, whereas the metadata comprises the amount and type of components. The structural information initializes the GraphSAGE network, generating node embeddings that incorporate both local and global structural characteristics.
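The class scheme of Table III amounts to a simple mapping from circuit type and source configuration to a class label, which can be sketched as:

```python
# Row order follows Table III: parallel, series, mixed, bridge.
CIRCUIT_TYPES = ["parallel", "series", "mixed", "bridge"]

def circuit_class(circuit_type, multiple_sources):
    """Map a circuit type and its source configuration to classes 1..8
    as defined in Table III (odd = single source, even = multiple)."""
    row = CIRCUIT_TYPES.index(circuit_type)   # 0..3
    return 2 * row + (2 if multiple_sources else 1)
```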
These node embeddings capture contextual information from their neighborhood, yet they do not inherently provide a comprehensive representation of the entire graph structure. To address this limitation, a global pooling operation is implemented, aggregating the node embeddings into a single representative vector.

Fig. 7: Process to calculate normalized graph embeddings using the netlist graph as well as the netlist metadata.

A subsequent fully-connected layer with a softmax activation function outputs the normalized feature vector $\vec{f}_{\bar{g}}$. The secondary path implements a heuristic approach to process the netlist metadata. This approach comprises three key components: a sigmoid function $f_c$ for component quantification, a linear combination $f_s$ for the source type distribution, and a binary function $f_b$ that differentiates between single-source ($f_b = 0$) and multi-source ($f_b = 1$) configurations. The component quantification function $f_c$ uses a sigmoid form defined as

$$f_c = \frac{1}{1 + \exp(-c_1 \cdot (x - c_2))}. \qquad (5)$$

The sigmoid parameters were calibrated with $c_1 = 1$ and $c_2 = 7.5$, establishing a normalized range of $[0, 1]$ for circuits containing 1 to 14 components. This range covers the typical complexity found in lecture materials. A linear combination quantifies the number and type of sources:

$$f_s = 0.33 \cdot V + 0.66 \cdot C + 0.01 \cdot V \cdot C, \qquad (6)$$

where $V$ and $C$ are binary indicators ($V, C \in \{0, 1\}$) for the presence of voltage and current sources, respectively. Finally, a binary function $f_b$ indicates whether there is only one source ($f_b = 0$) or multiple sources ($f_b = 1$) in the circuit.
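Equations (5)-(6) and the binary source indicator translate directly into code; the following sketch uses the calibrated parameters stated above:

```python
import math

def f_c(x, c1=1.0, c2=7.5):
    """Eq. (5): sigmoid component quantification; roughly spans [0, 1]
    for circuits with 1 to 14 components, with the midpoint at x = 7.5."""
    return 1.0 / (1.0 + math.exp(-c1 * (x - c2)))

def f_s(has_voltage_source, has_current_source):
    """Eq. (6): linear combination quantifying the source types,
    with V and C as binary indicators."""
    V = int(has_voltage_source)
    C = int(has_current_source)
    return 0.33 * V + 0.66 * C + 0.01 * V * C

def f_b(num_sources):
    """0 for a single source, 1 for multiple sources."""
    return 1 if num_sources > 1 else 0
```

Note how the small interaction term 0.01·V·C makes the combined-sources value (1.00) distinguishable from any sum of the individual contributions alone.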
The feature representations of all three functions are consolidated into a unified vector $\vec{f}_m = [\vec{f}_c, \vec{f}_s, \vec{f}_b]$. To ensure consistent scaling, the elements of $\vec{f}_m$ are normalized, constraining their sum to unity, and stored in $\vec{f}_{\bar{m}}$.

2) Similarity Measure

The effectiveness of graph embeddings for circuit representation is clearly demonstrated in our experimental results. As shown in Fig. 8, the similarity map based on cosine distances between embeddings across 8 distinct circuit classes (2 circuits per class) reveals strong intra-class relationships. Circuits belonging to the same class exhibit high similarity values, approaching 1, indicating that their embeddings point in nearly identical directions within the latent space. Conversely, cross-class comparisons show minimal similarity, suggesting the embeddings become increasingly orthogonal as circuit differences grow. This clear separation validates that the graph-based representation successfully captures the fundamental characteristics that define circuit classes while distinguishing between different topological configurations.

IV. LLM-BASED TUTOR IN ELECTRICAL ENGINEERING

To ensure the technical accuracy of AITEE, it is essential that the employed LLM is able to correctly interpret a given electrical circuit as well as to apply the corresponding solution methods. The correct recognition and interpretation of the electrical circuit represented by a netlist is therefore crucial. Misinterpretation at this stage can introduce significant errors, potentially compromising the effectiveness of domain-specific electrical engineering knowledge when applied to an inaccurately understood circuit. The following section analyzes the

Fig. 8: Cosine Similarity Map of Circuit Embeddings. Heat map showing similarities between circuit embeddings across 8 classes (2 circuits per class).
High similarity values (yellow) appear between circuits of the same class, with minimal similarity (black) between different classes, demonstrating the effectiveness of graph embeddings in distinguishing circuit topologies.

fundamental capabilities of three open-source and one closed-source LLM in interpreting netlist representations. Subsequent chapters will then evaluate the application of Retrieval-Augmented Generation (RAG) approaches for solving electrical circuit tasks. Finally, the robustness of the agent and the effectiveness of Socratic dialogue strategies will be assessed.

A. Understanding of Electrical Circuits

For the evaluation of the LLMs' capabilities to understand electrical circuits, we manually created a dataset comprising 24 netlists, with three examples each for the circuit classes defined in Table III, along with their corresponding accurate descriptions. Each model received netlists from the dataset and was tasked with generating circuit descriptions. The initial assessment focused on the baseline performance of the LLMs without optimization. To automate the evaluation process, GPT-4.0 was employed as the judge, utilizing the LLM-as-a-Judge method described by Zheng et al. [34]. For each generated description, the judge was instructed to provide a rating from 0 to 4, based on the following scoring scheme:

• 0 points: The description is completely incorrect.
• 1 point: The description exhibits numerous errors or fails to capture many aspects of the reference description.
• 2 points: The description includes a limited number of errors or differs from the reference in a few aspects.
• 3 points: The description displays only minor errors or diverges in a few aspects from the reference description.
• 4 points: The description is entirely error-free and logically describes the
same circuit as the reference description.

A corresponding prompt example for the baseline approach is shown in Fig. 9. The baseline accuracy results are presented in the first column of Table IV. Accuracy is quantified as the ratio of the total points achieved to the maximum possible total points.

[Human]
### Here is the netlist to be described.
Netlist:
R_D N004 N005 450 Ω
R_B N002 N003 270 Ω
R_C N003 N004 360 Ω
V7 N001 N005 18V
R_A N001 N002 180 Ω
[System]
Your task is to analyze a netlist and briefly and concisely describe the circuit. Describe the circuit represented by the netlist, not the netlist itself.
Fig. 9: Baseline prompt example for the generation of circuit descriptions for a given netlist.

The smallest model, Llama 3.1 8B, demonstrated a significant deficit in netlist comprehension, which resulted in the misinterpretation of the majority of circuits within the dataset. The next larger open-source models, Llama 3.1 70B and Llama 3.1 405B, also showed fundamental shortcomings in this area. A particularly notable weakness was observed in the interpretation of electrical nodes. The closed-source model Claude 3.5 Sonnet accurately described simple circuits such as series and parallel configurations. However, it demonstrated limitations with more complex circuits, particularly in the recognition of nodes and parallel branches.

Model              Baseline   CoT    2-Shot-CoT   4-Shot-CoT   4-Shot-CoT + Contextualization
Llama 3.1 8B       0.25       0.28   0.31         0.50         0.57
Llama 3.1 70B      0.37       0.69   0.83         0.87         0.89
Llama 3.1 405B     0.55       0.73   0.82         0.89         0.90
Claude 3.5 Sonnet  0.74       0.80   0.95         0.95         0.97
TABLE IV: Accuracy for the correct interpretation and analysis of electrical circuits as a function of the prompt engineering method, rated by LLM-as-a-Judge.

Chain-of-Thought (CoT) prompting [35] was implemented to enhance the reasoning capabilities of the models in the analysis, recognition, and interpretation of netlists, which is inherently a complex reasoning task.
The previously employed baseline prompt provided only a brief task description, prompting the LLMs to attempt a single-step solution. To address this, the prompt was modified to guide the LLMs to process the task through a defined chain of thought. Specifically, the chain begins with identifying the component connections, followed by analyzing the current flow pattern through the circuit. The analysis then proceeds to identify circuit topologies and configurations, examining parts of the circuit which are in a series or parallel arrangement, delta/wye connections, or bridge circuits. Only after completing this systematic examination does the process generate a comprehensive circuit description. It can be seen from the results in the second column of Table IV that the Llama models 70B and 405B improve significantly, while Claude Sonnet 3.5 and especially Llama 3.1 8B improve only slightly.

[Human]
### Here is the netlist to be described.
Netlist:
R_D N004 N005 450 Ω
R_B N002 N003 270 Ω
R_C N003 N004 360 Ω
V7 N001 N005 18V
R_A N001 N002 180 Ω
[System]
Your task is to analyze a netlist and briefly and concisely describe the circuit it represents. Follow these steps in order:
1. Create a description explaining how the components of the circuit are connected.
2.
Create a description of how the electric current flows through the circuit from the first pole of the source to the second. (This point can be ignored for circuits with multiple sources.)
3. Create a list of sub-circuits, such as series circuits, parallel circuits, or delta/star connections.
4. Create a description of the overall circuit. (Describe the circuit represented by the netlist, not the netlist itself.)
Fig. 10: Chain-of-thought prompt example for the generation of circuit descriptions for a given netlist.

In order to further enhance the performance, few-shot prompting, as described by Brown et al. [36], was evaluated. This technique was implemented with both two and four examples, in conjunction with Chain-of-Thought prompting. These configurations are denoted as 2-Shot-CoT and 4-Shot-CoT, respectively, in Table IV. As can be seen from the results, further improvements were achieved for all models. Notably, Claude Sonnet 3.5 reached a near-optimal result of 0.95.

Building upon the initial analysis of netlist interpretations, which revealed frequent inaccuracies in the identification of electrical nodes, a static contextualization strategy was introduced. This approach incorporates deterministically derived information about the electrical nodes directly into the prompt. Furthermore, guidance on interpreting the netlist structure was also provided within the prompt. The contextualization in combination with 4-Shot-CoT prompting achieved the best results. It can be seen that the mid-sized Llama model (70B) achieved almost the same results as the 405B model and performs only slightly worse than the Claude 3.5 Sonnet model. It is important to note that although a perfect score was not achieved by any model, the scoring was influenced by GPT-4.0 as the judge, which lowered scores for minor deviations from the reference description. With the exception of Llama 3.1 8B, all models are able to provide sufficiently accurate descriptions of the netlist.
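The accuracy metric used throughout this section (total points achieved divided by the maximum possible points under the 0-4 judge rating) can be sketched as:

```python
def judge_accuracy(ratings, max_rating=4):
    """Accuracy = total points achieved / maximum possible points,
    for LLM-as-a-Judge ratings in the range 0..max_rating."""
    if not ratings:
        raise ValueError("no ratings given")
    return sum(ratings) / (max_rating * len(ratings))
```

For example, a model receiving ratings of 4, 4, 4, and 0 on four descriptions would score 12 out of 16 possible points, i.e. an accuracy of 0.75.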
Model              Baseline  3-Shot-CoT  3-Shot-CoT + Naive-RAG  3-Shot-CoT + RAPTOR + RAG-Fusion  3-Shot-CoT + RAPTOR + HyDE  1-Shot-CoT + MRI  1-Shot-CoT + MRI + Sim
Llama 3.1 8B       0.15      0.15        0.15                    0.27                              0.42                        0.39              0.42
Llama 3.1 70B      0.50      0.57        0.38                    0.65                              0.54                        0.77              0.85
Llama 3.1 405B     0.47      0.68        0.50                    0.65                              0.62                        0.85              0.92
Claude 3.5 Sonnet  0.69      0.77        0.73                    0.85                              0.84                        0.96              0.96
TABLE V: Accuracy of the LLMs when applying domain-specific knowledge of electrical engineering to electrical circuits.

Fig. 11: Accuracy by circuit class for the given LLM configurations. Stacked bar histogram detailing the accuracy (y-axis) of the Llama 3.1 (8B, 70B, 405B) and Claude 3.5 Sonnet models under various problem-solving strategies.

B. Application of Solution Methods to Electrical Circuits

In the following, the correct application of solution methods to tasks for given electrical circuits is evaluated. The tasks are limited to the curriculum of the first semester of Fundamentals of Electrical Engineering, in which, among other topics, resistance networks with direct current are examined. One or two tasks for a subset of circuit classes from Table III, with several subtasks, are evaluated. In order to make a precise statement about the capabilities of the models in relation to the correct application of the methods, the reference description of the netlist is provided for each task. Since none of the models examined, including GPT-4.0, was able to solve the tasks without errors, the solutions of all models were checked manually. The achievable partial points were defined in advance for each subtask to ensure a consistent evaluation.

1) Baseline Performance Evaluation

The baseline results over all circuit classes are shown in the first column of Table V. Furthermore, the results per circuit class are depicted as a stacked bar plot in Fig. 11. Due to their particular importance and widespread use in lecture materials, we have listed tasks on voltage and current dividers for mixed circuits separately, denoted by Class 5+. As Fig. 11 illustrates for the baseline configurations, correct solutions are predominantly concentrated in the simpler Class 1/3 circuits (single source, series/parallel) and, to a lesser extent, Class 5+ (voltage/current dividers). This latter observation supports the notion that tasks on this subclass could be solved significantly better due to their widespread use in training material. The Llama 3.1 70B and 405B models were able to solve many tasks for the simple series and parallel circuits (Class 1/3) and a majority of tasks related to current and voltage dividers (Class 5+). However, more complex configurations such as Class 7 and especially Class 6/8 saw minimal to no success across all models at baseline. Furthermore, the baseline performance of the models could not be significantly improved by CoT prompt engineering either.
Hereby, the number and order of the examples have been empirically evaluated and set to three examples. The results for the 3-Shot-CoT can be seen in the second column of Table V and are detailed in Fig. 11. Figure 11 confirms that 3-Shot-CoT offered only marginal gains over the baseline approach for most models, with performance still heavily reliant on solving Class 1/3 and Class 5+ circuits. While the Llama 3.1 405B model was able to achieve a more noticeable improvement in performance, Fig. 11 reveals this was largely due to an increased proficiency on these same less complex classes, rather than a breakthrough in handling more difficult circuit types.

2) Retrieval-Augmented Generation and its Limitations

It is evident that neither the baseline performance nor the performance achieved with the 3-Shot-CoT approach for the language models is sufficient for a tutor application, particularly given their challenges with circuits beyond moderate complexity. A typical solution to provide the domain-specific knowledge to the LLM is given by Retrieval-Augmented Generation (RAG) [37–39]. For AITEE, the lecture content was preprocessed as a knowledge base where relevant formulas and calculations were reproduced with LaTeX equations. Circuit illustrations were converted to netlists and placed at appropriate locations. The script was then divided into 400-token
chunks. OpenAI's text-embedding-ada-002-v2 model was used to create the embeddings. To identify semantically relevant content in the vector database, a circuit description must be added to the prompt alongside the student's question (e.g., "How do I calculate the current I3?"). The three most similar chunks are returned and used to contextualize the LLM. The results are denoted by 3-Shot-CoT + Naive RAG. Compared to isolated prompt engineering, the performance actually deteriorated for some models. A detailed analysis of the responses revealed that the naive RAG approach introduced an additional source of error. Without RAG, the baseline models relied on their trained knowledge, whereas with RAG, they used the provided chunks for finding solutions. Unsuitable chunks led to poorer responses. However, identifying the relevant chunks is challenging. Simply combining the circuit description and the task formulation is not sufficient to find appropriate chunks. The query must be optimized for the retrieval process. Additionally, some queries relate to multiple sections of the script. For example, when a question about a mixed circuit is posed and this circuit is simplified during the response process, such as to a series circuit, it would be optimal to have chunks with higher abstraction that contain information about both mixed circuits and series circuits.

To address these limitations, we evaluated two advanced retrieval approaches. The first approach combines RAPTOR [40] with RAG-Fusion [41]. RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) constructs a hierarchical tree of recursively embedded, clustered, and summarized text chunks, enabling retrieval at different levels of abstraction. RAG-Fusion complements this by generating multiple contextual queries and reranking the retrieved results using reciprocal rank fusion, which helps capture various perspectives of the original query.
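Reciprocal rank fusion, as used by RAG-Fusion, can be sketched as follows. This is the standard RRF formulation with the conventional constant k = 60, not necessarily the authors' exact implementation; the chunk names are placeholders.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs: each document scores
    the sum of 1 / (k + rank) over every list in which it appears."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: chunks retrieved for three query variants of one student question.
fused = reciprocal_rank_fusion([
    ["chunk_mixed", "chunk_series", "chunk_bridge"],
    ["chunk_series", "chunk_mixed"],
    ["chunk_mixed", "chunk_divider"],
])
# "chunk_mixed" appears near the top of all three lists and ranks first.
```

The fusion rewards chunks that are consistently retrieved across query variants, which is exactly the behavior wanted when one student question touches several sections of the script.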
The second approach pairs RAPTOR with HyDE (Hypothetical Document Embeddings) [42], which specifically addresses the style mismatch between student queries and the knowledge base. HyDE first uses a large language model to generate a hypothetical text segment that mimics the style of the lecture script while answering the query. Although this generated text may contain errors, it is not used directly for answering but rather to identify semantically similar chunks in the vector database. This approach is particularly valuable in educational contexts, where first-semester students' questions often differ significantly from the formal language used in lecture scripts. By bridging this linguistic gap, HyDE enables more effective retrieval of relevant information despite differences in formulation and terminology.

As illustrated in Fig. 11, the advanced retrieval methods improved on the naive RAG approach, enabling models to solve a greater proportion of Class 5 and Class 5+ circuits and to begin addressing Class 7 circuits. However, for the Llama models, these methods did not achieve significantly better performance than isolated prompt engineering (cf. the 3-Shot-CoT results), especially for the most complex classes. A primary limitation was the suboptimal suitability of the queries, comprising circuit descriptions and questions, for similarity searches of matching chunks. Only the large closed-source Claude 3.5 Sonnet model achieved sufficient overall performance and, as seen in Fig. 11, a broad enough capability across circuit complexities with advanced RAG methods
to suggest tutor-level expertise.

3) Multi-Representation Indexing for Improved Retrieval

A key consideration for RAG is the chunking strategy. Instead of segmenting content based on a fixed number of tokens, the teaching material is structured into clearly defined units. From a didactic perspective, a unit represents a basic building block of knowledge in electrical engineering, encompassing declarative knowledge (definitions, facts), procedural knowledge (application methods, problem-solving strategies), and conceptual knowledge (understanding of interrelationships and principles). Supplementing the prompt with the most relevant unit is therefore expected to enhance the performance of the LLM.

To address the query-suitability issue, multi-representation indexing (MRI) was implemented. Chen et al. [15] introduced MRI, advocating for indexing a corpus using "propositions" (concise, self-contained factoids) as retrieval units. In contrast to this proposition-based approach, our implementation of MRI uses representative netlists as indices for units. For each unit, typical electrical circuits are generated, and their corresponding netlists serve as indices for that unit. When a prompt includes a circuit, the GNN-based similarity measure detailed in Section III-C identifies the representative netlists most similar to the given circuit. The units associated with these netlists are then retrieved and provided to the LLM together with a single CoT example. The performance results of this approach, termed 1-Shot-CoT + MRI, are presented in Table V and Fig. 11.

The implemented system demonstrates a significant performance improvement over the previously evaluated approaches. As shown in Fig. 11, the 1-Shot-CoT + MRI approach led to a substantial increase in accuracy, in particular enabling models to successfully address more complex circuit classes.
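The netlist-indexed retrieval can be illustrated as follows. All names, the toy netlists, and the similarity function are our own assumptions; in particular, the trivial line-overlap measure stands in for the paper's GNN-based similarity from Section III-C:

```python
# Illustrative sketch of netlist-indexed unit retrieval.
from collections import Counter

# Each teaching unit is indexed by a representative netlist (assumed examples).
UNIT_INDEX = {
    ("R1 n1 n2", "R2 n2 n0"): "series_circuits_unit",
    ("R1 n1 n0", "R2 n1 n0"): "parallel_circuits_unit",
}

def similarity(netlist_a, netlist_b):
    """Stand-in for the GNN similarity: fraction of shared component lines."""
    ca, cb = Counter(netlist_a), Counter(netlist_b)
    return sum((ca & cb).values()) / max(len(netlist_a), len(netlist_b))

def retrieve_unit(circuit_netlist):
    """Return the teaching unit indexed by the most similar representative netlist."""
    best = max(UNIT_INDEX, key=lambda idx: similarity(idx, tuple(circuit_netlist)))
    return UNIT_INDEX[best]

print(retrieve_unit(["R1 n1 n2", "R2 n2 n0"]))  # series_circuits_unit
```

The design point is that the index lives in the same representation as the query (a netlist), so no natural-language query optimization is needed at all.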
With the exception of the Llama 3.1 8B model, all models exhibit a performance level that suggests the potential to ensure tutor-level expertise. Notably, Llama 3.1 70B and 405B, and especially the Claude 3.5 Sonnet model, showed significant capability in solving Class 7 and even the challenging Class 6/8 problems, which were largely unsolvable with the previous methods. The Claude 3.5 Sonnet model achieves near-flawless performance on the tasks within the dataset. Consistent with findings reported by Chen et al. [43], our analysis also reveals a recurring challenge for all language models in performing basic arithmetic operations.

4) Simulation-Based Arithmetic Validation

To address the identified limitations of LLMs in arithmetic operations, the system was augmented with a simulation execution capability based on the tool PySpice [44]. The netlist representation of the circuit is provided as input to PySpice, and the simulation generates output parameters including partial voltages, currents, total current, total voltage, and total resistance.

The results presented in Table V and Fig. 11 as 1-Shot-CoT + MRI + Sim indicate near-optimal performance for both the Llama 3.1 405B and Claude 3.5 Sonnet models across most circuit classes. However, for tasks within Classes 6 and 8, which necessitate the application of the superposition principle, some inaccuracies in current calculations were observed. These errors appear to originate from inconsistencies in current-direction definitions between the provided netlist and the task query. Specifically, the system may have failed to detect or reconcile cases where the netlist's current direction
convention deviated from that implied or explicitly stated in the query, resulting in incorrect calculations using the superposition method.

C. Evaluation of Didactic Competence

A main goal of AITEE is to generate didactically valuable responses. This requires the tutor to guide students towards solutions rather than presenting them directly. While a comprehensive analysis of the full spectrum of didactic capabilities in LLMs presents a significant challenge, this section concentrates on evaluating key aspects of pedagogical effectiveness relevant to a tutoring system. Specifically, we focus on two critical dimensions of didactic quality: fostering learner autonomy and dialogue robustness. These two metrics are prioritized as essential indicators of a system's ability to provide effective and pedagogically sound guidance. To provide a focused and evaluable assessment of didactic quality, we employ the following metrics:

Fostering Learner Autonomy: This metric assesses the system's success in promoting independent learning. Recognizing that effective tutoring should guide rather than dictate, we evaluate whether the system avoids directly providing solutions or explicit intermediate steps. Pedagogically sound dialogues are instead expected to employ counter-questions and guiding prompts that facilitate the learner's autonomous progress towards both intermediate and final solutions. Dialogues are considered to fall short in fostering autonomy if the system preempts the learner's problem-solving process by directly supplying final answers or critical intermediate results.

Dialogue Robustness: This metric measures the system's resilience to potentially inaccurate user input. A key characteristic of a robust tutoring agent is its ability to maintain a consistent and correct understanding even when confronted with erroneous user statements.
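The two metrics can be operationalized as a simple scoring harness. The dialogue representation and the two detector predicates below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative scoring of the two didactic metrics over an evaluation dialogue.

def reveals_solution(system_turn: str) -> bool:
    """Assumed detector: does the tutor hand over a result directly?"""
    return "the answer is" in system_turn.lower()

def adopts_misinformation(system_turn: str, false_claim: str) -> bool:
    """Assumed detector: does the tutor repeat the injected false claim?"""
    return false_claim.lower() in system_turn.lower()

def score_dialogue(system_turns, false_claim):
    """Return (fosters_autonomy, is_robust) for one dialogue."""
    autonomy_ok = not any(reveals_solution(t) for t in system_turns)
    robust_ok = not any(adopts_misinformation(t, false_claim) for t in system_turns)
    return autonomy_ok, robust_ok

turns = [
    "What do you notice about how R1 and R2 are connected?",
    "Careful: is this really a parallel circuit?",  # pushes back on the error
]
print(score_dialogue(turns, "this is a parallel circuit"))  # (True, True)
```

In practice such judgments are made by human raters (or an LLM judge) rather than string matching; the sketch only shows how per-dialogue pass/fail counts such as "4/5" arise.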
For example, a robust system should remain unaffected if a user mistakenly classifies a series circuit as a parallel circuit. To specifically examine dialogue robustness, each evaluation dialogue includes a simulated instance of such user-provided misinformation. Dialogues are classified as insufficiently robust if the system accepts the inaccurate user statement and subsequently adapts its behavior based on this error.

To ensure a focused evaluation, a dataset was constructed consisting of electrical circuit descriptions paired with corresponding tasks or questions. Each question-circuit pair serves as the starting point for a dialogue, which is then extended to five user queries and five system responses. To assess dialogue robustness, each conversation includes one intentional insertion of false information (for example, incorrectly labeling a parallel circuit as a series circuit). This methodology yields five dialogues, each containing five question-answer exchanges per initial query. Finally, the resulting dialogues were evaluated using the predefined metrics for learner autonomy and dialogue robustness. The results are presented in Table VI.

All evaluated models exhibit fundamental behavioral deficiencies in the context of this tutoring application. Specifically, the LLMs consistently generated complete solutions directly, a practice that could negatively impact student learning outcomes. Regarding dialogue robustness, the smallest model, Llama 3.1 8B, adopted the user's perspective in four out of five dialogues. This behavior was also observed in the other models, albeit less frequently, occurring in two out of five dialogues. In all cases, this level of robustness is deemed insufficient for effective pedagogical application.

                     Learner Autonomy            Dialogue Robustness
Model                Baseline  w. Instructions   Baseline  w. Instructions
Llama 3.1 8B         0/5       4/5               1/5       4/5
Llama 3.1 70B        0/5       5/5               3/5       5/5
Llama 3.1 405B       0/5       5/5               3/5       5/5
Claude 3.5 Sonnet    0/5       5/5               3/5       5/5

TABLE VI: Evaluation of fostering learner autonomy and dialogue robustness for baseline models vs. models with instruction prompts.

To address these limitations, the system prompt of the LLMs is designed to clearly define the tutor's tasks and provide specific guidelines to follow:

1) Socratic questioning: Ask specific questions that stimulate the students' critical thinking and lead them step by step to the solution.
2) No direct solutions: Never provide complete or partial solutions. Your role is to enable students to solve problems independently.
3) Promote self-efficacy: Encourage students to think for themselves and apply their knowledge. Do not show the students how to do it, but encourage them to find the solution themselves.
4) Error correction: If students give incorrect answers, gently guide them in the right direction without giving away the correct answer.
5) Technical terms: Use and explain relevant electrical engineering terms to deepen understanding.
6) Language: Answer in German only.
7) Adaptability: Adapt your explanations and questions to the student's level of understanding.
8) Positive reinforcement: Reward correct answers and progress to increase motivation.
9) Short and specific answers: Always answer the student's specific question to enable step-by-step problem solving.

To further align the language model with the task, few-shot examples of desired dialogues are provided. As a result, all examined language models engage in Socratic dialogue. Neither the closed-source model Claude 3.5 Sonnet nor the open-source models Llama 3.1 405B and 70B provide complete or partial results in the test dialogues. Only the smallest model, Llama 3.1 8B, provided partial results in one of five dialogues.
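A system prompt built from such guidelines can be assembled as follows. The wording is abridged and the exact prompt text and few-shot dialogues are not given verbatim in the paper, so everything below is an illustrative reconstruction:

```python
# Illustrative assembly of a Socratic tutoring system prompt (abridged wording).
GUIDELINES = [
    "Socratic questioning: lead the student step by step with questions.",
    "No direct solutions: never provide complete or partial solutions.",
    "Promote self-efficacy: encourage independent reasoning.",
    "Error correction: gently redirect wrong answers without revealing them.",
    "Technical terms: use and explain electrical engineering terminology.",
    "Language: answer in German only.",
    "Adaptability: match the student's level of understanding.",
    "Positive reinforcement: reward correct answers and progress.",
    "Short, specific answers: address exactly the question asked.",
]

def build_system_prompt(few_shot_dialogues):
    """Combine role, numbered rules, and few-shot examples into one prompt."""
    rules = "\n".join(f"{i}. {g}" for i, g in enumerate(GUIDELINES, 1))
    examples = "\n\n".join(few_shot_dialogues)
    return (
        "You are a Socratic tutor for electrical engineering.\n"
        f"{rules}\n\nExamples:\n{examples}"
    )

prompt = build_system_prompt(
    ["Student: Is R2 in series?\nTutor: What do you notice about the nodes of R1 and R2?"]
)
print(prompt.splitlines()[1])  # "1. Socratic questioning: ..."
```

Keeping the rules in a list rather than a single string makes it easy to ablate individual guidelines when measuring their effect on autonomy and robustness scores.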
When faced with incorrect user input, the three largest models examined remain robust and do not adopt the student's opinion. They guide the student through the task and provide only the necessary support. The models appropriately decline when students request complete solutions, explaining that providing answers directly is not possible.

V. CONCLUSIONS

This paper introduces AITEE, an agentic tutor designed to address the limitations of traditional educational technologies in electrical engineering education, particularly the teacher bandwidth problem. AITEE integrates Large Language Models within an Intelligent Tutoring System to provide interactive and personalized learning experiences for students analyzing electrical circuits. A key feature of AITEE is its ability to process both digital and hand-drawn circuit diagrams, enabling students to interact with the system using either digital tools or hand sketches. The core strength of AITEE lies in its agentic nature, leveraging tools such as circuit reconstruction and Spice simulation, while separately employing Socratic dialogue as its pedagogical approach to foster learner autonomy and self-efficacy by guiding students towards solutions through systematic questioning rather than direct answers.

Our evaluation focused on netlist interpretation and the application of domain-specific knowledge to engineering tasks for students. Results demonstrate that the proposed graph-based similarity measure effectively retrieves relevant contextual information from lecture
materials. Regarding didactic competence, initial evaluations revealed a tendency for LLMs to provide direct solutions, which hindered learner autonomy. However, implementing instruction prompts that explicitly guide the LLMs to adopt Socratic questioning techniques significantly improved the system's ability to foster learner autonomy and enhance dialogue quality. While improving dialogue robustness remains an ongoing challenge, the instruction-prompted models demonstrated significant improvement in resisting inaccurate user input while maintaining pedagogical soundness. Despite these promising results, certain limitations persist. Arithmetic inaccuracies, particularly in complex circuits requiring superposition, and the need for further enhancement of dialogue robustness are identified as key areas for future work. A crucial next step involves conducting a comprehensive test with students to evaluate AITEE's effectiveness in real-world educational settings and to gather feedback on its usability and impact on student learning outcomes.

REFERENCES

[1] S. Wollny, J. Schneider, D. Di Mitri, J. Weidlich, M. Rittberger, and H. Drachsler, "Are We There Yet? - A Systematic Literature Review on Chatbots in Education," Frontiers in Artificial Intelligence, vol. 4, p. 654924, Jul. 2021.
[2] D. A. Wiley and E. Edwards, "Online self-organizing social systems: The decentralized future of online learning," The Quarterly Review of Distance Education, 2002.
[3] L. Labadze, M. Grigolia, and L. Machaidze, "Role of AI chatbots in education: systematic literature review," International Journal of Educational Technology in Higher Education, vol. 20, pp. 1-17, Dec. 2023.
[4] A. C. Graesser, K. VanLehn, C. P. Rose, P. W. Jordan, and D. Harter, "Intelligent Tutoring Systems with Conversational Dialogue," AI Magazine, vol. 22, no. 4, pp. 39-39, Dec. 2001.
[5] R. Winkler and M.
Söllner, "Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis," Academy of Management Proceedings, vol. 2018, p. 15903, Apr. 2018.
[6] T. Lehmann, I. Hähnlein, and D. Ifenthaler, "Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning," Computers in Human Behavior, vol. 32, pp. 313-323, Mar. 2014.
[7] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald, "On Faithfulness and Factuality in Abstractive Summarization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, D. Jurafsky, J. Chai, N. Schluter, and J. Tetreault, Eds. Online: Association for Computational Linguistics, Jul. 2020, pp. 1906-1919.
[8] J. Quiroga Perez, T. Daradoumis, and J. Puig, "Rediscovering the use of chatbots in education: A systematic literature review," Computer Applications in Engineering Education, vol. 28, Sep. 2020.
[9] D. Marín, "A Review of the Practical Applications of Pedagogic Conversational Agents to Be Used in School and University Classrooms," Digital, vol. 1, pp. 18-33, Jan. 2021.
[10] J. B. Wiggins, J. F. Grafsgaard, K. E. Boyer, E. N. Wiebe, and J. C. Lester, "Do You Think You Can? The Influence of Student Self-Efficacy on the Effectiveness of Tutorial Dialogue for Computer Science," International Journal of Artificial Intelligence in Education, vol. 27, no. 1, pp. 130-153, Mar. 2017.
[11] L. E. Margulieux, J. Prather, B. N. Reeves, B. A. Becker, G. Cetin Uzun, D. Loksa, J. Leinonen, and P. Denny, "Self-Regulation,
Self-Efficacy, and Fear of Failure Interactions with How Novices Use LLMs to Solve Programming Problems," in Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, ser. ITiCSE 2024. New York, NY, USA: Association for Computing Machinery, Jul. 2024, pp. 276-282.
[12] M. Negnevitsky, "Application of an intelligent tutoring system in electrical engineering education," in 1996 IEEE International Conference on Multi Media Engineering Education. Conference Proceedings, Jul. 1996, pp. 491-497.
[13] L. Favero, J. A. Pérez-Ortiz, T. Käser, and N. Oliver, "Enhancing Critical Thinking in Education by means of a Socratic Chatbot," arXiv preprint arXiv:2409.05511, 2024.
[14] L. Zhang, J. Lin, Z. Kuang, S. Xu, and X. Hu, "SPL: A Socratic Playground for Learning Powered by Large Language Model," arXiv preprint arXiv:2406.13919, Sep. 2024.
[15] T. Chen, H. Wang, S. Chen, W. Yu, K. Ma, X. Zhao, H. Zhang, and D. Yu, "Dense X Retrieval: What Retrieval Granularity Should We Use?" arXiv preprint arXiv:2312.06648, 2024.
[16] V. Plevris, G. Papazafeiropoulos, and A. Jiménez Rios, "Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard," AI, vol. 4, no. 4, pp. 949-969, Dec. 2023.
[17] R. R. Reddy and M. R. Panicker, "Hand-Drawn Electrical Circuit Recognition using Object Detection and Node Recognition," arXiv preprint arXiv:2106.11559, Nov. 2021.
[18] B. Bohara and H. S. Krishnamoorthy, "Computer Vision based Framework for Power Converter Identification and Analysis," in 2022 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES), Dec. 2022, pp. 1-6.
[19] W. Uzair, D. Chai, and A. Rassau, "ElectroNet: An Enhanced Model for Small-Scale Object Detection in Electrical Schematic Diagrams," preprint, Research Square, Jul. 2023.
[20] L. He, X. Ren, Q. Gao, X. Zhao, B. Yao, and Y.
Chao, "The connected-component labeling problem: A review of state-of-the-art algorithms," Pattern Recognition, vol. 70, pp. 25-43, Oct. 2017.
[21] P. S. Meshram, S. Karthikeyan, Bhavya, and S. Bhat, "ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering?" arXiv preprint arXiv:2412.00102, 2024.
[22] A. Mirhoseini, A. Goldie, M. Yazgan, J. W. Jiang, E. Songhori, S. Wang, Y.-J. Lee, E. Johnson, O. Pathak, A. Nova, J. Pak, A. Tong, K. Srinivasa, W. Hang, E. Tuncer, Q. V. Le, J. Laudon, R. Ho, R. Carpenter, and J. Dean, "A graph placement methodology for fast chip design," Nature, vol. 594, no. 7862, pp. 207-212, Jun. 2021.
[23] A. Said, M. Shabbir, B. Broll, W. Abbas, P. Völgyesi, and X. Koutsoukos, "Circuit design completion using graph neural networks," Neural Computing and Applications, vol. 35, no. 16, pp. 12145-12157, Jun. 2023.
[24] Y. Yamakaji, H. Shouno, and K. Fukushima, "Circuit2Graph: Circuits With Graph Neural Networks," IEEE Access, vol. 12, pp. 51818-51827, 2024.
[25] D. D. Mohan, B. Jawade, S. Setlur, and V. Govindaraj, "Deep Metric Learning for Computer Vision: A Brief Overview," arXiv preprint arXiv:2312.10046, Dec. 2023.
[26] C.
Allen and T. Hospedales, "Analogies Explained: Towards Understanding Word Embeddings," arXiv preprint arXiv:1901.09813, May 2019.
[27] G. Jocher, J. Qiu, and A. Chaurasia, "Ultralytics YOLO," Jan. 2023. [Online]. Available: https://github.com/ultralytics/ultralytics
[28] A. Vijayakumar and S. Vairavasundaram, "YOLO-based Object Detection Models: A Review and its Applications," Multimedia Tools and Applications, vol. 83, no. 35, pp. 83535-83574, Oct. 2024.
[29] C. Knievel, A. Bernhardt, and C. Bernhardt, "Circuit-dataset for AITEE - agentic tutor for electrical engineering," 2025. [Online]. Available: https://github.com/CKnievel/aitee-dataset
[30] T. N. Kipf and M. Welling, "Semi-Supervised Classification with Graph Convolutional Networks," arXiv preprint arXiv:1609.02907, Feb. 2017.
[31] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph Attention Networks," arXiv preprint arXiv:1710.10903, Feb. 2018.
[32] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive Representation Learning on Large Graphs," arXiv preprint arXiv:1706.02216, Sep. 2018.
[33] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How Powerful are Graph Neural Networks?" arXiv preprint arXiv:1810.00826, Feb. 2019.
[34] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. P. Xing, H. Zhang, J. E. Gonzalez, and I. Stoica, "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena," arXiv preprint arXiv:2306.05685, Dec. 2023.
[35] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models," arXiv preprint arXiv:2201.11903, Jan. 2023.
[36] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J.
Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language Models are Few-Shot Learners," in Advances in Neural Information Processing Systems, vol. 33. Curran Associates, Inc., 2020, pp. 1877-1901.
[37] J. J. Slade, A. Hyk, and R. A. R. Gurung, "Transforming Learning: Assessing the Efficacy of a Retrieval-Augmented Generation System as a Tutor for Introductory Psychology," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 68, no. 1, pp. 1827-1830, Sep. 2024.
[38] E. Mullins, A. Portillo, K. Ruiz Rohena, and A. Piplai, "Enhancing classroom teaching with LLMs and RAG," in Proceedings of the 25th Annual Conference on Information Technology Education, ser. SIGITE '24. New York, NY, USA: Association for Computing Machinery, Dec. 2024, pp. 145-146.
[39] C. Dong, Y. Yuan, K. Chen, S. Cheng, and C. Wen, "How to Build an Adaptive AI Tutor for Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG)," arXiv preprint arXiv:2311.17696, Feb. 2025.
[40] P. Sarthi, S. Abdullah, A. Tuli, S. Khanna, A. Goldie, and C. D. Manning, "RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval," in The Twelfth International Conference on Learning Representations, Oct. 2023.
[41] Z. Rackauckas, "RAG-Fusion: A New Take on Retrieval
Augmented Generation," International Journal on Natural Language Computing, vol. 13, no. 1, pp. 37-47, Feb. 2024.
[42] L. Gao, X. Ma, J. Lin, and J. Callan, "Precise Zero-Shot Dense Retrieval without Relevance Labels," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 1762-1777.
[43] C.-C. Chen, H. Takamura, I. Kobayashi, and Y. Miyao, "The Impact of Language on Arithmetic Proficiency: A Multilingual Investigation with Cross-Agent Checking Computation," in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), K. Duh, H. Gomez, and S. Bethard, Eds. Mexico City, Mexico: Association for Computational Linguistics, Jun. 2024, pp. 631-637.
[44] F. Salvaire, "PySpice," https://pyspice.fabrice-salvaire.fr, 2021.
arXiv:2505.21584v1 [cs.LG] 27 May 2025

Fairness in Federated Learning: Fairness for Whom?

Afaf Taik1,2, Khaoula Chehbouni1,3, Golnoosh Farnadi1,2,3
1Mila - Quebec AI Institute
2Université de Montréal
3McGill University
afaf.taik@mila.quebec, khaoula.chehbouni@mila.quebec, farnadig@mila.quebec

Abstract

Fairness in federated learning (FL) has emerged as a rapidly growing area of research, with numerous works proposing formal definitions and algorithmic interventions. Yet, despite this technical progress, fairness in FL is often defined and evaluated in ways that abstract away from the sociotechnical contexts in which these systems are deployed. In this paper, we argue that existing approaches tend to optimize narrow system-level metrics, such as performance parity or contribution-based rewards, while overlooking how harms arise throughout the FL lifecycle and how they impact diverse stakeholders. We support this claim through a critical analysis of the literature, based on a systematic annotation of papers for their fairness definitions, design decisions, evaluation practices, and motivating use cases. Our analysis reveals five recurring pitfalls: (1) fairness framed solely through the lens of server-client architecture; (2) a mismatch between simulations and motivating use cases and contexts; (3) definitions that conflate protecting the system with protecting its users; (4) interventions that target isolated stages of the lifecycle while neglecting upstream and downstream effects; and (5) a lack of multi-stakeholder alignment where multiple fairness definitions can be relevant at once. Building on these insights, we propose a harm-centered framework that links fairness definitions to concrete risks and stakeholder vulnerabilities. We conclude with recommendations for more holistic, context-aware, and accountable fairness research in FL.
1 Introduction

Machine learning (ML) systems are increasingly deployed in high-stakes domains such as healthcare (Hadi et al. 2023) and hiring (Qin et al. 2018), where their decisions carry significant social implications. Ensuring fairness in such systems is critical, yet difficult. While fairness in centralized ML has been widely studied (Mitchell et al. 2021; Saxena et al. 2019), collaborative ML paradigms such as Federated Learning (FL) introduce new complexities to this endeavor (Konečný 2016; Kairouz et al. 2021). In FL, model training is distributed across multiple clients (e.g., smartphones or institutions), each with their own local data and constraints. This setup brings unique challenges, including heterogeneity in data, compute, and context, as well as limited observability into client distributions or downstream impact.

Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

To address these challenges, new fairness definitions have emerged specifically for FL (Shi, Yu, and Leung 2024), including participation fairness, performance fairness, group fairness, and collaborative fairness. Several recent surveys have documented these proposals (Shi, Yu, and Leung 2024; Vucinich and Zhu 2023; Annapareddy, Preston, and Fox 2023; Salazar et al. 2024; Ude et al. 2023; Rafi et al. 2024; Chen et al. 2024; Wang et al. 2024b; Balbierer et al. 2024). While this body of work provides valuable taxonomies and technical insight, it often treats fairness as a standalone optimization problem, isolated from the social context in which FL systems are deployed.
https://arxiv.org/abs/2505.21584v1
In this paper, we argue that much of the fairness-in-FL literature suffers from a critical abstraction error (Selbst et al. 2019). Fairness is often defined and operationalized in terms of narrow system-level metrics, such as minimizing performance variance across clients or allocating rewards based on contribution, without attention to the structural inequalities, institutional dynamics, or real-world harms that shape FL participation and impact. These abstractions risk obscuring who is protected, what harms are being addressed, and whose interests are prioritized.

To examine these concerns, we adopt a harm-centered lens and conduct a systematic annotation of 121 papers on fairness in FL. We map fairness definitions, intervention points, and evaluation practices to the FL lifecycle, from problem formulation to deployment, and identify recurring gaps in how fairness is conceptualized and operationalized. Our findings reveal five core pitfalls: (i) a narrow focus on the server-client architecture; (ii) a disconnect between fairness definitions and simulated use cases; (iii) the conflation of protecting the system's interests with protecting its users; (iv) a limited intervention scope for mitigation techniques; and (v) a lack of multi-stakeholder alignment where many fairness definitions could be relevant.

We also highlight how fairness efforts in FL interact with other concerns such as privacy and robustness, sometimes introducing new harms, such as disproportionate privacy degradation for minorities (Ling et al. 2024) or the misclassification of underrepresented clients as adversaries (Touat and Bouchenak 2023).

To address these gaps, we propose a harm-centered framework for guiding context-aware fair FL solutions. Our framework maps the FL lifecycle, spanning problem formulation, client selection, aggregation, evaluation, and deployment, against different sources of harm and stakeholder vulnerabilities.
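The lifecycle-to-harm mapping can be represented as a simple lookup structure. The stage names follow the paper's lifecycle; the example entries are our own illustrative paraphrases of harms mentioned in the surrounding text, not the paper's actual framework table:

```python
# Illustrative encoding of a harm-centered audit map over the FL lifecycle.
HARM_MAP = {
    "problem_formulation": "objectives that ignore minority use cases",
    "client_selection": "systematic exclusion of clients with scarce data or weak connectivity",
    "aggregation": "updates from underrepresented clients down-weighted or flagged as adversarial",
    "evaluation": "metrics that hide per-group disparities",
    "deployment": "rewards or model access misaligned with actual contribution",
}

def audit_questions(stage):
    """Return the guiding questions for one lifecycle stage."""
    risk = HARM_MAP[stage]
    return [
        f"Who is (or could be) harmed at the {stage} step?",
        f"What errors or exclusions may occur? (e.g., {risk})",
        "Which existing fairness definitions address this risk?",
    ]

for q in audit_questions("client_selection"):
    print(q)
```

The point of encoding the framework this way is that every technical intervention can be checked against each stage's questions, rather than against a single aggregate fairness metric.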
It encourages researchers to ask: Who is (or could be) harmed at this step? What kinds of errors or exclusions may occur? And how do these relate to existing fairness definitions? This approach offers a more grounded and context-aware lens for evaluating fairness claims and designing interventions.

Our contributions are twofold:
• We provide a critical, annotated review of the fairness-in-FL literature, uncovering patterns in definitions, motivations, and lifecycle coverage.
• We propose a harm-centered framework for fairness in FL, connecting technical decisions to stakeholder impact, and offering actionable recommendations for more accountable, context-aware research.

2 Background

Federated learning (FL) is a decentralized ML paradigm in which a global model is trained collaboratively across multiple participants (e.g., smartphones, hospitals, or institutions) without exchanging raw data. Clients train the model locally and share updates (e.g., gradients or model weights) with a central server, which aggregates them to produce an updated model. This process repeats over multiple communication rounds (Kairouz et al. 2021). Depending on the type of participants, we refer to the setting as cross-silo FL for institutional participants, and cross-device FL when clients are massively distributed IoT and mobile devices. This distinction helps move beyond the abstract client-server model by grouping use cases with similar resource constraints and aligning them with the types of stakeholders both operating the clients and affected by the resulting model. For example, mobile users
both provide data and are affected by the model in cross-device FL, whereas in cross-silo FL, institutions (e.g., banks, hospitals) operate as the clients but use their models to make decisions impacting their customers.

While FL is often introduced as a privacy-preserving alternative to centralized ML, its distributed nature introduces new technical and ethical challenges. In particular, fairness in FL must be reconsidered in light of structural asymmetries between clients. These asymmetries are not incidental; they are deeply ingrained in the realities of FL deployment.

2.1 Federated Learning Lifecycle

Although implementations of FL vary across settings and applications, most systems follow a common lifecycle that structures the learning process and the coordination between clients and the central server. The following outline represents a basic FL lifecycle. While researchers have proposed many variations, including hierarchical (Abad et al. 2020) and clustered (El-Rifai et al. 2025) setups and personalized training schemes (Tan et al. 2022), these core steps remain the most widely adopted and studied in practice:

1. Problem Formulation: The task to be solved is selected, formalized, and translated into an ML problem. This includes defining objectives, choosing input/output formats and loss functions, and defining the different constraints.

2. Model Initialization: An initial global model is defined, often using a standard architecture and optionally initialized with pretrained weights. This phase also includes setting key hyperparameters (e.g., learning rate, batch size) and establishing the stopping criterion (e.g., number of rounds or a convergence threshold). Then, in each iteration, referred to as a communication round, the following steps 3 to 5 are repeated until the stopping criterion is met.
3. Client Selection: During training, a subset of clients (e.g., institutions, mobile devices) is selected in each round based on varying criteria such as availability, resources, or data attributes. These choices affect which clients contribute to the model.
4. Local Training: Clients train the model locally on their private data. Key design decisions at this stage include the number of local epochs, optimizer choices, and any personalization applied before sending updates.
5. Model Aggregation: The central server aggregates client updates (e.g., using weighted averaging). The aggregation strategy and timing (synchronous/asynchronous) influence which updates contribute and how they are valued.
6. Evaluation: The model is evaluated—either centrally, on a held-out dataset, or by clients locally. Evaluation choices affect which metrics are prioritized and accounted for.
7. Deployment and Incentives: The final model (or personalized variants) is deployed to clients or users. In some settings, reward mechanisms may be used to incentivize participation, often based on estimated contributions.
In addition to this cycle, further mechanisms are often added to enhance privacy and guarantee adversarial robustness.

2.2 Why Fairness in FL is Challenging

FL is fundamentally shaped by two structural properties: heterogeneity and scarcity. Heterogeneity refers to differences across clients along three critical dimensions (Molamohammadi et al. 2023):
• Data heterogeneity: Local datasets may differ in size, label distribution, feature representation, or demographic composition.
• Resource heterogeneity: Clients vary in computational capacity, network stability, and energy constraints.
• Contextual heterogeneity:
Clients operate under diverse legal, institutional, or social conditions that shape what data can be collected or shared.
Scarcity, meanwhile, refers to limited availability of data and compute resources—especially among clients serving minority populations or operating under infrastructural constraints. Scarcity amplifies the risk that some clients will be consistently underrepresented or excluded from the training process. Together, heterogeneity and scarcity lead to two key limitations for FL: (i) Partial participation: Not all clients participate equally or consistently, leading to sampling biases in model updates; (ii) Limited visibility: The server typically lacks access to protected attributes, demographic breakdowns, or downstream outcomes—making it difficult to measure or enforce fairness during training.

2.3 How Fairness is Defined in FL Research

Across the literature, several fairness notions have been proposed for FL (Shi, Yu, and Leung 2024), often adapted from centralized ML or cooperative systems. These include:
• Performance-centered fairness: The dominant definition of performance fairness found in the FL literature (Li et al. 2020; Yan et al. 2024; Xu et al. 2022; Chu et al. 2024a; Cong et al. 2024) refers to achieving similar or comparable model performance across different clients or groups of clients. A model ω1 is considered fairer than model ω2 with respect to the participants if, for a metric of interest (e.g., accuracy, loss), it has a smaller variance across clients. This definition can also be applied when comparing multiple client groups. Two additional definitions were proposed in this category: 1) a Rawlsian definition of fairness, where the objective is to optimize the performance of the worst-performing clients or group of clients (Öksüz et al. 2024; Mohri, Sivek, and Suresh 2019); and 2) individual fairness, where similar clients should have similar performance (Mashhadi et al. 2022).
• Group fairness-inspired definitions: Group fairness in ML emphasizes that predictions should not disadvantage underrepresented or unprivileged groups based on sensitive attributes such as race or gender. Building on these principles, researchers have sought to adapt classical group fairness criteria—such as equal opportunity, equalized odds, or demographic parity—to the federated learning setting, applying them both to global aggregates and to individual client models (Shi, Yu, and Leung 2024; Salazar et al. 2024). While these definitions are applicable to cross-device settings, they are mostly studied in the context of cross-silo collaborations, where institutions develop models that need to be fair to the populations they serve.
• Collaborative fairness: Ensuring that clients receive rewards, utility, or model quality proportionally to their contributions. This definition of fairness is particularly relevant for cross-silo FL settings, where data owners often compete within the same market and face increased risks from malicious actors and free-riders. Yu et al. (2020) define three fairness criteria for a fair distribution of the rewards: (1) contribution fairness: the data owner's payoff should be positively related to its contribution, (2) regret distribution fairness: the difference of regret among data owners should be minimized, and (3) expectation fairness: the variability of data owners' regret should be minimized. Here,
regret refers to the difference between the reward already received by the data owner and what they are supposed to receive.
• Participation fairness: Including under-represented and never-represented clients (Shi, Yu, and Leung 2024) in the training process. Client selection in FL (Cho, Wang, and Joshi 2022) is among the most studied steps of the FL lifecycle, as the subset selection can determine who gets to influence the model and how fast the FL model will converge. Partial participation and uneven representation of client groups might appear to be the easiest aspect to assess. Yet, given the complex and heterogeneous nature of the clients, in addition to the private nature of their data and other properties, it is challenging to determine which properties should guide client selection and how to assess the fairness of this process. The most straightforward approach is to define it at an individual level, where all clients are given an equal chance to contribute. This is often formulated as a long-term fairness constraint (Huang et al. 2021), which requires setting a minimum threshold on the number of times a client needs to be included in the FL process. Another option is to define client groups based on general knowledge of the system, such as geographical location (Lee, Kim, and Joo 2024; Saputra et al. 2019) (e.g., applications that might be impacted by the weather or the culture), or language preferences (e.g., next-word prediction) (Hard et al. 2018).
While these definitions offer technically tractable objectives, they are often introduced in isolation from questions of power, structural inequality, or downstream harm. In the following section, we examine how these fairness notions fall short when applied without regard to the use-case context, stakeholder involvement, and systemic constraints.

3 Method

To inform our position, we conducted a systematic literature review covering 121 papers that focus on bias and fairness in FL.
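The performance-centered criteria reviewed above reduce to simple statistics over per-client metrics. A minimal sketch, with invented per-client accuracy values purely for illustration:

```python
import statistics

def variance_fairness(per_client_metric):
    """Smaller variance across clients = fairer, under the dominant definition."""
    return statistics.pvariance(list(per_client_metric.values()))

def rawlsian_objective(per_client_metric):
    """Rawlsian view: the model is judged by its worst-off client."""
    return min(per_client_metric.values())

# Hypothetical per-client accuracies for two candidate global models.
acc_m1 = {"c1": 0.85, "c2": 0.83, "c3": 0.80}
acc_m2 = {"c1": 0.95, "c2": 0.90, "c3": 0.55}

fairer = "m1" if variance_fairness(acc_m1) < variance_fairness(acc_m2) else "m2"
# m1 wins on both criteria here: lower cross-client variance and a better worst case.
```

Note that the two criteria need not agree in general: a model can have a low variance while leaving every client with mediocre accuracy.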
We used DBLP to find papers with the keywords ('Fair*' + 'Federated Learning') and ('Bias' + 'Federated Learning'), published up until December 2024. As FL is multidisciplinary, we did not limit our search to ML venues, but also included papers from networking/telecommunications and economics venues. We filtered out: (a) comments, abstracts, theses, tutorials, slides, and other reports which are not research papers, (b) papers that use the term "bias" in other contexts, such as the bias-variance trade-off, (c) duplicates, and duplicates under different titles, (d) papers in predatory venues, and those behind paywalls and inaccessible through the authors' institutions.
Categorization: We manually categorize the papers depending on the fairness definition that was used, i.e., participation fairness, performance fairness, group fairness, and collaborative fairness. Note that many of these papers do not explicitly use this terminology, and that some overlap exists across the definitions. Further filtering was done for papers that explore the intersection with privacy and adversarial robustness, which we considered as a separate category when needed, although they might use the same fairness definitions. The resulting distribution: participation fairness 8.4%, performance-centered 28.0%, group fairness-inspired 34.0%, collaborative fairness 21.6%, surveys and other 8.0%.
Figure 1: Fairness paradigms covered in our analysis.
Annotation: For each paper, we identified (1) the motivations of the paper and whether it came with real-world examples, (2) the fairness definitions and the considered stakeholders, (3) the proposed interventions and which stages of the FL lifecycle they focus on; and (4) the datasets used in the evaluation.

4 From Definition to Deployment: Five Pitfalls in Federated Learning Fairness Research

Despite increasing attention to fairness in FL, many existing approaches operate with narrow or conflicting definitions, often optimized for system-level performance or developer convenience rather than equity for clients or end users. Drawing from our annotation of 121 papers, we identify five recurring pitfalls that hinder meaningful fair FL research and development.

4.1 Abstract System Formulation: Narrow Focus on Client-Server Architecture

One common observation is that fairness in FL is defined without explicit articulation of who is being protected, what harms are being mitigated, or whose values are being encoded. Definitions are often not related to stakeholder groups and contextual risks and harms. Several papers define fairness objectives without specifying whether they are optimizing for institutional clients or individual users, nor do they specify in which use cases their approaches would be applicable. FL is often described as a two-party protocol between a central server and distributed clients. This technical framing—rooted in system architecture—focuses on communication efficiency (Lim et al. 2020), model convergence (Lu et al. 2024), and privacy guarantees (Yin, Zhu, and Hu 2021) between these two roles. However, real-world FL deployments rarely involve just servers and clients. Instead, they are embedded in larger sociotechnical systems with multiple stakeholders, each with distinct roles, interests, and vulnerabilities (Chehbouni et al. 2025; Antunes et al. 2022).
For example, in cross-silo FL between healthcare institutions, the "client" may represent an entire hospital, but the downstream impact of the model affects patients, physicians, and administrators. Similarly, in financial applications, in addition to customers on whom decisions apply, regulators and auditors may also have a stake in how models are trained and evaluated. Developers, platform providers, and even the designers of benchmark datasets exert influence over what is optimized, who is included, and what trade-offs are made. This framing risks what Selbst et al. (2019) call an "abstraction error": treating fairness as a property of a technical subsystem, rather than of the broader sociotechnical system it serves. Justice is not an outcome of optimizing model convergence or performance variance in isolation; it requires attention to several aspects, especially power dynamics and harm. It requires understanding who is behind each abstract client, who bears the risk when performance degrades, and who gets excluded when design decisions are made. Treating clients as impersonal and theoretical entities misses the nested and broader institutional, social, and economic factors that shape FL. This lack of clarity is not just a technical oversight. It reflects a deeper failure to treat fairness as a sociotechnical concept in this subset of the research
community.

4.2 Abstract Evaluation: Synthetic Clients vs Real Harms

There is a significant mismatch between the concept that researchers aim to measure (fairness) and the measurement methods they use to evaluate it. This mismatch appears in two ways: the creation of synthetic clients focused solely on data heterogeneity, and the use of datasets unrelated to the motivating use cases. The majority of the annotated papers evaluate their frameworks using centralized ML datasets (e.g., MNIST, CIFAR-10, Adult, or COMPAS), by splitting them into synthetic "clients." For instance, among group fairness papers, many studies cite real-world applications in finance, justice, and healthcare, but evaluation typically relies on centralized ML datasets like Adult (Kohavi and Becker 1996), COMPAS (Angwin et al. 2016), and CelebA (Liu et al. 2015). A few use the more realistic ACS dataset (Ding et al. 2021), which models demographic variation across U.S. states, and only one paper both motivates and evaluates using real-world federated data (Chu et al. 2024b). While useful for prototyping, this practice obscures the sociotechnical context in which fairness actually matters. This is particularly concerning given that federated learning introduces unique challenges not present in centralized settings (§ 2.2). Only a small fraction of the literature uses datasets from real-world federated settings or evaluates across realistic data from silos or clients. In our analysis, we found that fewer than 10% of papers evaluated their fairness methods on domain-specific data. For instance, in performance fairness papers, healthcare is the most commonly cited motivating use case, with the goal of reducing disparities across silos (e.g., hospitals) (Yan et al. 2024; Xu et al. 2022). Similar concerns apply to cross-device FL, especially in personalized applications like mobile phones (Hard et al. 2018), where QoS must be consistent across clients.
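The "synthetic clients" practice described above usually means label-skew partitioning of a centralized dataset. A minimal sketch of the pattern, where a toy list of (x, label) pairs stands in for MNIST-style data; the partition logic is illustrative, not any specific benchmark's code:

```python
import collections
import random

def label_skew_partition(dataset, n_clients, labels_per_client, seed=0):
    """Split a centralized dataset into synthetic 'clients', each holding only a
    few labels -- the usual way heterogeneity is simulated. A sketch: shards may
    overlap here, whereas real benchmarks usually shard examples disjointly."""
    rng = random.Random(seed)
    by_label = collections.defaultdict(list)
    for x, y in dataset:
        by_label[y].append((x, y))
    labels = sorted(by_label)
    return [
        [ex for lab in rng.sample(labels, labels_per_client) for ex in by_label[lab]]
        for _ in range(n_clients)
    ]

toy = [(i, i % 5) for i in range(100)]  # stand-in dataset: 100 points, 5 labels
clients = label_skew_partition(toy, n_clients=4, labels_per_client=2)
# Each synthetic client now sees only 2 of the 5 labels.
```

The point of the critique is visible in the sketch itself: "heterogeneity" is reduced to which labels a shard receives, with no notion of demographics, resources, or context.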
Yet, most research is evaluated on ML benchmarks like MNIST and CIFAR-10, simulating heterogeneity through class imbalance or rotated images. These datasets do not capture real-world distributional shifts, which are shaped by demographic, geographic, or temporal factors, nor do they capture all the domain-specific challenges that come with the application. Instead, fairness is reduced to an oversimplified distributional problem, where the minority class serves as a proxy for real-world disadvantaged groups. Only five performance fairness papers evaluate on domain-specific datasets (Yan et al. 2024; Xia et al. 2024; Xu et al. 2022; Mashhadi et al. 2022; Selialia, Chandio, and Anwar 2022), mostly in healthcare. Likewise, of the 26 papers on collaborative fairness, only five are grounded in real use cases (Maheswari et al. 2024; Albaseer et al. 2024; Xu, Nanda, and Liang 2024; Donahue and Kleinberg 2021; Lu et al. 2022), three offer motivating examples (Lyu, Xu, and Wang 2020; Yu et al. 2020; Liu et al. 2023), and just one (Albaseer et al. 2024) conducts experiments using data relevant to its stated application. While these setups enable reproducibility and control, they abstract away the institutional and social contexts in which FL systems operate. As the technical interventions for fairness in FL are evaluated only over synthetic clients, it is easy
to miss the real impact of these solutions. Additionally, clients are modelled as interchangeable units (i.e., similar resources, contexts), as the focus is predominantly on data heterogeneity. The abstraction of equally resourced, equally motivated, or equally impacted clients hides some of the inequalities fairness mechanisms are meant to address.

4.3 Protection for the System, Not the Vulnerable

Another trend we observed is the prioritization of fairness definitions that aim to protect the system from free riders and adversaries, rather than the people the system serves. Collaborative fairness, which represents almost a third of the annotated papers, allocates rewards based on contribution to global utility, often estimated via Shapley values. While this may seem reasonable in settings with symmetric power (e.g., corporate collaboration), in reality it easily breaks down, as participation is shaped by unequal access to resources and heterogeneous data and contexts. In such settings, clients with noisy or underrepresented data would be penalized not because their data is intentionally low quality, but because it simply deviates from the majority's data. These frameworks' claimed goal is to provide structured methods for evaluating contributions and distributing rewards, and by doing so, they may inadvertently prioritize individual gains over collective outcomes. This individualistic perspective can conflict with the inherently cooperative nature of FL, where the primary objective is to collaboratively train a high-quality global model that benefits all participants. This becomes obvious in how collaborative fairness papers assign degraded models or lower rewards to clients deemed "low contributors" (Lyu, Xu, and Wang 2020; Yi et al. 2024; Adamek and Darup 2024), without considering that those clients might face structural barriers to participation.
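The penalization dynamic described above can be made concrete with a leave-one-out utility estimate, a cheap stand-in for the Shapley-value schemes these papers use. The utility function, target value, and client updates below are all invented for illustration:

```python
def global_utility(updates):
    """Toy utility: how close the averaged scalar update is to a 'validation'
    target of 1.0 (both the metric and the target are invented)."""
    if not updates:
        return -1.0
    avg = sum(updates.values()) / len(updates)
    return -abs(avg - 1.0)

def leave_one_out_contribution(updates):
    """A client's score = utility drop when its update is removed -- a cheap
    stand-in for Shapley-value contribution estimates."""
    base = global_utility(updates)
    return {
        c: base - global_utility({k: v for k, v in updates.items() if k != c})
        for c in updates
    }

# A client whose legitimate data simply deviates from the majority's ends up
# with a negative score -- and would be rewarded a degraded model as a
# "low contributor", regardless of why its data looks different.
updates = {"majority_a": 1.0, "majority_b": 1.1, "minority_c": 0.2}
scores = leave_one_out_contribution(updates)
```

Nothing in the score distinguishes a free rider from a client serving a structurally different population; that is precisely the critique.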
This logic is particularly troubling in high-stakes domains like healthcare or finance—often cited as motivating use cases for this fairness definition—where a poorly performing model may harm patients or customers rather than the institution itself. Ultimately, it is the patient or customer who pays the price. Although free riders present a potential risk, tackling such issues aligns more closely with adversarial robustness than algorithmic fairness. Collaborative fairness papers appear to prioritize adversarial robustness under the guise of protecting fairness for "honest" clients. Framing these mechanisms as fairness-preserving obscures the reality that they primarily aim to safeguard the system from misuse. While adversarial robustness frameworks often acknowledge the risk of false positives and incorporate mechanisms for correction (Touat and Bouchenak 2023), collaborative fairness schemes frequently cast exclusionary decisions, such as penalizing low-contribution clients, as inherently just. This framing leaves little room to question whether such clients are disadvantaged due to resource constraints rather than bad faith (Green 2021).

4.4 Disconnected Interventions: One-Stage Solutions to Lifecycle Harms

Existing interventions in FL in general, and fairness in FL in particular, tend to concentrate on a limited subset of the development pipeline, most notably aggregation schemes (Qi et al. 2024) and client selection (Fu et al. 2023). However, other critical stages such as problem formulation, model initialization, and evaluation remain largely overlooked. This narrow focus creates blind spots: biases introduced in earlier stages
may propagate unchecked, and the criteria used to assess fairness may fail to capture the outcomes that matter most. Our analysis reveals that few papers adopt a lifecycle perspective, which is essential to understanding how harms emerge and to designing effective and equitable interventions. Most interventions are localized to a few steps while others are overlooked, as illustrated in Fig. 2. Indeed, participation fairness papers focus on client selection (Cho, Wang, and Joshi 2022; Huang et al. 2021; Javaherian et al. 2024a) and asynchronous model aggregation (Gao et al. 2024; Wang et al. 2024a). A large portion of the surveyed literature on group fairness in FL (Ezzeldin et al. 2021; Makhija et al. 2024; Roy, Sharma, and Salekin 2024; Yue, Nouiehed, and Kontar 2021; Meerza et al. 2024; Papadaki et al. 2022; Selialia, Chandio, and Anwar 2023; Abay et al. 2020; Li et al. 2024b) proposed techniques that combine adapting local training (e.g., using regularizers, adversarial debiasing) with another intervention at the server level during model aggregation (e.g., reweighting). For this line of work, only one intervention was proposed for client selection (Zhang, Kou, and Wang 2020), and one at the personalization stage (Chu et al. 2024b). Meanwhile, to achieve performance fairness, the proposed solutions span a larger portion of the FL pipeline compared to group fairness, but the different steps have received varying attention. The interventions include model initialization (Chu et al. 2024a), client selection (Javaherian et al. 2024b), adjusting local training (Cong et al. 2024), model aggregation (Li et al. 2020), or personalization techniques (Xie et al. 2024) such as clustered FL (Chu et al. 2022).
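One recurring pattern above—adapting local training with a fairness regularizer before the update is sent to the server—can be sketched generically. The demographic-parity-style penalty, the group labels, and all numbers are illustrative, not any cited paper's method:

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups 'a' and 'b'."""
    rate = lambda g: sum(p for p, gr in zip(preds, groups) if gr == g) / groups.count(g)
    return abs(rate("a") - rate("b"))

def regularized_local_loss(task_loss, preds, groups, lam=0.5):
    """Local objective = task loss + lambda * fairness penalty, computed on the
    client's own data during local training."""
    return task_loss + lam * demographic_parity_gap(preds, groups)

preds = [1, 1, 1, 0, 0, 1]               # binary predictions on local data
groups = ["a", "a", "a", "b", "b", "b"]  # sensitive attribute per example
loss = regularized_local_loss(0.35, preds, groups)
```

Note the limitation this sketch makes visible: the penalty only sees the client's local groups, which is why such regularizers are typically paired with a server-side intervention at aggregation time.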
Despite these contributions, the disjointed nature of current interventions leaves key sources of harm unaddressed; without a holistic, lifecycle-oriented approach, fairness efforts risk being reactive, fragmented, and insufficiently adapted to the clients' heterogeneity.

4.5 Disconnected Impact: No Single Fairness Definition is Enough

In many real-world FL scenarios, fairness cannot be reduced to a single definition. Each of the paradigms discussed (i.e., performance-centric, group fairness, and collaborative fairness) captures only one axis of concern. Treating them as mutually exclusive leads to incomplete or even counterproductive interventions. In our analysis, few papers considered multiple fairness notions at once (Amiri et al. 2022; Gálvez et al. 2021); however, these papers were mostly exploring the impact of privacy techniques on performance and group fairness, rather than their interplay. Instead, we believe a holistic evaluation of different fairness aspects should be adopted. FL is a multi-stakeholder system; thus, it is important to evaluate the harm to each stakeholder. Consider a cross-silo FL deployment in the financial sector, one of the few recurring examples across the three different parts of the annotated literature. Multiple banks may collaborate to train a shared fraud detection model. A performance-centric definition might require that the global model perform well for all participating institutions. However, each institution may serve different customer bases, raising concerns about group fairness: will the model perform equally across demographic subgroups
within each silo? At the same time, institutions have competing interests. Supposing that the institutions are equally powerful, a so-called collaborative fairness mechanism may be considered to ensure that no institution intentionally degrades the performance for others. In this example, the fairness concerns are not only relevant, they are entangled. Optimizing for just one could undermine the others. For instance, maximizing global performance may disproportionately benefit larger clients; enforcing group fairness locally may reduce global accuracy; prioritizing contribution-based rewards may punish institutions that serve minority populations. This illustrates a broader point: fairness in FL is not a property of a single metric or stage in the pipeline. It is an emergent property of the full system, shaped by technical design choices, institutional power dynamics, and the social roles of clients and end users. Addressing fairness meaningfully requires acknowledging this multidimensional, multi-stakeholder nature, and moving beyond one-size-fits-all definitions.

5 A Harm-Centered Framework for Fairness in Federated Learning

ML systems can lead to harms such as unequal performance across demographic groups, inequitable resource distribution, or the reinforcement of stereotypes (Shelby et al. 2023). FL inherits these risks from traditional ML while introducing new ones due to its decentralized structure and structural asymmetries across clients. To design fairer FL systems, we must shift from identifying isolated biases to understanding how harm emerges from developer decisions throughout the lifecycle, influenced by heterogeneity (in data, context, resources) and scarcity (of data and compute power). Following our analysis in Section 4, we propose a framework for identifying potential harms and guiding fairness interventions throughout the FL pipeline.

Figure 2: Lifecycle steps addressed in fairness papers, by fairness paradigm. (Bar chart; x-axis: lifecycle steps—initialization/preprocessing, client selection, local training, aggregation, evaluation, and additional mechanisms; y-axis: number of papers; bars grouped by performance, group, collaborative, and participation fairness.)

Rather than viewing bias as a fixed category, we examine how developer choices, constrained by heterogeneity across clients, may introduce harms such as quality-of-service (QoS), allocative, and representational harms. Below, we walk through each stage of the FL lifecycle and highlight where structural conditions and technical design interact to produce or amplify harm. We illustrate our framework in Figure 3.

5.1 Federated Learning Lifecycle Decisions and Harms

Developers have little control and visibility over data collection and local model training, in comparison with centralized settings. However, different decisions that are made throughout the FL lifecycle propagate existing biases and introduce new ones. Indeed, data generation and collection introduces 1) historical bias: since data reflects the cultural, institutional, political, and socioeconomic context surrounding its extraction, it can encompass past or existing structural inequalities and injustices (e.g., the COMPAS dataset (Angwin et al. 2016)); 2) representation/selection bias, which arises when training data is not representative of the real-world population or use-case distribution (e.g., ImageNet data (Yang et al. 2020)); and 3) measurement bias, introduced by how features or labels are collected, chosen, and used (e.g., faulty sensors,
inadequate proxies (Kleinberg et al. 2018)). Such biases can be amplified during local training, introducing learning bias. In the following, we focus on the different steps where there is more control and visibility, as well as biases that are specifically tied to FL's collaborative nature.

Figure 3: Overview of our harm-centered framework for fairness in FL. At each step of the FL lifecycle, we identify biases within stakeholders' control or outside of their control (red-filled boxes), structural challenges, and potential harms that might emerge at each step. (Diagram; the framework maps the lifecycle steps—problem formulation, model initialization, client selection, local training, model aggregation, evaluation, and contribution/incentives—to biases such as historical, measurement, learning, representation, participation, aggregation, collaboration, and evaluation bias; to structural challenges such as contextual, resource, and data heterogeneity, resource scarcity, limited visibility, power asymmetries, and diverging stakeholder interests; and to potential harms: quality-of-service, allocative, representational, privacy, and reputational harms.)

Problem Formulation Developers often retain full control over how problems are framed, including defining learning objectives, inputs and outputs, and success metrics (Passi and Barocas 2019; Suresh and Guttag 2021). In FL, contextual heterogeneity across clients, such as differing institutional goals or regional constraints, complicates this step. When problem formulations assume a universal objective, they may be misaligned with local needs (Raji et al. 2022). For instance, health prediction tasks that ignore regional disease prevalence or local medical practices risk producing models that systematically underperform for some populations (Asiedu et al. 2024). Additionally, oversimplified fairness criteria may be selected for convenience rather than appropriateness (Simson, Fabris, and Kern 2024), leading to learning bias (Suresh and Guttag 2021). Thus, rather than just asking technical questions about the solution, moving away from the client-server abstraction is necessary. Considering who this model is for, what goals are embedded in this task, and whose outcomes matter can positively impact the formulation to be more applicable.

Model Initialization Model choices, including size, architecture, and pretraining, are often made before the start of training, and in tandem with problem formulation. However, these decisions can exclude clients with limited resources, especially in cross-device FL, where large models may not run on older or lower-powered devices (McMahan et al. 2017; Bonawitz 2019). This creates participation bias, where only well-resourced clients can meaningfully contribute to training. Similarly, using a pre-trained model may introduce representation bias if the pretraining dataset does not reflect the data distributions of FL clients (Wang et al. 2023; Chuang et al. 2023; Kaur and Jadhav 2023).
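The resource-exclusion effect of model initialization can be sketched as a simple feasibility check; the 0.5 RAM headroom, the device fleet, and the model footprint are invented numbers for illustration:

```python
def can_participate(model_size_mb, device_ram_mb, headroom=0.5):
    """A device can train locally only if the model fits within a fraction of
    its RAM (headroom and all sizes are assumptions, not measured values)."""
    return model_size_mb <= device_ram_mb * headroom

devices_ram_mb = {"flagship_phone": 8000, "budget_phone": 2000, "old_phone": 512}
model_size_mb = 1200  # a large architecture chosen at initialization time

eligible = [d for d, ram in devices_ram_mb.items()
            if can_participate(model_size_mb, ram)]
# Only the best-resourced device survives the check: participation bias is
# baked in before a single training round runs.
```

Shrinking the model (or offering quantized variants) widens `eligible`, which is why initialization choices are a fairness lever, not just an accuracy one.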
As a result, it is important to consider the possibility that the chosen model architecture or pretrained model disadvantages clients with limited resources or underrepresented data.

Client Selection While many aspects of data collection
are not under the developers' control, especially in cross-device FL, some of the biases and harms found in data collection are shifted to client selection. In cross-device FL, the ideal scenario would include full participation from all clients at each communication round. Nonetheless, clients are often offline, have low battery or poor connectivity, or are otherwise unavailable (Tak and Cherkaoui 2021). Additionally, privacy concerns in FL due to memorization remain present (Thakkar et al. 2020). If clients participate too often, their data may be over-exposed during model updates. This makes purely random or round-robin selection the ideal. Moreover, the most adopted FL aggregation scheme is synchronous, meaning that if a client does not send their update before a set deadline, their update is discarded. Additionally, communication bandwidth is a bottleneck for FL, hence only a few clients can participate in each round. Thus, optimizing the client selection step has become a key strategy in FL (Nishio and Yonetani 2019; Zhu et al. 2022; Cho, Wang, and Joshi 2022). The proposed schemes often try to maximize the number of collected updates in order to improve the performance of the global model. While these schemes might accelerate the convergence of the training on average, they are often biased towards clients with more powerful devices and better connectivity. Slow clients are often labeled as stragglers and dropped out of the training round. On the opposite side of the spectrum, other strategies focus on data properties, where clients are selected in proportion to the size or quality of their data, meaning that clients with more data (or potentially more valuable data) are more likely to be selected (Goetz et al. 2019). However, data size is not a good proxy for the meaningfulness of a contribution, due to redundancy (Tak and Cherkaoui 2021).
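Data-size-proportional selection can be sketched in a few lines; the client names and dataset sizes are invented, and sampling is done with replacement purely as a simplification:

```python
import random

def size_proportional_select(client_sizes, k, rng):
    """Sample k clients with probability proportional to local dataset size
    (with replacement, as a sketch of the selection policy)."""
    names = list(client_sizes)
    weights = [client_sizes[n] for n in names]
    return rng.choices(names, weights=weights, k=k)

rng = random.Random(0)
sizes = {"big_org": 10_000, "clinic": 500, "rural_clinic": 50}
counts = {n: 0 for n in sizes}
for _ in range(1000):
    for c in size_proportional_select(sizes, k=1, rng=rng):
        counts[c] += 1
# big_org wins the overwhelming majority of rounds; rural_clinic almost never trains.
```

Over the 1000 simulated rounds, the large client is expected to be chosen roughly 95% of the time, which is the participation bias the text describes.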
Moreover, clients with larger datasets (e.g., big companies or institutions with more resources) may dominate the model, potentially drowning out smaller clients or those with less data but more diverse or unique insights. Client selection strategies that overlook resource and data heterogeneity and their interplay are how participation bias can manifest in FL. Consequently, it is important to ask: Are some clients less likely to participate? Are selection rules exclusionary?

Model Aggregation
FL aggregation combines updates from clients to form a global model. Developers make two decisions at this stage: When to aggregate? and How to weigh the updates? For the first question, synchronous aggregation is the most commonly adopted scheme. It requires that model updates be averaged or combined at fixed intervals, with updates arriving after the fixed deadline being ignored. This inevitably exacerbates participation bias, as clients with slower internet connections or lower computational power might not be able to send their updates in time for synchronization (Gao et al. 2024). Over time, this leads to a model that better represents the data from faster clients but fails to accurately capture the diversity and variability of the entire population of
clients. The second decision is choosing the update weighting scheme. The first proposed and most widely adopted method (McMahan et al. 2017) weights models by the size of each client's dataset. This technique amplifies aggregation bias by prioritizing dominant clients and suppressing minority contributions. Over time, this yields models that reflect majority distributions and perform poorly for smaller, less represented clients. Different aggregation schemes have been proposed (Qi et al. 2024), tackling various challenges in FL, including robustness and asynchronous updates. This step is also a popular target of group-fairness techniques in FL. As a decisive step, it is important to ask: 1) How does the aggregation deadline affect which clients are able to participate, and which are excluded? and 2) How does the aggregation weighting scheme influence the fairness outcome being targeted?

Evaluation
FL evaluations often rely on benchmark datasets or average client performance. However, due to limited visibility into local data, these metrics may obscure disparities (Lai et al. 2022). As in centralized ML, this can be referred to as evaluation bias, where the methods or conditions used to assess a model's performance do not accurately represent the real-world scenarios in which it will be deployed. A model that performs well on average may still harm specific clients, especially if fairness is not evaluated across subgroups. Additionally, relying solely on benchmarks and simulated environments can further distort conclusions. While these tools offer a necessary step for developers to prototype, they remain controlled conditions with standardized datasets and predefined assumptions, whereas real-life conditions can differ significantly.
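The dataset-size weighting scheme of McMahan et al. (2017) can be illustrated with a toy scalar example. The `weighted_aggregate` helper and the numbers are hypothetical, chosen only to show how data-rich clients can drown out a minority client's signal.

```python
def weighted_aggregate(updates, sizes):
    """FedAvg-style aggregation: average scalar model updates
    weighted by each client's dataset size."""
    total = sum(sizes)
    return sum(u * n / total for u, n in zip(updates, sizes))

# Three clients propose scalar updates; two hold 90% of the data.
updates = [1.0, 1.0, -1.0]   # the minority client disagrees
sizes   = [450, 450, 100]    # dataset sizes

global_update = weighted_aggregate(updates, sizes)
print(global_update)  # 0.8 -- the minority client's signal is nearly drowned out
```

An unweighted mean of the same updates would be about 0.33; the size-weighted average pulls the global model much closer to the majority clients, which is the aggregation bias discussed above.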
Thus it is necessary to adopt detailed metrics (e.g., performance, client participation and drop-out, and group-fairness metrics), and to continually monitor the various aspects of the FL process after deployment. For this part we should ask: Which metrics are we using? On which data do we evaluate? For whom do these metrics matter? And how often do we need to assess the system's performance?

Contribution and Incentives
Incentive mechanisms in FL aim to reward clients based on their contributions to the global model. However, measuring "contribution" remains an open and contested problem. Existing approaches often favour clients whose data aligns with dominant patterns or leads to measurable performance gains in global utility. Meanwhile, clients providing rare or minority data, which may be critical for generalization and for group or performance fairness, are undervalued or excluded. This creates a feedback loop that reinforces inequality and marginalizes those already underrepresented; we refer to this issue as collaboration bias. In FL, clients already bear real costs: computational, communicational, and sometimes organizational. In theory, participation should offer sufficient returns to justify these investments, whether through improved local model performance, access to the global model, or monetary compensation. While improving contribution evaluation is a meaningful technical objective, it also risks missing the larger question: What purpose do additional incentives serve in the first place? To move forward, future work must grapple with contextual questions: Who is
affected by the final model? What are the actual costs borne by clients? How would an incentive scheme shape participation and power within the federation? Without answers to these questions, incentive design risks introducing new harms, such as collaboration bias, where incentive mechanisms misattribute rewards and punish honest but nonconforming clients.

Privacy and Robustness Mechanisms
Privacy guarantees and robustness defences are essential in FL, but they are not neutral.

Privacy: As keeping the data local has proven insufficient for privacy protection, combinations of FL with privacy-preserving mechanisms such as differential privacy (DP) and secure aggregation have been adopted in the literature and in practice, for instance through DP-SGD. However, balancing privacy and fairness presents a challenge. On one hand, DP degrades the performance of models and disproportionately affects underrepresented groups (Bagdasaryan, Poursaeed, and Shmatikov 2019; Ling et al. 2024). On the other hand, prioritizing fairness can heighten privacy risks by revealing sensitive attributes and through memorization (Pentyala et al. 2022). Given these tradeoffs and disparate impacts on privacy and performance, it is important to ask: What are the side effects of the privacy mechanisms? And is everyone equally protected?

Adversarial Robustness: Adversarial attacks in FL, such as model/data poisoning, free riding, and backdoor attacks, are often detected through anomaly detection or similarity metrics, using techniques like reputation systems (Lyu, Xu, and Wang 2020), performance evaluation, and statistical tests (e.g., clustering or outlier detection) (Sattler, Müller, and Samek 2020). However, due to data heterogeneity, falsely identifying a legitimate client as an adversary is a common risk with significant consequences (Touat and Bouchenak 2023; Ren et al. 2024).
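As a toy illustration of this false-positive risk, a simple z-score outlier test on update norms, one common anomaly heuristic, can flag a legitimate client whose non-IID data produces an unusual update. The `flag_outliers` helper, the threshold, and the norm values are hypothetical.

```python
import statistics

def flag_outliers(update_norms, z_thresh=2.0):
    """Flag clients whose update norm deviates more than z_thresh
    standard deviations from the mean (a common anomaly heuristic)."""
    mean = statistics.mean(update_norms)
    stdev = statistics.pstdev(update_norms)
    return [abs(n - mean) / stdev > z_thresh for n in update_norms]

# Nine clients with similar data send similar updates; one legitimate
# client with a very different (non-IID) local distribution does not.
norms = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 4.0]
flags = flag_outliers(norms)
print(flags.index(True))  # the non-IID client (index 9) is flagged as adversarial
```

Nothing in the statistics distinguishes this honest-but-different client from a poisoning adversary, which is why purely statistical defences carry the consequences described next.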
Misclassified clients may face unjust reputation damage, exclusion from future tasks, financial losses tied to missed incentives, and reduced opportunities for collaboration. These outcomes can erode trust, foster alienation, and ultimately discourage participation in FL. When designing robustness mechanisms, developers must ask: What are the consequences of a false-positive detection, and is the cost of excluding a legitimate client greater than the risk posed by an undetected adversary? It is thus necessary to ensure that these additional mechanisms do not introduce additional harms, while providing equal protection to different stakeholders.

5.2 Downstream Harms in Federated Learning
FL inherits many of the fairness concerns of centralized ML but introduces new layers of complexity due to its distributed nature and client heterogeneity. In this section, we examine different types of harms that can arise in FL, and we map each to the corresponding biases and stages in the FL lifecycle that contribute to them. Understanding how harms emerge and propagate is essential for developing targeted and context-aware fairness interventions. In Section 5.1, we noted that several decisions lead to the exclusion of minority populations. In practice, such harms translate into QoS, allocative, and representational harms. While these harms are also noted in centralized ML, they take different forms in FL settings. Moreover, the added mechanisms for privacy and adversarial robustness
introduce disparities and potential damages. In particular, the biases in the FL lifecycle translate into the following harms:

• Quality of Service (QoS) Harms. These occur when the FL model performs unequally across clients. A typical cause is aggregation bias, where global model updates disproportionately reflect data from dominant clients (e.g., those with more frequent participation or more standard data distributions). This harm is often introduced during model aggregation (Michieli and Ozay 2021), but can be seeded earlier through biased model initialization or improper problem formulation (e.g., assuming all clients have similar objectives (Suresh and Guttag 2021)).

• Allocative Harms. These relate to unfair allocation of benefits or resources informed by FL models. They may result from representation bias or learning bias at the data and training stages, where certain groups are underrepresented or oversimplified. In financial applications, for instance, credit scoring systems built using FL can exclude certain regions or demographics if their data was sparsely sampled or downweighted (Ding et al. 2021). Incentive mechanisms that reward "high-contributing" clients also risk allocative harms when contributions are unfairly evaluated (Albaseer et al. 2024).

• Representational Harms. FL systems may reinforce stereotypes or marginalize specific groups when measurement or historical bias is present at the data collection step (Yang et al. 2020; Kleinberg et al. 2018). These harms are especially prominent in applications involving vision (Li et al. 2024a) and language (Gallegos et al. 2024).

• Privacy Harms. While FL aims to preserve privacy by design, privacy-enhancing techniques like differential privacy can have disparate effects. For instance, clients with small datasets or outlier data distributions may suffer from higher performance degradation due to added noise (Bagdasaryan, Poursaeed, and Shmatikov 2019).
These harms are introduced by protection mechanisms, but are shaped by earlier choices around data representation and client sampling.

• Reputational Harms. When robustness mechanisms such as anomaly detection or reputation scoring are deployed, clients with nonconforming updates (e.g., due to non-IID data) risk being misclassified as adversaries (Touat and Bouchenak 2023; Ren et al. 2024). This can result in exclusion from training, lost incentives, or damaged reputations, even when clients acted in good faith. These harms often emerge during the aggregation or incentive-distribution stages but stem from unaccounted-for data heterogeneity.

6 Takeaways and Path Forward
Our analysis of fairness in FL highlights a disconnect between how fairness is defined in research and how harms manifest in practice. If fairness is to be meaningful in FL, it must be grounded not only in optimization objectives, but in the structural conditions and institutional realities shaping the collaboration. We propose a framework emphasizing that fairness in FL is not a single technical problem, but a distributed, sequential, multi-stakeholder, and context-dependent challenge. Addressing fairness requires understanding where structural inequalities shape developer decisions, and how those decisions affect different stakeholders in different ways. While we focused on the classic horizontal FL lifecycle, it is important to recognize that the pipeline is not fixed: additional steps such as personalization or clustering may be introduced, and design choices at each stage can
evolve to better address fairness and contextual needs. Following our analysis, we note some key takeaways for future work:

1. Fairness requires contextually grounded evaluation. The datasets most commonly used in FL fairness research are inherited from centralized ML and fail to reflect the complexity of collaborative scenarios. Many rely on synthetic or oversimplified distributions, with minority classes used as proxies for structural disadvantage. These abstractions risk obscuring real harms and may result in cherry-picked evaluations (Lones 2021). Only a few datasets, such as folktables (Ding et al. 2021) and flamby (Ogier du Terrail et al. 2022), reflect real-world fairness and FL concerns, yet they remain limited in scope. Advancing fairness in FL will require better datasets, more realistic simulations, and evaluation strategies that reflect real-world risks.

2. Fairness must account for real-world harms and stakeholder impact. Understanding fairness in FL requires identifying who is affected, by which kinds of errors, and how these translate into tangible harms. This includes underrepresented clients misclassified as adversaries (Touat and Bouchenak 2023; Ren et al. 2024), institutions excluded due to limited data (Goetz et al. 2019), or regions harmed by biased global updates (Asiedu et al. 2024). Researchers must go beyond convergence metrics to consider harms like reduced quality of service, loss of trust, and the perpetuation of inequity.

3. Lifecycle harms require lifecycle thinking. Many fairness interventions in FL target isolated steps, with a focus on client selection and aggregation. However, harms often emerge from a chain of design choices across the FL lifecycle. For example, performance disparities may stem as much from problem formulation or model initialization as from aggregation.
Tackling fairness effectively thus demands a holistic view: interventions should account for how earlier decisions constrain fairness downstream, and solutions must span multiple stages to achieve the fairness goals.

4. Fairness definitions are not interchangeable. Performance, group, and collaborative fairness objectives may conflict, especially in heterogeneous systems. In some FL settings, such as financial or healthcare collaborations, multiple fairness notions may be simultaneously relevant. For instance, clients may seek equitable performance (performance fairness), while also serving diverse subpopulations (group fairness) and contributing to shared resources (collaborative fairness). Future work should investigate how these fairness objectives interact and how conflicts between them can be resolved.

5. Stakeholders must shape the design process. Fairness cannot be defined externally to those it affects. Participatory approaches (Birhane et al. 2022) that engage clients, domain experts, and impacted communities can improve problem formulation, risk assessment, and modeling decisions. As FL distributes control and responsibility across many actors, it also demands more inclusive governance mechanisms. Harms cannot be addressed without first identifying who could be harmed, which errors are most consequential, and how these manifest in specific contexts (Chehbouni et al. 2025; Shelby et al. 2023; Abercrombie et al. 2024).

7 Limitations
Our analysis is subject to several limitations. First, we focus primarily on horizontal FL, as it is the most widely studied variant in the fairness literature; however, other forms such as vertical FL present different
challenges that merit separate examination. Second, the coverage of our annotations may be constrained by the chosen library and search criteria, although we believe the sample is representative of the current research landscape. Finally, despite our effort to systematize the annotations, the process inevitably involves interpretation, which may introduce subjectivity or overlook subtle nuances.

References
Abad, M. S. H.; Ozfatura, E.; Gunduz, D.; and Ercetin, O. 2020. Hierarchical federated learning across heterogeneous cellular networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8866–8870. IEEE.
Abay, A.; Zhou, Y.; Baracaldo, N.; Rajamoni, S.; Chuba, E.; and Ludwig, H. 2020. Mitigating Bias in Federated Learning. CoRR, abs/2012.02447. ArXiv: 2012.02447.
Abercrombie, G.; Benbouzid, D.; Giudici, P.; Golpayegani, D.; Hernandez, J.; Noro, P.; Pandit, H.; Paraschou, E.; Pownall, C.; Prajapati, J.; et al. 2024. A collaborative, Human-Centred taxonomy of AI, algorithmic, and automation harms. arXiv preprint arXiv:2407.01294.
Adamek, J.; and Darup, M. S. 2024. Privacy-preserving gradient-based fair federated learning. In 2024 10th International Conference on Control, Decision and Information Technologies (CoDIT), 2132–2138. ISSN: 2576-3555.
Albaseer, A.; Abdi, N.; Abdallah, M.; Qaraqe, M.; and Al-Kuwari, S. 2024. FedPot: A Quality-Aware Collaborative and Incentivized Honeypot-Based Detector for Smart Grid Networks. IEEE Transactions on Network and Service Management, 21(4): 4844–4860.
Amiri, S.; Belloum, A.; Nalisnick, E. T.; Klous, S.; and Gommans, L. 2022. On the impact of non-IID data on the performance and fairness of differentially private federated learning. In 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN Workshops 2022, Baltimore, MD, USA, June 27-30, 2022, 52–58. IEEE.
Angwin, J.; Larson, J.; Mattu, S.; and Kirchner, L. 2016. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, 23: 77–91.
Annapareddy, N.; Preston, J.; and Fox, J. 2023. Fairness and Privacy in Federated Learning and Their Implications in Healthcare. CoRR, abs/2308.07805. ArXiv: 2308.07805.
Antunes, R. S.; André da Costa, C.; Küderle, A.; Yari, I. A.; and Eskofier, B. 2022. Federated learning for healthcare: Systematic review and architecture proposal. ACM Transactions on Intelligent Systems and Technology (TIST), 13(4): 1–23.
Asiedu, M. N.; Dieng, A.; Haykel, I.; Rostamzadeh, N.; Pfohl, S.; Nagpal, C.; Nagawa, M.; Oppong, A.; Koyejo, S.; and Heller, K. 2024. The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–24. San Luis Potosi, Mexico: ACM. ISBN 9798400712227.
Bagdasaryan, E.; Poursaeed, O.; and Shmatikov, V. 2019. Differential Privacy Has Disparate Impact on Model Accuracy. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Balbierer, B.; Heinlein, L.; Zipperling, D.; and Kühl, N. 2024. A Multivocal Literature Review on Privacy and Fairness in Federated Learning. CoRR, abs/2408.08666. ArXiv: 2408.08666.
Birhane, A.; Isaac, W.; Prabhakaran, V.; Diaz, M.;
Elish, M. C.; Gabriel, I.; and Mohamed, S. 2022. Power to the people? Opportunities and challenges for participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8.
Bonawitz, K. 2019. Towards federated learning at scale: System design. arXiv preprint arXiv:1902.01046.
Chehbouni, K.; Cock, M. D.; Caporossi, G.; Taik, A.; Rabbany, R.; and Farnadi, G. 2025. Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy. arXiv:2501.12537.
Chen, H.; Zhu, T.; Zhang, T.; Zhou, W.; and Yu, P. S. 2024. Privacy and Fairness in Federated Learning: On the Perspective of Tradeoff. ACM Comput. Surv., 56(2): 39:1–39:37.
Cho, Y. J.; Wang, J.; and Joshi, G. 2022. Towards understanding biased client selection in federated learning. In International Conference on Artificial Intelligence and Statistics, 10351–10375. PMLR.
Chu, W.; Xie, C.; Wang, B.; Li, L.; Yin, L.; Zhao, H.; and Li, B. 2022. FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data. CoRR, abs/2207.10265. ArXiv: 2207.10265.
Chu, Y.-W.; Han, D.-J.; Hosseinalipour, S.; and Brinton, C. G. 2024a. Rethinking the Starting Point: Enhancing Performance and Fairness of Federated Learning via Collaborative Pre-Training. CoRR, abs/2402.02225. ArXiv: 2402.02225.
Chu, Y.-W.; Hosseinalipour, S.; Tenorio, E.; Cruz, L.; Douglas, K.; Lan, A. S.; and Brinton, C. G. 2024b. Multi-layer personalized federated learning for mitigating biases in student predictive analytics. IEEE Transactions on Emerging Topics in Computing.
Chuang, C.-Y.; Jampani, V.; Li, Y.; Torralba, A.; and Jegelka, S. 2023. Debiasing vision-language models via biased prompts. arXiv preprint arXiv:2302.00070.
Cong, Y.; Qiu, J.; Zhang, K.; Fang, Z.; Gao, C.; Su, S.; and Tian, Z. 2024. Ada-FFL: Adaptive computing fairness federated learning. CAAI Trans. Intell. Technol., 9(3): 573–584.
Ding, F.; Hardt, M.; Miller, J.; and Schmidt, L. 2021. Retiring Adult: New Datasets for Fair Machine Learning. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y.; Liang, P. S.; and Vaughan, J. W., eds., Advances in Neural Information Processing Systems, volume 34, 6478–6490. Curran Associates, Inc.
Donahue, K.; and Kleinberg, J. M. 2021. Models of fairness in federated learning. CoRR, abs/2112.00818. ArXiv: 2112.00818.
El-Rifai, O.; Ali, M. B.; Megdiche, I.; Peninou, A.; and Teste, O. 2025. A Survey on Cluster-based Federated Learning. arXiv preprint arXiv:2501.17512.
Ezzeldin, Y. H.; Yan, S.; He, C.; Ferrara, E.; and Avestimehr, S. 2021. FairFed: Enabling Group Fairness in Federated Learning. CoRR, abs/2110.00857. ArXiv: 2110.00857.
Fu, L.; Zhang, H.; Gao, G.; Zhang, M.; and Liu, X. 2023. Client Selection in Federated Learning: Principles, Challenges, and Opportunities. IEEE Internet of Things Journal, 10(24): 21811–21819.
Gallegos, I. O.; Rossi, R. A.; Barrow, J.; Tanjim, M. M.; Kim, S.; Dernoncourt, F.; Yu, T.; Zhang, R.; and Ahmed, N. K. 2024. Bias and fairness in large language models: A survey. Computational Linguistics, 50(3): 1097–1179.
Gao, J.; Mavromatis, I.; Li, P.; Carnelli, P. E.; and Khan, A. 2024. Mitigating System Bias in Resource Constrained Asynchronous Federated
Learning Systems. CoRR, abs/2401.13366. ArXiv: 2401.13366.
Goetz, J.; Malik, K.; Bui, D.; Moon, S.; Liu, H.; and Kumar, A. 2019. Active Federated Learning. ArXiv:1909.12641 [cs].
Green, B. 2021. The contestation of tech ethics: A sociotechnical approach to technology ethics in practice. Journal of Social Computing, 2(3): 209–225.
Gálvez, B. R.; Granqvist, F.; Dalen, R. C. v.; and Seigel, M. 2021. Enforcing fairness in private federated learning via the modified method of differential multipliers. CoRR, abs/2109.08604. ArXiv: 2109.08604.
Hadi, M. U.; Qureshi, R.; Shah, A.; Irfan, M.; Zafar, A.; Shaikh, M. B.; Akhtar, N.; Wu, J.; Mirjalili, S.; et al. 2023. A survey on large language models: Applications, challenges, limitations, and practical usage. Authorea Preprints.
Hard, A.; Rao, K.; Mathews, R.; Ramaswamy, S.; Beaufays, F.; Augenstein, S.; Eichner, H.; Kiddon, C.; and Ramage, D. 2018. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604.
Huang, T.; Lin, W.; Wu, W.; He, L.; Li, K.; and Zomaya, A. Y. 2021. An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee. IEEE Transactions on Parallel and Distributed Systems, 32(7): 1552–1564. ArXiv:2011.01783 [cs].
Javaherian, S.; Panta, S.; Williams, S.; Islam, M. S.; and Chen, L. 2024a. FedFair^3: Unlocking Threefold Fairness in Federated Learning. CoRR, abs/2401.16350. ArXiv: 2401.16350.
Javaherian, S.; Panta, S.; Williams, S.; Islam, M. S.; and Chen, L. 2024b. FedFair^3: Unlocking Threefold Fairness in Federated Learning. In IEEE International Conference on Communications, ICC 2024, Denver, CO, USA, June 9-13, 2024, 3622–3627. IEEE.
Kairouz, P.; McMahan, H. B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A. N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. 2021. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2): 1–210.
Kaur, I.; and Jadhav, A. J. 2023.
A Comprehensive Study on Model Initialization Techniques Ensuring Efficient Federated Learning. ArXiv:2311.02100 [cs].
Kleinberg, J.; Ludwig, J.; Mullainathan, S.; and Rambachan, A. 2018. Algorithmic fairness. In AEA Papers and Proceedings, volume 108, 22–27. American Economic Association.
Kohavi, R.; and Becker, B. 1996. UCI Adult data set. UCI Machine Learning Repository, 5.
Konečný, J. 2016. Federated Learning: Strategies for Improving Communication Efficiency. arXiv preprint arXiv:1610.05492.
Lai, F.; Dai, Y.; Singapuram, S.; Liu, J.; Zhu, X.; Madhyastha, H.; and Chowdhury, M. 2022. FedScale: Benchmarking model and system performance of federated learning at scale. In International Conference on Machine Learning, 11814–11827. PMLR.
Lee, M.; Kim, H.; and Joo, C. 2024. Geographical Node Clustering and Grouping to Guarantee Data IIDness in Federated Learning. arXiv preprint arXiv:2410.15693.
Li, B.; Yao, Y.; Tan, J.; Gong, R.; Lu, J.; and Luo, Y. 2024a. Rectify representation bias in vision-language models for long-tailed recognition. Neural Networks, 172: 106134.
Li, T.; Sanjabi, M.; Beirami, A.; and Smith, V. 2020. Fair Resource Allocation in Federated Learning. ArXiv:1905.10497 [cs].
Li, Y.; Zhang, J.; Zhao, Y.; Chen, B.; and Yu, S. 2024b. Fairness-Aware Federated Learning Framework on Heterogeneous Data Distributions. In IEEE International Conference on Communications, ICC 2024, Denver, CO, USA, June 9-13, 2024, 728–733. IEEE.
Lim, W. Y. B.; Luong, N. C.; Hoang, D. T.; Jiao, Y.; Liang, Y.-C.; Yang, Q.; Niyato, D.; and Miao, C. 2020. Federated learning in mobile edge networks: A comprehensive survey. IEEE Communications Surveys & Tutorials, 22(3): 2031–2063.
Ling, X.; Fu, J.; Chen, Z.; Wang, K.; Li, H.; Cheng, T.; Xu, G.; and Li, Q. 2024. FedFDP: Federated Learning with Fairness and Differential Privacy. CoRR, abs/2402.16028. ArXiv: 2402.16028.
Liu, L.; Kong, Y.; Li, G.; and Han, M. 2023. FairShare: An Incentive-Based Fairness-Aware Data Sharing Framework for Federated Learning. In Yang, H.; Liu, H.; Zou, J.; Yin, Z.; Liu, L.; Yang, G.; Ouyang, X.; and Wang, Z., eds., Intelligent Robotics and Applications - 16th International Conference, ICIRA 2023, Hangzhou, China, July 5-7, 2023, Proceedings, Part II, volume 14268 of Lecture Notes in Computer Science, 115–126. Springer.
Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, 3730–3738.
Lones, M. A. 2021. How to avoid machine learning pitfalls: a guide for academic researchers. arXiv preprint arXiv:2108.02497.
Lu, J.; Liu, H.; Zhang, Z.; Wang, J.; Goudos, S. K.; and Wan, S. 2022. Toward Fairness-Aware Time-Sensitive Asynchronous Federated Learning for Critical Energy Infrastructure. IEEE Trans. Ind. Informatics, 18(5): 3462–3472.
Lu, Z.; Pan, H.; Dai, Y.; Si, X.; and Zhang, Y. 2024. Federated learning with non-IID data: A survey. IEEE Internet of Things Journal.
Lyu, L.; Xu, X.; and Wang, Q. 2020. Collaborative Fairness in Federated Learning. ArXiv:2008.12161 [cs].
Maheswari, G. U.; Jeslin, J. G.; Rajasuguna, M.; and S, A. 2024. Transforming Healthcare with Federated Learning-based Artificial Intelligence: Concepts, Classifications, and Challenges.
In 2024 3rd International Conference on Sentiment Analysis and Deep Learning (ICSADL), 209–216.
Makhija, D.; Han, X.; Ghosh, J.; and Kim, Y. 2024. Achieving Fairness Across Local and Global Models in Federated Learning. CoRR, abs/2406.17102. ArXiv: 2406.17102.
Mashhadi, A.; Tabaraei, A.; Zhan, Y.; and Parizi, R. M. 2022. An Auditing Framework for Analyzing Fairness of Spatial-Temporal Federated Learning Applications. In 2022 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, June 6-9, 2022, 699–707. IEEE.
McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, 1273–1282. PMLR.
Meerza, S. I. A.; Liu, L.; Zhang, J.; and Liu, J. 2024. GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning. CoRR, abs/2401.03562. ArXiv: 2401.03562.
Michieli, U.; and Ozay, M. 2021. Are All Users Treated Fairly in Federated Learning Systems? In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2318–2322. Nashville, TN, USA: IEEE. ISBN 978-1-6654-4899-4.
Mitchell, S.; Potash, E.; Barocas, S.; D'Amour, A.; and Lum, K. 2021. Algorithmic fairness: Choices, assumptions, and
definitions. Annual Review of Statistics and Its Application, 8(1): 141–163.
Mohri, M.; Sivek, G.; and Suresh, A. T. 2019. Agnostic federated learning. In International Conference on Machine Learning, 4615–4625. PMLR.
Molamohammadi, M.; Taïk, A.; Le Roux, N.; and Farnadi, G. 2023. Unraveling the Interconnected Axes of Heterogeneity in Machine Learning for Democratic and Inclusive Advancements. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO '23. New York, NY, USA: Association for Computing Machinery. ISBN 9798400703812.
Nishio, T.; and Yonetani, R. 2019. Client selection for federated learning with heterogeneous resources in mobile edge. In ICC 2019-2019 IEEE International Conference on Communications (ICC), 1–7. IEEE.
Ogier du Terrail, J.; Ayed, S.-S.; Cyffers, E.; Grimberg, F.; He, C.; Loeb, R.; Mangold, P.; Marchand, T.; Marfoq, O.; Mushtaq, E.; et al. 2022. Flamby: Datasets and benchmarks for cross-silo federated learning in realistic healthcare settings. Advances in Neural Information Processing Systems, 35: 5315–5334.
Papadaki, A.; Martínez, N.; Bertrán, M.; Sapiro, G.; and Rodrigues, M. R. D. 2022. Minimax Demographic Group Fairness in Federated Learning. In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21-24, 2022, 142–159. ACM.
Passi, S.; and Barocas, S. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 39–48. Atlanta, GA, USA: ACM. ISBN 978-1-4503-6125-5.
Pentyala, S.; Neophytou, N.; Nascimento, A. C. A.; Cock, M. D.; and Farnadi, G. 2022. PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning. CoRR, abs/2205.11584. ArXiv: 2205.11584.
Qi, P.; Chiaro, D.; Guzzo, A.; Ianni, M.; Fortino, G.; and Piccialli, F. 2024. Model aggregation techniques in federated learning: A comprehensive survey.
Future Generation Computer Systems, 150: 272–293.
Qin, C.; Zhu, H.; Xu, T.; Zhu, C.; Jiang, L.; Chen, E.; and Xiong, H. 2018. Enhancing person-job fit for talent recruitment: An ability-aware neural network approach. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 25–34.
Rafi, T. H.; Noor, F. A.; Hussain, T.; and Chae, D.-K. 2024. Fairness and privacy preserving in federated learning: A survey. Inf. Fusion, 105: 102198.
Raji, I. D.; Kumar, I. E.; Horowitz, A.; and Selbst, A. 2022. The Fallacy of AI Functionality. In 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972. Seoul, Republic of Korea: ACM. ISBN 978-1-4503-9352-2.
Ren, P.; Qi, K.; Li, J.; Yan, T.; and Dai, Q. 2024. CosPer: An adaptive personalized approach for enhancing fairness and robustness of federated learning. Inf. Sci., 675: 120760.
Roy, S.; Sharma, H.; and Salekin, A. 2024. Fairness Without Demographics in Human-Centered Federated Learning. CoRR, abs/2404.19725. ArXiv: 2404.19725.
Salazar, T.; Araújo, H.; Cano, A.; and Abreu, P. H. 2024. A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research. CoRR, abs/2410.03855. ArXiv: 2410.03855.
Saputra, Y. M.; Hoang, D. T.; Nguyen, D. N.; Dutkiewicz, E.; Mueck, M. D.; and Srikanteswara, S.
|
https://arxiv.org/abs/2505.21584v1
|
2019. Energy de- mand prediction with federated learning for electric vehicle networks. In 2019 IEEE global communications conference (GLOBECOM) , 1–6. IEEE. Sattler, F.; M ¨uller, K.-R.; and Samek, W. 2020. Clustered federated learning: Model-agnostic distributed multitask op- timization under privacy constraints. IEEE transactions on neural networks and learning systems , 32(8): 3710–3722. Saxena, N. A.; Huang, K.; DeFilippis, E.; Radanovic, G.; Parkes, D. C.; and Liu, Y . 2019. How do fairness definitions fare? Examining public attitudes towards algorithmic defi- nitions of fairness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society , 99–106. Selbst, A. D.; Boyd, D.; Friedler, S. A.; Venkatasubrama- nian, S.; and Vertesi, J. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency , 59–68. Selialia, K.; Chandio, Y .; and Anwar, F. M. 2022. Federated Learning Biases in Heterogeneous Edge-Devices: A Case- Study. In Gummeson, J.; Lee, S. I.; Gao, J.; and Xing, G., eds., Proceedings of the 20th ACM Conference on Embed- ded Networked Sensor Systems, SenSys 2022, Boston, Mas- sachusetts, November 6-9, 2022 , 980–986. ACM. Selialia, K.; Chandio, Y .; and Anwar, F. M. 2023. Mitigating Group Bias in Federated Learning for Heterogeneous De- vices. CoRR , abs/2309.07085. ArXiv: 2309.07085. Shelby, R.; Rismani, S.; Henne, K.; Moon, A.; Ros- tamzadeh, N.; Nicholas, P.; Yilla-Akbari, N.; Gallegos, J.; Smart, A.; Garcia, E.; et al. 2023. Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduc- tion. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society , 723–741. Shi, Y .; Yu, H.; and Leung, C. 2024. Towards Fairness- Aware Federated Learning. IEEE Trans. Neural Networks Learn. Syst. , 35(9): 11922–11938. Simson, J.; Fabris, A.; and Kern, C. 2024. Lazy data prac- tices harm fairness research. 
In The 2024 ACM Conference on Fairness, Accountability, and Transparency , 642–659. Suresh, H.; and Guttag, J. 2021. A Framework for Under- standing Sources of Harm throughout the Machine Learn- ing Life Cycle. In Equity and Access in Algorithms, Mech- anisms, and Optimization , 1–9. – NY USA: ACM. ISBN 978-1-4503-8553-4. Tak, A.; and Cherkaoui, S. 2021. Federated Edge Learning: Design Issues and Challenges. IEEE Network , 35(2): 252– 258. Tan, A. Z.; Yu, H.; Cui, L.; and Yang, Q. 2022. Towards per- sonalized federated learning. IEEE transactions on neural networks and learning systems , 34(12): 9587–9603. Thakkar, O.; Ramaswamy, S.; Mathews, R.; and Beaufays, F. 2020. Understanding unintended memorization in feder- ated learning. arXiv preprint arXiv:2006.07490 . Touat, O.; and Bouchenak, S. 2023. Towards Robust and Bias-free Federated Learning. In Proceedings of the 3rd Workshop on Machine Learning and Systems , 49–55. Rome Italy: ACM. ISBN 979-8-4007-0084-2. Ude, B.; Odeyomi, O. T.; Roy, K.; and Yuan, X. 2023. A Sur- vey on Bias Mitigation in Federated Learning. In IEEE Sym- posium Series on Computational Intelligence, SSCI 2023, Mexico City, Mexico, December 5-8, 2023 , 1170–1175. IEEE. Vucinich, S.; and Zhu, Q. 2023. The Current State and Chal- lenges of Fairness in Federated Learning. IEEE Access , 11: 80903–80914. Wang, C.; Huang, H.; Li,
|
https://arxiv.org/abs/2505.21584v1
|
R.; Liu, J.; Cai, T.; and Zheng, Z. 2024a. Libra: A Fairness-Guaranteed Framework for Semi-Asynchronous Federated Learning. In 44th IEEE In- ternational Conference on Distributed Computing Systems, ICDCS 2024, Jersey City, NJ, USA, July 23-26, 2024 , 797– 808. IEEE. Wang, L.; Zhu, T.; Zhou, W.; and Yu, P. S. 2024b. Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives. CoRR , abs/2406.10884. ArXiv: 2406.10884. Wang, Y .; Tang, X.; Lu, Y .; and Liu, N. 2023. Research on the Fairness of Cold-start Recommender System Based on Federated Learning Framework. In Proceedings of the 5th International Conference on Internet of Things, Automation and Artificial Intelligence, IoTAAI 2023, Nanchang, China, November 24-26, 2023 , 802–807. ACM. Xia, Y .; Ma, B.; Dou, Q.; and Xia, Y . 2024. Enhancing Federated Learning Performance Fairness via Collaboration Graph-Based Reinforcement Learning. In Linguraru, M. G.; Dou, Q.; Feragen, A.; Giannarou, S.; Glocker, B.; Lekadir, K.; and Schnabel, J. A., eds., Medical Image Computing and Computer Assisted Intervention - MICCAI 2024 - 27th Inter- national Conference, Marrakesh, Morocco, October 6-10, 2024, Proceedings, Part X , volume 15010 of Lecture Notes in Computer Science , 263–272. Springer. Xie, R.; Li, C.; Zhou, X.; and Dong, Z. 2024. Accelerat- ing Communication-Efficient Federated Multi-Task Learn- ing With Personalization and Fairness. IEEE Trans. Parallel Distributed Syst. , 35(11): 2239–2253. Xu, G.; Wu, Y .; Hu, J.; and Shi, Y . 2022. Achieving Fair- ness in Dermatological Disease Diagnosis through Auto- matic Weight Adjusting Federated Learning and Personal- ization. CoRR , abs/2208.11187. ArXiv: 2208.11187. Xu, H.; Nanda, P.; and Liang, J. 2024. Reciprocal Federated Learning Framework: Balancing incentives for model and data owners. Future Generation Computer Systems , 161: 146–161. Yan, Y .; Zhu, L.; Li, Y .; Xu, X.; Goh, R. S. M.; Liu, Y .; Khan, S. H.; and Feng, C.-M. 2024. 
A New Perspective to Boost Performance Fairness For Medical Federated Learn- ing. In Linguraru, M. G.; Dou, Q.; Feragen, A.; Giannarou, S.; Glocker, B.; Lekadir, K.; and Schnabel, J. A., eds., Medi- cal Image Computing and Computer Assisted Intervention - MICCAI 2024 - 27th International Conference, Marrakesh,Morocco, October 6-10, 2024, Proceedings, Part X , vol- ume 15010 of Lecture Notes in Computer Science , 13–23. Springer. Yang, K.; Qinami, K.; Fei-Fei, L.; Deng, J.; and Rus- sakovsky, O. 2020. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ima- genet hierarchy. In Proceedings of the 2020 conference on fairness, accountability, and transparency , 547–558. Yi, L.; Shi, X.; Wang, N.; Zhang, J.; Wang, G.; and Liu, X. 2024. FedPE: Adaptive Model Pruning-Expanding for Federated Learning on Mobile Devices. IEEE Transactions on Mobile Computing , 23(11): 10475–10493. Conference Name: IEEE Transactions on Mobile Computing. Yin, X.; Zhu, Y .; and Hu, J. 2021. A comprehensive sur- vey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Computing Surveys (CSUR) , 54(6): 1–36. Yu, H.; Liu, Z.; Liu, Y .; Chen, T.; Cong, M.; Weng, X.; Niyato, D.; and Yang, Q. 2020. A Fairness-aware Incen- tive Scheme for
|
https://arxiv.org/abs/2505.21584v1
|
Federated Learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , AIES ’20, 393–399. New York, NY , USA: Association for Com- puting Machinery. ISBN 978-1-4503-7110-0. Yue, X.; Nouiehed, M.; and Kontar, R. A. 2021. GIFAIR-FL: An Approach for Group and Individual Fairness in Feder- ated Learning. CoRR , abs/2108.02741. ArXiv: 2108.02741. Zhang, D. Y .; Kou, Z.; and Wang, D. 2020. FairFL: A Fair Federated Learning Approach to Reducing Demographic Bias in Privacy-Sensitive Classification Models. In Wu, X.; Jermaine, C.; Xiong, L.; Hu, X.; Kotevska, O.; Lu, S.; Xu, W.; Aluru, S.; Zhai, C.; Al-Masri, E.; Chen, Z.; and Saltz, J., eds., 2020 IEEE International Conference on Big Data (IEEE BigData 2020), Atlanta, GA, USA, December 10-13, 2020 , 1051–1060. IEEE. Zhu, H.; Zhou, Y .; Qian, H.; Shi, Y .; Chen, X.; and Yang, Y . 2022. Online client selection for asynchronous federated learning with fairness consideration. IEEE Transactions on Wireless Communications , 22(4): 2493–2506. ¨Oks¨uz, H. Y .; Molinari, F.; Sprekeler, H.; and Raisch, J. 2024. Boosting Fairness and Robustness in Over-the-Air Federated Learning. IEEE Control. Syst. Lett. , 8: 682–687.
|
https://arxiv.org/abs/2505.21584v1
|
CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning

Bin Qin* (qinbin21@mails.ucas.ac.cn), Qirui Ji* (jiqirui2022@iscas.ac.cn), Jiangmeng Li†* (jiangmeng2019@iscas.ac.cn), Yupeng Wang (yupeng@iscas.ac.cn), Xuesong Wu† (xuesong@iscas.ac.cn), Jianwen Cao† (jianwen@iscas.ac.cn), Fanjiang Xu (fanjiang@iscas.ac.cn)
Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China

Abstract
Self-supervised topological deep learning (TDL) represents a nascent but underexplored area with significant potential for modeling higher-order interactions in simplicial complexes and cellular complexes to derive representations of unlabeled graphs. Compared to simplicial complexes, cellular complexes exhibit greater expressive power. However, the advancement of self-supervised learning for cellular TDL is largely hindered by two core challenges: extrinsic structural constraints inherent to cellular complexes, and intrinsic semantic redundancy in cellular representations. The first challenge highlights that traditional graph augmentation techniques may compromise the integrity of higher-order cellular interactions, while the second underscores that topological redundancy in cellular complexes can diminish task-relevant information.
To address these issues, we introduce Cellular Complex Contrastive Learning with Adaptive Trimming (CellCLAT), a twofold framework designed to adhere to the combinatorial constraints of cellular complexes while mitigating informational redundancy. Specifically, we propose a parameter perturbation-based augmentation method that injects controlled noise into cellular interactions without altering the underlying cellular structures, thereby preserving cellular topology during contrastive learning. Additionally, a cellular trimming scheduler is employed to mask gradient contributions from task-irrelevant cells through a bi-level meta-learning approach, effectively removing redundant topological elements while maintaining critical higher-order semantics. We provide theoretical justification and empirical validation to demonstrate that CellCLAT achieves substantial improvements over existing self-supervised graph learning methods, marking a significant attempt in this domain.

*All authors contributed equally to this research. †Corresponding Author.

CCS Concepts: • Computing methodologies → Neural networks; Learning latent representations; • Information systems → Data mining.

Keywords: Topological deep learning; Cellular complexes; Self-supervised learning; Contrastive learning; Graph neural networks

arXiv:2505.21587v1 [cs.LG] 27 May 2025

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. KDD ’25, Toronto, ON, Canada. © 2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1454-2/25/08. https://doi.org/10.1145/3711896.3736876

ACM Reference Format: Bin Qin, Qirui Ji, Jiangmeng Li, Yupeng Wang, Xuesong Wu, Jianwen Cao, and Fanjiang Xu. 2025. CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning.
In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD ’25), August 3–7, 2025, Toronto, ON, Canada. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3711896.3736876

KDD Availability Link: The source code of this paper has been made publicly available at https://doi.org/10.5281/zenodo.15514849.

https://arxiv.org/abs/2505.21587v1

1 Introduction

Graph Neural Networks (GNNs) [46] model pairwise interactions in non-Euclidean graph structures, such as user relationship modeling in social networks [14], protein-protein interactions [44] in biology, and chemical or molecular property prediction [54]. GNNs achieve this by aggregating the k-hop neighborhood information, which follows the graph topology, to learn node representations. However, growing demands for higher-order interaction modeling have motivated emerging research [39] to design more expressive GNNs [36] capable of identifying fine-grained topological patterns, encoding substructures, and capturing long-range dependencies. For instance, protein complex identification [63] requires group-wise interaction analysis, and modeling group behavior in social networks [2] necessitates representations beyond conventional graph structures. These challenges reveal the insufficiency of graphs in expressing complex data relations with multifaceted interactions. Topological deep learning (TDL) has consequently emerged, shifting the focus from graphs to combinatorial topological spaces and learning representations of simplicial complexes or cellular complexes [12]. The message-passing neural networks defined on these spaces demonstrate expressive capabilities that are at least equivalent to those of the 3-Weisfeiler-Lehman (3-WL) test [6, 13]. In contrast to traditional node-wise message passing, TDL enables the aggregation of messages through multi-level relationships among simplices or cells, effectively capturing higher-order interactions. While hierarchical higher-order interactions are highly advantageous, high-dimensional topological objects such as simplicial or cellular complexes present significant challenges in terms of data annotation and task definition compared to standard graph structures.
These challenges arise from their inherently complex high-dimensional topological structures (e.g., defining cliques or cycles within a graph), their reliance on domain-specific knowledge, and the presence of topological features that do not directly correspond to conventional labels (e.g., persistent homology and topological invariants). As a result, there is an urgent need for developing self-supervised TDL. Pioneering studies [35] have utilized contrastive learning paradigms to learn representations from simplicial complexes. However, for cellular complexes, a more expressive topological structure, self-supervised TDL approaches remain largely underexplored. This gap primarily stems from the fact that existing graph-based self-supervised learning (SSL) techniques are not directly applicable, due to two fundamental challenges: extrinsic structural constraints inherent in cellular complexes, and intrinsic semantic redundancy in cellular representations.

The predominant augmentation technique in graph SSL involves directly applying node-dropping or edge-perturbation to graphs [62]. However, such "loose" perturbations are insufficient for generating augmented views of cellular complexes while preserving their global topological properties. Here, structural constraints [52] refer to the continuity of attaching maps and the closure-finiteness of the gluing process for constructing cellular complexes [25]. For instance, if a 2-cell (modeled by a polygon) has a boundary that includes a deleted edge \(e\), the attaching map of that 2-cell becomes invalid. Similarly, if a node \(v\) is removed, all 1-cells (edges) incident to \(v\) must also be removed; otherwise, closure-finiteness is violated. Even when such structural constraints are respected, producing augmented views that preserve semantically rich higher-order topological structures remains a significant challenge.
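To make these structural constraints concrete, here is a minimal sketch of a closure-finiteness-preserving deletion; the dict-of-boundaries encoding and the cell names are illustrative assumptions, not the paper's implementation:

```python
def delete_cell(boundary, cell):
    """Remove `cell` and, transitively, every higher-dimensional cell
    whose boundary touches a removed cell, so the remaining complex
    stays closure-finite with valid attaching maps.

    `boundary` maps each cell id to the frozenset of cells forming
    its boundary (0-cells map to the empty frozenset).
    """
    removed = {cell}
    changed = True
    while changed:  # propagate upward until a fixed point is reached
        changed = False
        for c, bnd in boundary.items():
            if c not in removed and bnd & removed:
                removed.add(c)
                changed = True
    return {c: b for c, b in boundary.items() if c not in removed}

# A square 2-cell "f" glued onto the cycle v0-v1-v2-v3 (hypothetical ids).
cw = {
    "v0": frozenset(), "v1": frozenset(), "v2": frozenset(), "v3": frozenset(),
    "e01": frozenset({"v0", "v1"}), "e12": frozenset({"v1", "v2"}),
    "e23": frozenset({"v2", "v3"}), "e30": frozenset({"v3", "v0"}),
    "f": frozenset({"e01", "e12", "e23", "e30"}),
}
# Deleting edge e01 must also delete the 2-cell f whose attaching map
# runs through it; deleting node v0 drags e01, e30, and f with it.
assert "f" not in delete_cell(cw, "e01")
assert {"e01", "e30", "f"}.isdisjoint(delete_cell(cw, "v0"))
```

Any structure-level augmentation would have to cascade deletions at this granularity, which is one motivation for perturbing parameters instead of the complex itself.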
Another counterintuitive yet focal phenomenon is that not all higher-order cellular interactions carry task-relevant information. Some interactions may
introduce semantic redundancy, which we term Cellular Topological Redundancy. As depicted in Figure 1, we conduct exploratory experiments to validate this phenomenon by introducing a naive cellular complex contrastive learning paradigm¹. Specifically, we observe that certain trimmed representations achieve better performance than the original baseline representation, providing evidence that trimming redundant cells can indeed improve performance.

To cope with these challenges, we propose Cellular Complex Contrastive Learning with Adaptive Trimming (CellCLAT), a novel yet effective two-pronged framework designed to simultaneously preserve cellular topology and eliminate semantic redundancy within cellular complexes. The core innovation of our approach lies in two key components: 1) We develop a parameter perturbation-based augmentation method that injects controlled noise into the cellular interaction process, rather than directly altering the cellular structures, to maximize the preservation of cellular topology during contrastive learning. This component avoids violating the extrinsic structural constraints (e.g., attaching-map continuity and closure-finiteness properties) inherent to cellular complexes when generating contrastive views. 2) Based on the comprehensive cellular representations obtained, we propose a cellular trimming scheduler that adaptively trims redundant 2-cells in a self-paced manner. Concretely, the scheduler employs a bi-level meta-learning optimization objective that integrates the trimming process with contrastive learning tasks, enabling the framework to dynamically block gradient contributions from topological elements containing confounding or task-irrelevant information. As a result, superfluous higher-order structures are discarded, enhancing the focus on task-relevant topological patterns.
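As a toy illustration of the gradient-blocking idea (not the paper's bi-level meta-learning procedure; the weighting scheme and names here are assumptions), a scheduler can zero out a 2-cell's contribution to the pooled readout so that no gradient flows through it:

```python
import numpy as np

def trim_readout(cell_embeddings, keep_weights):
    """Sum-pool 2-cell embeddings, scaling each cell's contribution by a
    scheduler weight in [0, 1]; a weight of 0 removes that cell from the
    readout, which also blocks its gradient path in an autodiff setting."""
    w = np.asarray(keep_weights, dtype=float)[:, None]
    return (w * np.asarray(cell_embeddings, dtype=float)).sum(axis=0)

emb = [[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]]
# Suppose the scheduler judges the third 2-cell redundant (a confounder):
pooled = trim_readout(emb, [1.0, 1.0, 0.0])  # -> array([1., 2.])
```

In CellCLAT the weights are not fixed by hand but learned, coupling the trimming decision to the contrastive objective.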
¹We adopt a parameter perturbation-based augmentation method, which preserves the structural integrity of the input cellular complex while generating augmented views for contrastive learning, to obtain representations of cells at all orders within the cellular complex.

Figure 1: Experimental scatter diagrams obtained by randomly trimming 2-cell cellular complex contrastive learning representations on the PROTEINS and IMDB-B datasets. The baseline and the red dashed line indicate the classification accuracy achieved using the complete 2-cell representations. The x-axis values represent a fixed proportion of 2-cell removal, with each proportion interval containing 50 points. Each individual point corresponds to an independent classification result achieved by randomly trimming the original 2-cell representations with a specific trimming ratio.

Our contributions are summarized as follows: 1) To the best of our knowledge, we present the first self-supervised method for cellular TDL, effectively adhering to the extrinsic structural constraints inherent to cellular complexes. 2) We conduct an in-depth investigation into the semantic redundancy of higher-order cellular topology and propose an adaptive cellular trimming scheduler to explore the higher-order interactions with task-relevant semantics. 3) We leverage the cellular Weisfeiler-Lehman test to demonstrate that CellCLAT's expressiveness surpasses GNN-based SSL methods in distinguishing non-isomorphic graphs, and we further provide solid theoretical
justification from a causal perspective, showing that cellular topological redundancy acts as a confounder. 4) We conduct extensive experiments across various benchmarks to empirically validate the superior performance of CellCLAT.

2 Related Work

2.1 Topological Deep Learning

TDL extends the paradigm of Message Passing Neural Networks (MPNNs) [16] by generalizing message passing beyond graphs to relational data embedded in combinatorial topological spaces. Such relational data [39] often exhibit higher-order structures that are inadequately captured by traditional graph-based methods, leading to the development of two primary architectures: Message Passing Simplicial Networks (MPSNs) [7, 11, 45, 53] and Message Passing Cellular Networks (MPCNs) [6, 19, 22]. MPSNs extend the Weisfeiler-Lehman (WL) isomorphism test [51] to simplicial complexes (SCs) through a hierarchical coloring procedure [7], proving strictly more expressive than graph-based MPNNs. MPCNs further advance this by operating on cellular complexes (CWs), topological objects that flexibly generalize both SCs and graphs, thereby overcoming the strict combinatorial constraints (e.g., only cliques in a graph can be regarded as simplices) inherent to SCs [6]. Moreover, the recent development of combinatorial complexes (CCs) provides a more general framework that unifies SCs, CWs, and hypergraphs, enabling more flexible message-passing architectures [4, 23, 40]. The landscape of TDL architectures has diversified through convolutional and attentional mechanisms tailored to topological domains. Convolutional designs for simplicial [57–60] and cellular complexes have proliferated, demonstrating how localized filters or kernels can be defined over higher-order simplices and cells to enhance expressivity.
Parallel efforts have explored attentional mechanisms, adapting self-attention layers to simplicial complexes [5, 17, 20, 29] and more general cellular [18] or combinatorial complexes [23]. These attention-based networks often re-weight interactions between neighboring simplices or cells in orientation-equivariant ways [20], while also considering both upper and lower neighborhoods for feature aggregation [17]. Beyond straightforward higher-order message passing, alternative approaches incorporate pre-computed topological information (e.g., persistent homology [3, 49] or other invariants [8, 10]) into graph-level or cell-level features, thereby enriching the learned representations [9, 26]. Comprehensive surveys [13, 39, 41] underscore the potential of TDL to address complex data modalities where higher-order and geometric structures play an important role. However, the literature on SSL in TDL remains scarce, with only limited pioneering works [35] proposed for simplicial complexes to date.

2.2 Graph Contrastive Learning

Numerous methods have been explored to advance graph-level contrastive learning [28, 30, 31, 48, 56, 61, 62], with a primary focus on designing effective data augmentation techniques and contrastive objectives to enhance graph representation learning. GraphCL [62] introduces four general types of augmentations to enforce consistency between different transformed graph views. ADGCL [48] employs adversarial graph augmentation strategies to reduce the risk of capturing redundant information. JOAO [61] addresses the challenge of selecting appropriate augmentations by proposing a min-max two-level optimization framework based on GraphCL. RGCL [31] incorporates invariant rationale discovery to generate robust graph augmentations, ensuring that the structural rationale is preserved in augmented views. SimGRACE [56] eliminates the need for extensive augmentation search by perturbing the encoder with random
Gaussian noise, offering a more efficient approach to generating augmented views. HTML [30] employs knowledge distillation by incorporating graph-level and subgraph-level topological isomorphism tasks into the objective function, thereby enhancing the performance of downstream tasks. Our method introduces cellular complexes into graph contrastive learning, while investigating the cellular topological redundancy issue.

Figure 2: The gluing process of constructing a cellular complex from a graph is achieved through a sequence of continuous attaching maps.

3 Methodology

In this section, we introduce a feasible self-supervised solution adaptable to current topological deep learning, as illustrated in Figure 3. Specifically, in Section 3.1, we summarize the current learning paradigm of cellular complex networks, which extends the strictly restricted combinatorial structure of simplicial complexes. It not only inherits the structural properties of simplicial complexes but also introduces additional flexibility, enabling the handling of more complex and flexible topological structures.² We develop a self-supervised TDL framework based on cellular complex networks, which exhibits stronger topological expressiveness than GNN-based models. In Section 3.2, we implement a self-supervised framework that maximizes the retention of the cellular topology of the original cellular complex. By introducing perturbations (such as Gaussian noise) at the encoder level, rather than directly augmenting the input data, we avoid damaging the local cellular structures of the cellular complex. This approach allows us to generate the different views needed for contrastive learning while preserving the integrity of higher-order cellular topology. Finally, in Section 3.3, we explore the semantically relevant components within cellular topology.
A cellular trimming scheduler trims the complete cellular representations obtained in Section 3.2 and employs a bi-level meta-learning optimization process to suppress the gradient contributions of task-irrelevant cells, thereby achieving a dynamic balance between the need for higher-order interactions and task-relevant semantics.

3.1 Learning Paradigm of Cellular Complex Neural Networks

Topological deep learning focuses on topological spaces formed by structured data in non-Euclidean spaces, which can leverage the rich geometric and topological properties within the data. Here, we primarily focus on cellular complexes. A cellular complex is a topological space built from a collection of cells of various dimensions, composed of discrete sets of points. An \(n\)-dimensional cell, denoted an \(n\)-cell, is a space homeomorphic to an open \(n\)-dimensional ball \(B_n(x,r)=\{x\in\mathbb{R}^n:\|x\|<r\}\). Formally, the construction of a cellular complex is achieved through a recursive gluing process:

(1) 0-Skeleton: Start with a discrete set of points, \(X^{(0)}\), which serves as the 0-skeleton of the complex.
(2) Attaching Higher-Dimensional Cells: To construct the \(n\)-skeleton \(X^{(n)}\), attach \(n\)-cells \(\sigma_\alpha^{(n)}\) to \(X^{(n-1)}\) via continuous maps called attaching maps: \(\phi_\alpha^n:\partial\sigma_\alpha^{(n)}\to X^{(n-1)}\). The new space \(X^{(n)}\) is defined as the union \(X^{(n)}=X^{(n-1)}\cup\bigcup_\alpha\sigma_\alpha^{(n)}\), where each \(\sigma_\alpha^{(n)}\) is glued to \(X^{(n-1)}\) along its boundary using \(\phi_\alpha^n\).

²In the two-dimensional case, simplicial complexes are limited to triangular simplices, whereas two-dimensional cellular complexes can represent arbitrary induced cycles.

Taking the graph \(G=(V,E)\) as an example, we provide an intuitive explanation of how the cellular complex \(X\) extends the topological structure of
the graph, as shown in Figure 2. Starting from the vertex set \(V\), the 0-cells in \(X\) correspond to these vertices, i.e., \(V=X^{(0)}\). Next, by gluing the endpoints of line segments to these vertices, we obtain the 1-cells \(\sigma_\alpha^{(1)}\) in \(X\), which correspond to the edges \(E\). Therefore, the graph \(G\) can be seen as the one-dimensional skeleton \(X^{(1)}\) of the cellular complex \(X\). Finally, take a two-dimensional closed disk and attach its boundary (a circle) to any (induced) cycle in the graph \(G\). Hence, polygons in the graph can be represented as 2-cells, with their boundaries being the edges that form the cycle. The 2-dimensional cellular complex \(X=X^{(2)}\) characterizes higher-order relational structures while restricting modifications to the original graph structure. Unlike graphs, which only have a single adjacency relationship, higher-order interactions in cellular complexes are manifested in the neighborhood structure defined between cells of different dimensions [24, 41]. Let \(\sigma\prec\tau\) denote that the \((r-1)\)-cell \(\sigma\) is the boundary of the \(r\)-cell \(\tau\); for example, nodes are the boundaries of edges, and edges are the boundaries of polygons. An \(r\)-cell \(\tau\) has four types of neighborhood structures [6, 7]: Boundary Adjacent Neighborhood \(\mathcal{B}(\tau)=\{\sigma\mid\sigma\prec\tau\}\), Co-Boundary Adjacent Neighborhood \(\mathcal{C}(\tau)=\{\sigma\mid\tau\prec\sigma\}\), Lower Adjacent Neighborhood \(\mathcal{N}_\downarrow(\tau)=\{\sigma\mid\exists\delta\ \mathrm{s.t.}\ \delta\prec\sigma\wedge\delta\prec\tau\}\), and Upper Adjacent Neighborhood \(\mathcal{N}_\uparrow(\tau)=\{\sigma\mid\exists\delta\ \mathrm{s.t.}\ \sigma\prec\delta\wedge\tau\prec\delta\}\). A Cellular Complex Neural Network (CCNN) built upon the message-passing paradigm updates features associated with cells in a cellular complex through hierarchical, structured interactions across cells of varying dimensions. Given an \(r\)-cell \(\tau\) and an \(r'\)-cell \(\sigma\), let the embeddings at the \(l\)-th layer be denoted \(h_r^{(l)}(\tau)\) and \(h_{r'}^{(l)}(\sigma)\).
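Under the boundary relation \(\prec\), the four neighborhoods can be read directly off a boundary map. The following is an illustrative sketch (the dict encoding and cell names are assumptions, not the paper's code):

```python
def neighborhoods(boundary, tau):
    """Return the boundary, co-boundary, lower-adjacent and
    upper-adjacent neighborhoods of cell `tau`, where `boundary[c]`
    is the set of cells sigma with sigma ≺ c."""
    bnd = set(boundary[tau])                                 # sigma ≺ tau
    cob = {c for c, b in boundary.items() if tau in b}       # tau ≺ sigma
    lower = {c for c in boundary
             if c != tau and boundary[c] & boundary[tau]}    # shared boundary cell
    upper = {c for c in boundary
             if c != tau and any(tau in boundary[d] and c in boundary[d]
                                 for d in boundary)}         # shared co-boundary cell
    return bnd, cob, lower, upper

# Triangle v0-v1-v2 with a filled 2-cell "f" attached to its three edges.
cw = {
    "v0": set(), "v1": set(), "v2": set(),
    "e01": {"v0", "v1"}, "e12": {"v1", "v2"}, "e20": {"v2", "v0"},
    "f": {"e01", "e12", "e20"},
}
B, C, lo, up = neighborhoods(cw, "e01")
# B = {v0, v1}; C = {f}; e12 and e20 are both lower-adjacent
# (shared vertices) and upper-adjacent (shared 2-cell f) to e01.
```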
First, the message received by \(\tau\) is computed as \(m(\sigma\to\tau)=\psi_{\mathcal{N}_i}\big(h_r^{(l)}(\tau),h_{r'}^{(l)}(\sigma),\Theta_i^{(l)}\big)\), where \(\mathcal{N}_i\in\mathcal{N}=\{\mathcal{B},\mathcal{C},\mathcal{N}_\downarrow,\mathcal{N}_\uparrow\}\) indexes messages from the four types of neighbors, and \(\psi_{\mathcal{N}_i}\) denotes differentiable functions, such as MLPs, with \(\Theta_i^{(l)}\) being the learnable parameters. Then, the messages undergo two types of aggregation: 1) intra-neighborhood aggregation \(M_{\mathcal{N}_i}(\tau)=\bigoplus_{\sigma\in\mathcal{N}_i(\tau)}m(\sigma\to\tau)\), where \(\oplus\) is a permutation-invariant operation (e.g., summation or mean); 2) inter-neighborhood aggregation \(M(\tau)=\bigotimes_{\mathcal{N}_i\in\mathcal{N}}M_{\mathcal{N}_i}(\tau)\), where \(\otimes\) is a combining function (e.g., summation or concatenation) that combines information from neighborhoods. The update of the embedding at the \((l+1)\)-th layer can be formalized as:

\[
h_r^{(l+1)}(\tau)=\phi\Big(h_r^{(l)}(\tau),\bigotimes_{\mathcal{N}_i\in\mathcal{N}}\bigoplus_{\sigma\in\mathcal{N}_i(\tau)}\psi_{\mathcal{N}_i}\big(h_r^{(l)}(\tau),h_{r'}^{(l)}(\sigma),\Theta_i^{(l)}\big)\Big),\tag{1}
\]

where \(\phi\) is a learnable update function.

3.2 Contrastive Learning with Maximized Cellular Topology Preservation

The complex topological structure of cellular complexes renders commonly used graph augmentation techniques ineffective, as the unrestricted removal of nodes or edges may disrupt the continuity of the attaching map [25, 52], making it impossible to reasonably define high-dimensional cells.³ TopoSRL [35] is the first attempt to implement data augmentation for simplicial complexes. However, the random removal of closed simplices similarly violates the subset-closed property of higher-order simplices. Furthermore, the high-order structures in cellular complexes (e.g., polygons represented by 2-cells) often carry important semantic information, such as ring structures in molecular graphs or community structures in social networks.
Random augmentation strategies, such as the addition or removal of polygons, may alter the semantics of these higher-order structures. Adaptive augmentation that preserves semantic information often requires manually selecting
augmentation strategies for each dataset or performing a tedious search for suitable augmentation combinations, sometimes relying on expensive domain knowledge. These limitations become even more problematic when applied to cellular complex data structures.

To address this, inspired by SimGRACE [56], we propose generating augmented embeddings for cellular complexes by introducing noise into the network parameters of the message-passing process. This approach avoids disrupting local topological structures, preserves high-order semantic information, and maximally extracts the topological information from the input views. Specifically, we implement Equation (1) as follows:

\[
h_r^{(l+1)}(\tau)=\mathrm{MLP}_{U,r}^{(l)}\Big(\sum_{\mathcal{N}_i\in\mathcal{N}}\mathrm{MLP}_{\mathcal{N}_i,r}^{(l)}\big((1+\epsilon_{\mathcal{N}_i})\,h_r^{(l)}(\tau)+\textstyle\sum_{\sigma\in\mathcal{N}_i(\tau)}h_{r'}^{(l)}(\sigma)\big)\Big).\tag{2}
\]

The final embedding of the cellular complex \(X\) is defined as \(\mathbf{H}_X=\sum_{r=0}^{2}\mathrm{MLP}_{X,r}\big(\sum_\tau h_r^{(L)}(\tau)\big)\). Let \(\mathrm{MLP}_{U,r}^{(l)}\), \(\mathrm{MLP}_{\mathcal{N}_i,r}^{(l)}\), and \(\mathrm{MLP}_{X,r}\) have parameters \(\Theta_{k,r}^{(l)}\) for \(k=U,\mathcal{N}_i,X\), respectively. We denote the corresponding encoder by \(f_\Theta(\cdot)\). Gaussian noise is added to all parameters to obtain perturbed network parameters \(\tilde{\Theta}_{k,r}^{(l)}=\Theta_{k,r}^{(l)}+\eta\cdot\varepsilon_{k,r}^{(l)}\), where \(\varepsilon_{k,r}^{(l)}\sim\mathcal{N}\big(0,(\mathrm{std}(\Theta_{k,r}^{(l)}))^2\big)\). Here, \(\eta\) controls the magnitude of the noise. Consequently, \(\mathbf{H}_X\) and the augmented embedding \(\tilde{\mathbf{H}}_X\) obtained from the perturbed network parameters constitute a positive pair. Suppose a batch contains \(N\) cellular complexes \(X_1,X_2,\dots,X_N\), yielding a total of \(2N\) embeddings in the latent space. Each cellular complex \(X_i\) and its augmented embedding form a positive sample pair \((\mathbf{H}_{X_i},\tilde{\mathbf{H}}_{X_i})\), while augmented embeddings of other cellular complexes \(X_j\) form negative sample pairs \((\mathbf{H}_{X_i},\tilde{\mathbf{H}}_{X_j})\). To better align positive pairs and uniformly distribute negative pairs, as explored in [50], we project the embedding \(\mathbf{H}_X\) onto a hypersphere via a nonlinear transformation \(g_\Phi(\cdot)\) to obtain \(\mathbf{Z}_X\).
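The perturbation scheme \(\tilde{\Theta}=\Theta+\eta\cdot\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,\mathrm{std}(\Theta)^2)\) is easy to sketch; here a dict of NumPy arrays stands in for the encoder weights (an illustrative assumption, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_parameters(params, eta=0.1):
    """Return a noisy copy of the encoder parameters: each tensor
    receives zero-mean Gaussian noise whose std matches that tensor's
    own std, scaled by eta, as in the SimGRACE-style scheme above."""
    return {
        name: theta + eta * rng.normal(0.0, theta.std(), size=theta.shape)
        for name, theta in params.items()
    }

params = {"W_U": rng.normal(size=(4, 4)), "W_N": rng.normal(size=(4, 4))}
tilde = perturb_parameters(params, eta=0.1)
# Encoding the same cellular complex once with `params` and once with
# `tilde` yields the positive pair (H_X, H~_X) for the contrastive loss.
```

Scaling the noise by each tensor's own standard deviation keeps the perturbation proportional to the weight magnitudes, so a single \(\eta\) works across layers of different scales.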
We use a normalized temperature-scaled cross-entropy loss (NT-Xent) [55] with a temperature parameter $\rho$ to maximize mutual information between positive pairs, resulting in the cellular complex contrastive learning (CellCL) loss:

$$\mathcal{L}_{\mathrm{CellCL}} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp\big(s(\mathbf{Z}_{X_i}, \tilde{\mathbf{Z}}_{X_i})/\rho\big)}{\sum_{j=1}^{N}\mathbb{1}[j \neq i]\,\exp\big(s(\mathbf{Z}_{X_i}, \tilde{\mathbf{Z}}_{X_j})/\rho\big)}, \quad (3)$$

where $s(\mathbf{a}, \mathbf{b})$ denotes the cosine similarity between $\mathbf{a}$ and $\mathbf{b}$, defined as $s(\mathbf{a}, \mathbf{b}) = \mathbf{a}^\top\mathbf{b} / (\|\mathbf{a}\|\,\|\mathbf{b}\|)$; $\mathbb{1}[j \neq i]$ is an indicator function that excludes other samples outside the positive pair, and $\rho$ is the temperature parameter that controls the sensitivity of contrastive learning.

Footnote 3: For instance, in the case of a 2-dimensional quadrilateral cell, if one of its edges is removed, the quadrilateral is no longer well-defined. The boundary of the quadrilateral (a topological circle) cannot be mapped to a disconnected path in the 1-skeleton.

3.3 Adaptive Trimming of Cellular Topological Redundancy
Through the framework introduced in Section 3.2, we have obtained a complete set of high-order information while preserving the original graph skeleton. However, not all of this high-order information is necessarily semantically meaningful; in fact, some of it may even degrade the performance of downstream tasks. We refer to this key observation as Cellular Topological Redundancy. As illustrated in the motivating Figure 1, when a fixed proportion of 2-cells is removed (e.g., at 10% intervals in the figure), certain data points exhibit improved performance compared to the baseline with the complete set of 2-cells. This evidence suggests that some high-order topological information can actually degrade task-relevant information. To address the issue of cellular topological redundancy, a Cellular Trimming Scheduler (CellTrim) adaptively removes 2-cells from the original cellular distribution $P_{\mathcal{C}}(X)$, constructing a refined distribution in which high-order structures are task-relevant and semantically meaningful. This trimming scheduler is optimized by
coupling the cellular sparsification loss with contrastive learning task gradients through bi-level meta-learning.

Cellular Trimming Scheduler $\Psi(\tau_\alpha)$. For a given cellular complex $X$, the original distribution of 2-cells is denoted as $P_{\mathcal{C}}(X) \in \{0,1\}^{N_2}$, which includes the entire set of 2-cells $\mathcal{C}^{(2)} = \{\tau^{(2)}_\alpha\}_{\alpha=1}^{N_2}$, where $N_r$ represents the number of $r$-cells in $X$. Suppose that after $L$ layers of message passing, the embedding of a 2-cell $\tau^{(2)}_\alpha$ is given by $h^{(L)}_2(\tau_\alpha) \in \mathbb{R}^d$. We wish to decide whether to retain or trim the 2-cell by modeling a cellular trimming scheduler $\Psi$ via a categorical distribution. In particular, we assume that the decision variable, represented as a two-dimensional vector $\mathbf{y}_\alpha = (y_{\alpha,0}, y_{\alpha,1})$, is sampled from a categorical distribution $\mathrm{Cat}(\lambda_\alpha)$, where the two entries correspond to "trim" (index 0) and "retain" (index 1), respectively. The probability vector $\lambda_\alpha \in \mathbb{R}^2$ is conditioned on the embedding $h^{(L)}_2(\tau_\alpha)$ as follows:

$$\lambda_\alpha = \mathrm{Softmax}\big(W_\Psi \cdot h^{(L)}_2(\tau_\alpha) + b\big), \quad (4)$$

with $W_\Psi \in \mathbb{R}^{2\times d}$ being a trainable weight matrix. Direct sampling from $\mathrm{Cat}(\lambda_\alpha)$ is non-differentiable; hence, to enable backpropagation, we adopt the Gumbel-Softmax trick [27, 34]. Concretely, for each category $i \in \{0,1\}$ we compute

$$y_{\alpha,i} = \frac{\exp\big((\log\lambda_{\alpha,i} + g_i)/\zeta\big)}{\sum_{j=0}^{1}\exp\big((\log\lambda_{\alpha,j} + g_j)/\zeta\big)}, \quad (5)$$

where $g_i \sim \mathrm{Gumbel}(0,1)$ is noise sampled from the Gumbel distribution and $\zeta > 0$ is a temperature parameter controlling the smoothness of the approximation. As $\zeta \to 0$, the vector $\mathbf{y}_\alpha = (y_{\alpha,0}, y_{\alpha,1})$

KDD '25, August 3–7, 2025, Toronto, ON, Canada. Bin Qin et al.
[Figure 3 (diagram): the gluing process lifts 0-, 1-, and 2-cell complexes; the encoder $f(\cdot\,;\Theta)$ and its perturbed copy $f(\cdot\,;\tilde{\Theta})$ produce $\mathbf{H}'_X$ and $\tilde{\mathbf{H}}'_X$, which $g_\Phi$ projects to $\mathbf{Z}'_X$ and $\tilde{\mathbf{Z}}'_X$ for $\mathcal{L}_{\mathrm{CellCL}}$; stage 1 fixes $\Upsilon$ and updates $\Theta$ and $\Phi$; stage 2 fixes $\Theta$ and $\Phi$ and updates $\Upsilon$ via a one-gradient-step meta-learning approach.]
Figure 3: The framework of CellCLAT.
The blue dashed lines indicate the standard contrastive learning phase, where the encoder $f(\cdot\,;\Theta)$ and the projection head $g(\cdot\,;\Phi)$ are updated while keeping the Cellular Trimming Scheduler $\Psi$ fixed. The red dashed lines represent the update process of $\Psi$ through the bi-level optimization process.

approaches a one-hot vector, so we can directly use $y_{\alpha,1}$ as the indicator for retaining the 2-cell. Thus, we obtain the cellular trimming scheduler as $\Psi(\tau_\alpha) = y_{\alpha,1}$, which indicates whether the embedding $h^{(L)}_2(\tau_\alpha)$ contributes to the final embedding of the cellular complex. The overall embedding of the cellular complex is computed as

$$\mathbf{H}'_X = \sum_{r=0}^{1}\mathrm{MLP}_{X,r}\Big(\sum_{\tau} h^{(L)}_r(\tau)\Big) + \mathrm{MLP}_{X,2}\Big(\sum_{\alpha=1}^{N_2}\Psi(\tau_\alpha)\cdot h^{(L)}_2(\tau_\alpha)\Big). \quad (6)$$

Bi-level Meta-learning Optimization Process. Recall that the encoder of the cellular complex network is defined as $\mathbf{H}_X = f(X;\Theta)$ and the parameter-perturbation-based augmented embedding is represented as $\tilde{\mathbf{H}}_X = f(X;\tilde{\Theta})$. These embeddings are processed through a shared projection head, yielding $\mathbf{Z}_X = g(\mathbf{H}_X;\Phi)$ and $\tilde{\mathbf{Z}}_X = g(\tilde{\mathbf{H}}_X;\Phi)$. Let the cellular trimming scheduler be denoted as $\Psi(\tau_\alpha;\Upsilon)$. As formulated in Equation (6), it produces the refined embeddings $\mathbf{H}'_X$ and $\tilde{\mathbf{H}}'_X$, which are then projected to obtain $\mathbf{Z}'_X$ and $\tilde{\mathbf{Z}}'_X$ and used for training with the contrastive loss in Equation (3). Our objective is to jointly learn the parameters $\Upsilon$ of the Cellular Trimming Scheduler along with the contrastive network parameters $\{\Theta, \Phi\}$.

The overall training procedure consists of two stages. In the first, conventional training stage, the cellular trimming scheduler $\Psi$ remains fixed while the Cellular Complex Contrastive Learning loss $\mathcal{L}_{\mathrm{CellCL}}$, computed over the trimmed cellular distribution, is minimized in the standard manner. In the second stage, a meta-learning-based approach is employed, wherein $\Psi$ is updated via a bi-level optimization process.
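The trim-or-retain decision of Eqs. (4)-(6) can be sketched in NumPy as follows: per-2-cell softmax logits, Gumbel noise, and a temperature-controlled relaxation whose "retain" entry weights the 2-cell's contribution. The weight matrix, dimensions, and temperature below are illustrative placeholders, not the trained scheduler.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_retain(h2, W, b, zeta=0.5):
    """Differentiable retain probability Psi(tau_alpha) = y_{alpha,1} per 2-cell.
    Index 0 = trim, index 1 = retain (cf. Eqs. 4-5)."""
    logits = h2 @ W.T + b                                    # shape (N2, 2)
    lam = np.exp(logits - logits.max(axis=1, keepdims=True))
    lam /= lam.sum(axis=1, keepdims=True)                    # softmax -> lambda_alpha
    # Gumbel(0, 1) noise via inverse transform of uniform samples.
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=lam.shape)))
    y = np.exp((np.log(lam) + g) / zeta)
    y /= y.sum(axis=1, keepdims=True)
    return y[:, 1]

d, n2 = 16, 10
h2 = rng.normal(size=(n2, d))             # embeddings h^L_2(tau_alpha) after L layers
W, b = 0.1 * rng.normal(size=(2, d)), np.zeros(2)
psi = gumbel_softmax_retain(h2, W, b)
# Soft-weighted 2-cell contribution to the complex embedding (cf. Eq. 6).
trimmed_sum = (psi[:, None] * h2).sum(axis=0)
```

Lowering `zeta` pushes `psi` toward hard 0/1 decisions while keeping the sampling step differentiable, which is the point of the Gumbel-Softmax relaxation.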
The goal is to guide $\Psi$ towards suppressing the gradient contributions of higher-order 2-cells that carry task-irrelevant information in the contrastive learning task loss. To achieve this, the cellular sparsification loss is computed with respect to the performance of $f(\cdot\,;\Theta)$ and $g(\cdot\,;\Phi)$, which is measured using the gradients of $f(\cdot\,;\Theta)$ and $g(\cdot\,;\Phi)$ during the backpropagation of the contrastive loss, formulated as follows:

$$\min_{\Upsilon}\ \mathcal{L}_{\mathrm{CellCL}}\big(\mathbf{Z}'_X, \tilde{\mathbf{Z}}'_X;\ \Theta^\star(\Upsilon), \Phi^\star(\Upsilon)\big), \quad \text{where}\ \Theta^\star(\Upsilon), \Phi^\star(\Upsilon) = \arg\min_{\Theta,\Phi}\ \mathcal{L}_{\mathrm{CellCL}}\big(\mathbf{Z}'_X, \tilde{\mathbf{Z}}'_X;\ \Theta, \Phi, \Upsilon\big). \quad (7)$$

At iteration $k$, we adopt a second-derivative technique [32, 33] to approximate $\Theta^\star(\Upsilon) \approx \hat{\Theta}^{(k+1)}(\Upsilon^{(k)})$ and $\Phi^\star(\Upsilon) \approx \hat{\Phi}^{(k+1)}(\Upsilon^{(k)})$ by performing a single gradient step from the current contrastive network parameters $\Theta^{(k)}$ and $\Phi^{(k)}$, as follows:

$$\hat{\Theta}^{(k+1)}(\Upsilon^{(k)}) = \Theta^{(k)} - \alpha\,\nabla_{\Theta^{(k)}}\mathcal{L}\big(\mathbf{Z}'_X, \tilde{\mathbf{Z}}'_X;\ \Theta^{(k)}, \Phi^{(k)}, \Upsilon^{(k)}\big), \qquad \hat{\Phi}^{(k+1)}(\Upsilon^{(k)}) = \Phi^{(k)} - \beta\,\nabla_{\Phi^{(k)}}\mathcal{L}\big(\mathbf{Z}'_X, \tilde{\mathbf{Z}}'_X;\ \Theta^{(k)}, \Phi^{(k)}, \Upsilon^{(k)}\big), \quad (8)$$

where $\alpha$ and $\beta$ are the respective learning rates. Finally, the update of the cellular trimming scheduler $\Psi$ in Equation (7) is transformed as follows:

$$\Upsilon^{(k+1)} = \arg\min_{\Upsilon^{(k)}}\ \mathcal{L}_{\mathrm{CellCL}}\big(\mathbf{Z}'_X, \tilde{\mathbf{Z}}'_X;\ \hat{\Theta}^{(k+1)}(\Upsilon^{(k)}), \hat{\Phi}^{(k+1)}(\Upsilon^{(k)})\big). \quad (9)$$

CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning. KDD '25, August 3–7, 2025, Toronto, ON, Canada.

| Dataset | NCI1 | PROTEINS | MUTAG | NCI109 | IMDB-B | IMDB-M | A.R. ↓ |
|---|---|---|---|---|---|---|---|
| node2vec | 54.9±1.6 | 57.5±3.6 | 72.6±10.0 | - | - | - | 11.0 |
| sub2vec | 52.8±1.5 | 53.0±5.6 | 61.1±15.8 | - | 55.3±1.5 | - | 11.8 |
| graph2vec | 73.2±1.8 | 73.3±2.0 | 83.2±9.3 | - | 71.1±0.5 | - | 9.3 |
| InfoGraph | 76.2±1.0 | 74.4±0.3 | 89.0±1.1 | 76.2±1.3 | 73.0±0.9 | 48.1±0.3 | 5.7 |
| GraphCL | 77.9±0.4 | 74.4±0.5 | 86.8±1.3 | 78.1±0.4 | 71.2±0.4 | 48.9±0.3 | 6.2 |
| ADGCL | 73.9±0.8 | 73.3±0.5 | 88.7±1.9 | 72.4±0.4 | 70.2±0.7 | 48.1±0.4 | 8.3 |
| JOAO | 78.1±0.5 | 74.6±0.4 | 87.4±1.0 | 77.2±0.6 | 70.2±3.1 | 48.9±1.2 | 6.7 |
| JOAOv2 | 78.4±0.5 | 74.1±1.1 | 87.7±0.8 | 78.2±0.8 | 70.8±0.3 | 49.2±0.9 | 5.3 |
| RGCL | 78.1±1.0 | 75.0±0.4 | 87.7±1.0 | 77.7±0.3 | 71.9±0.9 | 49.3±0.4 | 4.2 |
| SimGRACE | 79.1±0.4 | 75.3±0.1 | 89.0±1.3 | 78.4±0.4 | 71.3±0.8 | 49.1±0.8 | 2.7 |
| HTML | 78.2±0.7 | 75.0±0.3 | 88.9±0.8 | 77.9±0.2 | 71.7±0.4 | 48.9±0.6 | 4.0 |
| CellCLAT | 79.4±0.2 | 75.7±0.1 | 89.7±0.3 | 78.9±0.4 | 73.4±0.1 | 50.6±0.2 | 1.0 |

Table 1: Unsupervised representation learning classification accuracy (%) on TU datasets. A.R. denotes the average rank of the results. The best results are highlighted in bold, and the second-best results are underlined.

The training of $f(\cdot\,;\Theta)$ and $g(\cdot\,;\Phi)$ alternates with the bi-level optimization process for training $\Psi(\cdot\,;\Upsilon)$ until convergence.

4 Theoretical Justification of CellCLAT
Topological Expressiveness of CellCLAT. We characterize the discriminability of CellCLAT's encoder, the CCNN, in comparison to GNNs by analyzing its relationship with the WL test. The following theorem rigorously proves that the topological expressiveness of the CCNN is strictly stronger than that of the WL test, demonstrating its superior ability to recognize higher-order structures.

Theorem 1 (CCNN is strictly more expressive than the WL test). Let $f: \mathcal{G} \to \mathcal{X}$ be a skeleton-preserving gluing process from graphs to cellular complexes. Let $G_1, G_2$ be graphs such that the 1-WL test (and hence any GNN bounded by WL) cannot distinguish between them, i.e., $c^{G_1,t} = c^{G_2,t}$ for all iterations $t$.
Then, there exists an iteration $t^\star$ such that the CCNN colouring $b^{f(G),t^\star}$ of the 0-cells of the lifted complexes $f(G_1)$ and $f(G_2)$ satisfies $b^{f(G_1),t^\star} \neq b^{f(G_2),t^\star}$, implying that the CCNN distinguishes $G_1$ and $G_2$.

We adopt the techniques from [6], with the notation $c^{G,t}$ and $b^{f(G),t^\star}$ defined accordingly; the detailed proof is provided in Appendix A.1.

Causal Analysis of Cellular Topological Redundancy. We employ
causal inference techniques [42, 43] to theoretically analyze the motivational experimental phenomena (Figure 1). By constructing a bottom-up structural causal model (SCM) [43] that underlies the participation of high-order cellular topology in the message-passing process, we demonstrate that redundant cellular topology acts as a confounder.

We define the causal variables as follows: $E$ represents the graph-level embedding, $\hat{Y}$ denotes the predicted label, and $T$ corresponds to the high-order cellular topology within the cell complex $X$ constructed over the graph $G$, i.e., the distribution of 2-cells $P_{\mathcal{C}}(X)$. The corresponding SCM graph is illustrated in Figure 6. We explain the relationships of each dependence edge in the SCM:
● $E \to \hat{Y}$. This dependence arises from the forward computation of the CCNN, where the graph-level embedding computed under the learned parameters of the current epoch is used for label prediction.
● $T \to E$. The embedding update is influenced by the aggregation of 2-cells during the message-passing process.
● $T \to \hat{Y}$. The influence of $T$ on the downstream task can be either beneficial or detrimental. This corresponds to the motivational experiment in which randomly removing 2-cells (altering their distribution) affects performance, as observed in the scatter plots of the baseline.

The backdoor path $E \leftarrow T \to \hat{Y}$ [42] prevents us from learning a stable causal relationship $E \to \hat{Y}$, thereby proving that redundant cellular topology serves as a confounder. To robustly identify the causal effect of $E$ on $\hat{Y}$, we adopt the $do$-operation from causal inference. Formally, we train the model by optimizing $P(\hat{Y} \mid do(E))$:

$$P(\hat{Y} \mid do(E)) = \int P(\hat{Y} \mid E, T)\, P(T)\, dT. \quad (10)$$

Each value of $T = t_i$ can be estimated using the cellular trimming scheduler $\Psi$, which dynamically trims 2-cells to fit the new distribution $T = t_i$. The term $P(\hat{Y} \mid E, T = t_i)$ represents the prediction obtained from the embedding after trimming $T$.
Thus, the implementation of CellCLAT to address cellular topological redundancy can be interpreted through the backdoor adjustment formula (Equation (10)) applied to the confounder $T$. For further details on the causal background, please refer to Appendix A.2.

5 Experiments
5.1 Experimental Setup
Datasets. For unsupervised learning, we benchmark our proposed CellCLAT on six established TU datasets [37], including four bioinformatics datasets (NCI1, PROTEINS, MUTAG, NCI109) and two social network datasets (IMDB-B, IMDB-M). For semi-supervised learning, we use the same datasets as in the unsupervised setting, except for MUTAG, as its small size leads to severe class imbalance issues during k-fold cross-validation in downstream tasks.

Compared Baselines. For unsupervised learning, we compare CellCLAT with ten unsupervised baselines, including Node2Vec [21], Sub2Vec [1], Graph2Vec [38], InfoGraph [47], GraphCL [62], ADGCL [48], JOAO [61], RGCL [31], SimGRACE [56], and HTML [30]. In the semi-supervised setting, we select seven unsupervised baselines for comparison.

| Dataset | NCI1 | PROTEINS | NCI109 | IMDB-B | IMDB-M | A.R. ↓ |
|---|---|---|---|---|---|---|
| InfoGraph | 68.9±0.8 | 69.5±0.9 | 67.8±0.7 | 68.5±0.6 | 43.8±0.2 | 6.4 |
| GraphCL | 69.6±0.3 | 69.7±0.1 | 69.2±0.3 | 67.8±1.3 | 43.9±0.3 | 4.4 |
| ADGCL | 65.6±0.6 | 68.5±2.0 | 65.7±0.8 | 67.7±0.9 | 41.9±0.9 | 8.8 |
| JOAO | 69.1±0.1 | 70.9±2.0 | 69.2±0.3 | 67.6±0.8 | 43.9±1.0 | 5.0 |
| JOAOv2 | 69.1±0.4 | 70.1±1.4 | 69.2±0.2 | 68.3±1.0 | 43.6±1.1 | 5.2 |
| RGCL | 70.2±0.7 | 71.2±0.9 | 69.1±0.3 | 68.9±0.3 | 44.5±0.9 | 2.4 |
| SimGRACE | 69.3±0.3 | 71.3±0.7 | 69.0±0.2 | 68.6±0.7 | 44.2±0.6 | 3.4 |
| HTML | 69.4±0.1 | 69.1±1.5 | 68.9±0.1 | 68.4±0.3 | 43.2±0.3 | 6.4 |
| CellCLAT | 70.4±0.5 | 71.8±0.1 | 69.5±0.3 | 68.5±0.4 | 44.0±0.3 | 1.8 |

Table 2: Semi-supervised representation learning classification accuracy (%) on TU datasets.

| Dataset | NCI1 | PROTEINS | MUTAG | NCI109 | IMDB-B | IMDB-M |
|---|---|---|---|---|---|---|
| SimGRACE | 79.1±0.4 | 75.3±0.1 | 89.0±1.3 | 78.4±0.4 | 71.3±0.8 | 49.1±0.8 |
| CellCL | 79.3±0.3 | 75.6±0.1 | 89.4±0.3 | 78.6±0.3 | 73.0±0.4 | 50.3±0.1 |
| CellCL w/ 0-CellTrim | 78.7±0.2 | 75.4±0.2 | 88.9±0.8 | 77.8±0.1 | 73.5±0.1 | 50.5±0.3 |
| CellCL w/ 1-CellTrim | 79.3±0.2 | 75.6±0.2 | 88.5±0.9 | 78.2±0.1 | 73.5±0.2 | 50.4±0.3 |
| CellCLAT (CellCL w/ 2-CellTrim) | 79.4±0.2 | 75.7±0.1 | 89.7±0.3 | 78.9±0.4 | 73.4±0.1 | 50.6±0.2 |

Table 3: Ablation studies in the unsupervised setting.

Evaluation Protocols. For unsupervised learning, we employ the complete datasets to train CellCLAT. Subsequently, these representations are used as input to a downstream SVM classifier with 10-fold cross-validation. We run five times with different seeds on each dataset, and the mean and standard deviation of classification accuracy are reported. For semi-supervised learning, the training phase is the same as in the unsupervised setting. In the downstream classification task, we use only 10% of the labeled data to train an SVM classifier, while leveraging the remaining unlabeled data through pseudo-labeling. Specifically, we randomly select 10% of the training set as labeled data and train an initial classifier. The classifier is then used to generate pseudo-labels for the remaining unlabeled data, and a final model is trained using both the labeled and pseudo-labeled data. To ensure reliability, we run each experiment with five random seeds and report the mean and standard deviation of accuracy. The code is available at: https://github.com/ByronJi/CellCLAT.
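The evaluation protocol above (frozen embeddings, 10-fold cross-validation) can be sketched as follows. For a dependency-free illustration, a nearest-centroid classifier stands in for the SVM used in the paper, and the synthetic embeddings are placeholders for the representations learned by CellCLAT.

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_accuracy(emb, labels, k=10):
    """k-fold CV on frozen graph-level embeddings with a nearest-centroid
    classifier (a stand-in for the SVM in the paper's protocol)."""
    idx = rng.permutation(len(emb))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = np.unique(labels[train])
        centroids = np.stack([emb[train][labels[train] == c].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(emb[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[dists.argmin(axis=1)]
        accs.append(float((pred == labels[test]).mean()))
    return float(np.mean(accs)), float(np.std(accs))

# Synthetic stand-in: two well-separated classes of "graph embeddings".
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 8)) + 3.0 * y[:, None]
mean_acc, std_acc = kfold_accuracy(X, y)
```

The key property of the protocol is that the encoder is never fine-tuned: only the cheap downstream classifier is refit per fold, so the reported accuracy measures representation quality alone.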
5.2 Unsupervised Learning
The results of unsupervised graph-level representation learning for downstream graph classification tasks are presented in Table 1. Our method consistently achieves the best performance across multiple datasets, obtaining the lowest average rank of 1.0 among all methods, which highlights the effectiveness of our approach in capturing discriminative representations. In addition, our method has smaller variance than other contrastive learning methods, which further demonstrates its stability.

5.3 Semi-supervised Learning
Table 2 presents the results of semi-supervised graph-level representation learning for downstream classification tasks. Our method, CellCLAT, achieves the lowest average rank of 1.8 among all compared methods, demonstrating the best overall performance across multiple datasets. This result indicates that the representations learned during pretraining are highly effective, enabling strong performance even when labeled data is sparse in downstream tasks.

Figure 4: Hyper-parameter sensitivity analysis.

5.4 Ablation Studies
We conduct ablation studies in the unsupervised setting, shown in Table 3. SimGRACE is a graph-based contrastive learning method without data augmentation, whereas CellCL, as described in Section 3.2, employs cellular complex contrastive learning to obtain complete representations of 0-cells, 1-cells, and 2-cells. While our motivation experiments demonstrate semantic redundancy in 2-cells, we aim to investigate whether lower-order cells exhibit similar phenomena. To this end, we slightly modify the notation of CellTrim: 0-CellTrim, 1-CellTrim, and 2-CellTrim denote the application of the cellular trimming scheduler to 0-cells (nodes),
1-cells (edges), and 2-cells (polygons), respectively. Interestingly, trimming 0-cell representations (CellCL w/ 0-CellTrim) and 1-cell representations (CellCL w/ 1-CellTrim) leads to performance degradation to varying degrees compared to using the complete cellular representation (CellCL). Notably, 0-cells demonstrate greater significance, as trimming 0-cells leads to the most substantial performance degradation. As anticipated, trimming 2-cell representations yields the most significant improvement, validating that CellCLAT effectively trims redundant topological information.

| Dataset | NCI1 | PROTEINS | MUTAG | NCI109 | IMDB-B | IMDB-M |
|---|---|---|---|---|---|---|
| Time | 15s | 14s | 1s | 15s | 37s | 54s |

Table 4: Gluing time across different datasets.

| Dataset | GraphCL | RGCL | HTML | Ours |
|---|---|---|---|---|
| PROTEINS | 17s | 41s | 43s | 51s |
| NCI1 | 94s | 134s | 148s | 172s |

Table 5: Training time across different methods.

5.5 Hyper-parameter Sensitivity Analysis
Ring size. The ring size refers to an upper bound $m$ on the number of edges in polygons or rings during the gluing process. Figure 4(a) presents the classification accuracy on the NCI1 dataset for different values of $m$, ranging from 6 to 18. We observe that on the NCI1 dataset, the highest accuracy is achieved when $m = 6$, and as $m$ increases, the performance gradually declines. This may be because NCI1 primarily consists of functional groups that contain six-membered rings (e.g., benzene), making $m = 6$ the most effective choice in our method.
Perturbation rate. We analyze the effect of the perturbation rate $\eta$ on model performance using the NCI1 dataset. As shown in Figure 4(b), the classification accuracy is highest when $\eta = 0.1$.

5.6 Complexity and Efficiency Analysis
The complexity of CellCLAT can be divided into two components: 1) the gluing process lifting a graph to a cellular complex, and 2) the message-passing procedure.
First, the gluing process is a one-time preprocessing step before training, executable in $\Theta\big((|E| + |V|N)\,\mathrm{polylog}\,|V|\big)$ time [15], where $N$ is the number of induced cycles (upper bounded by a small, dataset-dependent constant). We report the gluing times for the TU datasets in Table 4, showing that this step scales approximately linearly with the size of the input graph. Second, for an $r$-cell $\sigma$ in a cellular complex $X^{(2)}$ with maximum boundary size $B_r$, the computational complexity of the four types of messages is given by

$$\Theta\Bigg(\sum_{r=0}^{2}\bigg[B_r N_r + \binom{B_r}{2} N_r + B_{r+1} N_{r+1} + \binom{B_{r+1}}{2} N_{r+1}\bigg]\Bigg), \quad (11)$$

simplified as $\Theta\big(\sum_{r=0}^{2}\binom{B_{r+1}}{2} N_r\big)$, where $N_r$ denotes the number of $r$-cells. Standard GNNs are a special case ($r \in \{0,1\}$) with $\Theta(N_0 + N_1) = \Theta(|V| + |E|)$. Therefore, our model's runtime (including the bi-level meta cellular trimming process) is comparable to that of GNNs, with training times reported in Table 5.

[Figure 5 (t-SNE panels): GraphCL, SimGRACE, RGCL, HTML, JOAOv2, and Ours.]
Figure 5: t-SNE visualization of six methods on MUTAG.

5.7 Visualization Results
In Figure 5, we present the t-SNE visualization of six different methods on the MUTAG dataset. Our method exhibits a more distinct and well-separated clustering structure, indicating its ability to learn discriminative representations compared to the baselines.

6 Conclusion
In this paper, we are the first to propose a self-supervised framework for learning cellular complex representations, namely CellCLAT, which overcomes the challenge of extrinsic structural constraints inherent to cellular complex spaces. Furthermore, we unveil a key phenomenon, Cellular Topological Redundancy, which introduces significant performance degeneration
when exploiting higher-order topological structures in downstream tasks. Our approach offers the first self-supervised method capable of learning semantically rich representations from cellular complexes. The promising results of our framework highlight its potential for more expressive modeling of complex data relations and encourage further exploration of SSL techniques in combinatorial topological spaces.

Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments. This work is supported by the National Natural Science Foundation of China (Grant No. 62406313), the Postdoctoral Fellowship Program (Grant No. GZC20232812), the China Postdoctoral Science Foundation (Grant No. 2024M753356), the 2023 Special Research Assistant Grant Project of the Chinese Academy of Sciences, the 2023 BDA project (MIIT), the Basic Research Project of the Institute of Software, Chinese Academy of Sciences (Project No. ISCAS-JCZD-202402), and the National Key R&D Program of China (No. 2023YFB3002901).

References
[1] Bijaya Adhikari, Yao Zhang, Naren Ramakrishnan, and B Aditya Prakash. 2018. Sub2vec: Feature learning for subgraphs. In Advances in Knowledge Discovery and Data Mining: 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part II 22. Springer, 170–182.
[2] Devanshu Arya and Marcel Worring. 2018. Exploiting relational information in social networks using geometric deep learning on hypergraphs. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. 117–125.
[3] Rubén Ballester and Bastian Rieck. 2023. On the expressivity of persistent homology in graph learning. arXiv preprint arXiv:2302.09826 (2023).
[4] Claudio Battiloro, Ege Karaismailoğlu, Mauricio Tec, George Dasoulas, Michelle Audirac, and Francesca Dominici. 2024. E(n) Equivariant Topological Neural Networks. arXiv preprint arXiv:2405.15429 (2024).
[5] Claudio Battiloro, Lucia Testa, Lorenzo Giusti, Stefania Sardellitti, Paolo Di Lorenzo, and Sergio Barbarossa. 2024. Generalized simplicial attention neural networks. IEEE Transactions on Signal and Information Processing over Networks (2024).
[6] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. 2021. Weisfeiler and Lehman go cellular: CW networks. Advances in Neural Information Processing Systems 34 (2021), 2625–2640.
[7] Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lio, and Michael Bronstein. 2021. Weisfeiler and Lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning. PMLR, 1026–1037.
[8] Davide Buffelli, Farzin Soleymani, and Bastian Rieck. 2024. CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs. arXiv preprint arXiv:2409.08217 (2024).
[9] Yuzhou Chen, Baris Coskunuzer, and Yulia Gel. 2021. Topological relational learning on graphs. Advances in Neural Information Processing Systems 34 (2021), 27029–27042.
[10] Jun Dan, Weiming Liu, Chunfeng Xie, Hua Yu, Shunjie Dong, and Yanchao Tan. 2024. TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
[11] Stefania Ebli, Michaël Defferrard, and Gard Spreemann. 2020. Simplicial neural networks. arXiv preprint arXiv:2010.03633 (2020).
[12] Herbert Edelsbrunner and John Harer. 2010. Computational topology: an introduction. American Mathematical Soc.
[13] Yam Eitan, Yoav Gelberg, Guy Bar-Shalom, Fabrizio Frasca, Michael Bronstein, and Haggai Maron. 2024. Topological blind spots: Understanding and extending topological deep
learning through the lens of expressivity. arXiv preprint arXiv:2408.05486 (2024).
[14] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph neural networks for social recommendation. In The World Wide Web Conference. 417–426.
[15] Rui Ferreira, Roberto Grossi, Romeo Rizzi, Gustavo Sacomoto, and Marie-France Sagot. 2014. Amortized-delay algorithm for listing chordless cycles in undirected graphs. In European Symposium on Algorithms. Springer, 418–429.
[16] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In International Conference on Machine Learning. PMLR, 1263–1272.
[17] Lorenzo Giusti, Claudio Battiloro, Paolo Di Lorenzo, Stefania Sardellitti, and Sergio Barbarossa. 2022. Simplicial attention neural networks. arXiv preprint arXiv:2203.07485 (2022).
[18] Lorenzo Giusti, Claudio Battiloro, Lucia Testa, Paolo Di Lorenzo, Stefania Sardellitti, and Sergio Barbarossa. 2023. Cell attention networks. In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–8.
[19] Lorenzo Giusti, Teodora Reu, Francesco Ceccarelli, Cristian Bodnar, and Pietro Liò. 2023. CIN++: Enhancing topological message passing. arXiv preprint arXiv:2306.03561 (2023).
[20] Christopher Wei Jin Goh, Cristian Bodnar, and Pietro Lio. 2022. Simplicial attention networks. arXiv preprint arXiv:2204.09455 (2022).
[21] Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 855–864.
[22] Mustafa Hajij, Kyle Istvan, and Ghada Zamzmi. 2020. Cell complex neural networks. arXiv preprint arXiv:2010.00743 (2020).
[23] Mustafa Hajij, Ghada Zamzmi, Theodore Papamarkou, Nina Miolane, Aldo Guzmán-Sáenz, and Karthikeyan Natesan Ramamurthy. 2022. Higher-order attention networks. arXiv preprint arXiv:2206.00606 2, 3 (2022), 4.
[24] Mustafa Hajij, Ghada Zamzmi, Theodore Papamarkou, Nina Miolane, Aldo Guzmán-Sáenz, Karthikeyan Natesan Ramamurthy, Tolga Birdal, Tamal K Dey, Soham Mukherjee, Shreyas N Samaga, et al. 2023. Topological Deep Learning: Going Beyond Graph Data. arXiv preprint arXiv:2206.00606 (2023).
[25] Jakob Hansen and Robert Ghrist. 2019. Toward a spectral theory of cellular sheaves. Journal of Applied and Computational Topology 3, 4 (2019), 315–358.
[26] Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, and Karsten Borgwardt. 2021. Topological graph neural networks. arXiv preprint arXiv:2102.07835 (2021).
[27] Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 (2016).
[28] Qirui Ji, Jiangmeng Li, Jie Hu, Rui Wang, Changwen Zheng, and Fanjiang Xu. 2024. Rethinking dimensional rationale in graph contrastive learning from causal perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. 12810–12820.
[29] See Hian Lee, Feng Ji, and Wee Peng Tay. 2022. SGAT: Simplicial graph attention network. arXiv preprint arXiv:2207.11761 (2022).
[30] Jiangmeng Li, Yifan Jin, Hang Gao, Wenwen Qiang, Changwen Zheng, and Fuchun Sun. 2024. Hierarchical topology isomorphism expertise embedded graph contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. 13518–13527.
[31] Sihang Li, Xiang Wang, An Zhang, Yingxin Wu, Xiangnan He, and Tat-Seng Chua. 2022. Let invariant rationale discovery inspire graph contrastive learning. In International Conference on Machine Learning. PMLR, 13052–13065.
[32] Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2018. DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055
(2018).
[33] Shikun Liu, Andrew Davison, and Edward Johns. 2019. Self-supervised generalisation with meta auxiliary learning. Advances in Neural Information Processing Systems 32 (2019).
[34] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712 (2016).
[35] Hiren Madhu and Sundeep Prabhakar Chepuri. 2023. TopoSRL: topology preserving self-supervised simplicial representation learning. Advances in Neural Information Processing Systems 36 (2023).
[36] Christopher Morris, Fabrizio Frasca, Nadav Dym, Haggai Maron, Ismail Ilkan Ceylan, Ron Levie, Derek Lim, Michael M Bronstein, Martin Grohe, and Stefanie Jegelka. 2024. Position: Future Directions in the Theory of Graph Machine Learning. In Forty-first International Conference on Machine Learning.
[37] Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. 2020. TUDataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663 (2020).
[38] Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu, and Shantanu Jaiswal. 2017. graph2vec: Learning distributed representations of graphs. arXiv preprint arXiv:1707.05005 (2017).
[39] Theodore Papamarkou, Tolga Birdal, Michael M Bronstein, Gunnar E Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Lio, Paolo Di Lorenzo, et al. 2024. Position: Topological Deep Learning is the New Frontier for Relational Learning. In Forty-first International Conference on Machine Learning.
[40] Mathilde Papillon, Guillermo Bernárdez, Claudio Battiloro, and Nina Miolane. 2024. TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks. arXiv preprint arXiv:2410.06530 (2024).
[41] Mathilde Papillon, Sophia Sanborn, Mustafa Hajij, and Nina Miolane. 2023.
Architectures of Topological Deep Learning: A Survey on Topological Neural Networks. arXiv preprint arXiv:2304.10031 (2023).
[42] Judea Pearl. 2009. Causality. Cambridge University Press.
[43] Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons.
[44] V Srinivasa Rao, K Srinivas, GN Sujini, and GN Sunand Kumar. 2014. Protein-protein interaction detection: methods and analysis. International Journal of Proteomics 2014, 1 (2014), 147648.
[45] T Mitchell Roddenberry, Nicholas Glaze, and Santiago Segarra. 2021. Principled simplicial neural networks for trajectory prediction. In International Conference on Machine Learning. PMLR, 9020–9029.
[46] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks 20, 1 (2008), 61–80.
[47] Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. [n. d.]. InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. In International Conference on Learning Representations.
[48] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. 2021. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems 34 (2021), 15920–15933.
[49] Yogesh Verma, Amauri H Souza, and Vikas Garg. 2024. Topological Neural Networks go Persistent, Equivariant, and Continuous. arXiv preprint arXiv:2406.03164 (2024).
[50] Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning. PMLR, 9929–9939.
[51] Boris Weisfeiler and Andrei Leman. 1968. The reduction of a graph to canonical form and the algebra which appears therein.
|
https://arxiv.org/abs/2505.21587v1
|
NTI, Series 2, 9 (1968), 12–16.
[52] John HC Whitehead. 1949. Combinatorial homotopy I. Bull. Amer. Math. Soc. 55, 3 (1949), 213–245.
[53] Hanrui Wu, Andy Yip, Jinyi Long, Jia Zhang, and Michael K Ng. 2023. Simplicial complex neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
[54] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. 2018. MoleculeNet: a benchmark for molecular machine learning. Chemical Science 9, 2 (2018), 513–530.
[55] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. 2018. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3733–3742.
[56] Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, and Stan Z Li. 2022. SimGRACE: A simple framework for graph contrastive learning without data augmentation. In Proceedings of the ACM Web Conference 2022. 1070–1079.
[57] Maosheng Yang and Elvin Isufi. 2023. Convolutional learning on simplicial complexes. arXiv preprint arXiv:2301.11163 (2023).

CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning. KDD '25, August 3–7, 2025, Toronto, ON, Canada.

[58] Maosheng Yang, Elvin Isufi, and Geert Leus. 2022. Simplicial convolutional neural networks. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 8847–8851.
[59] Maosheng Yang, Elvin Isufi, Michael T Schaub, and Geert Leus. 2022. Simplicial convolutional filters. IEEE Transactions on Signal Processing 70 (2022), 4633–4648.
[60] Ruochen Yang, Frederic Sala, and Paul Bogdan. 2022. Efficient representation learning for higher-order data with simplicial complexes. In Learning on Graphs Conference. PMLR, 13–1.
[61] Yuning You, Tianlong Chen, Yang Shen, and Zhangyang Wang. 2021. Graph contrastive learning automated.
In International Conference on Machine Learning. PMLR, 12121–12132.
[62] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems 33 (2020), 5812–5823.
[63] Javad Zahiri, Abbasali Emamjomeh, Samaneh Bagheri, Asma Ivazeh, Ghasem Mahdevar, Hessam Sepasi Tehrani, Mehdi Mirzaie, Barat Ali Fakheri, and Morteza Mohammad-Noori. 2020. Protein complex prediction: A survey. Genomics 112, 1 (2020), 174–183.

A Derivation of Theoretical Justification

A.1 Proof of Theorem 1

Definition 1 (Color Refinement on Cellular Complexes). Let $X$ be a cellular complex, and let the set of cells in $X$ be denoted $\mathcal{C}(X)$. We define a color mapping $c\colon \mathcal{C}(X)\to\mathbb{N}$, which assigns cells to a set of natural numbers (i.e., colors). The iterative process of color refinement is represented as follows: at iteration $t$, the color of a cell $\sigma$ is denoted by $c^{X,t}(\sigma)$. The process continues until some iteration $t^\star$ satisfies $c^{X,t^\star}(\sigma)=c^{X,t^\star+1}(\sigma)$, at which point we say the color mapping $c$ has converged. The final color of any cell $\sigma\in X$ is denoted as $c^{X,\infty}(\sigma)$; for a given cell $\sigma\in X$, we abbreviate its final color as $c^{X}(\sigma)$.

Definition 2 (Color Equivalence on Cellular Complexes). Let $X_1$ and $X_2$ be two distinct cellular complexes, and let $c$ be a cellular color refinement. After applying color refinement $c$, the color multisets of $X_1$ and $X_2$ are given by $c(X_1)=\{\!\{c^{X_1}(\sigma)\mid \forall\sigma\in X_1\}\!\}$ and $c(X_2)=\{\!\{c^{X_2}(\tau)\mid \forall\tau\in X_2\}\!\}$, respectively. We say that $X_1$ and $X_2$ are color-equivalent under $c$, denoted $X_1\sim_c X_2$,
|
https://arxiv.org/abs/2505.21587v1
|
if and only if $c(X_1)=c(X_2)$. In a more intuitive sense, this definition implies that the number of cells with dimension $n$ and a given color in $X_1$ equals the number of cells with the same color and dimension $n$ in $X_2$. Color equivalence reflects some structural similarity between the two complexes but does not imply isomorphism; how much structure it captures depends on the specific color refinement used.

Next, we define a measure of the ability of different color refinement (CR) methods to distinguish non-isomorphic graphs, allowing for comparison of the topological expressiveness of two CR models.

Definition 3 (Topological Reduction). Let $a$ and $b$ be two CR models. We say that $a$ can be reduced to $b$, denoted $a\preceq b$, if for all cellular complexes $X_1$ and $X_2$, and for all cells $\sigma\in X_1$ and $\tau\in X_2$ with $\dim(\sigma)=\dim(\tau)$, the following condition holds:
$$b^{X_1}(\sigma)=b^{X_2}(\tau)\;\Longrightarrow\; a^{X_1}(\sigma)=a^{X_2}(\tau) \qquad (12)$$
Topological reduction, combined with Definition 2, provides a means of comparing the topological expressiveness of two CR models. The equation $b^{X_1}(\sigma)=b^{X_2}(\tau)$ indicates that the CR model $b$ cannot distinguish between non-isomorphic $X_1$ and $X_2$, so Equation (12) implies that if CR model $b$ fails to distinguish non-isomorphic $X_1$ and $X_2$, then CR model $a$ will also fail to do so. Therefore, if $a\preceq b$ holds, CR model $b$ has the stronger expressiveness in distinguishing non-isomorphic topological structures.

Figure 6: The proposed SCM graph for the CellCLAT framework. Redundant cellular topology $T$ acts as a confounder.

Figure 7: Two non-isomorphic graphs that cannot be distinguished by the WL test can be differentiated by CCNN, as the former contains triangular and quadrilateral 2-cells.

With the aforementioned notations at hand, we proceed to prove the statement in Theorem 1: there exist non-isomorphic graphs $G_1$ and $G_2$ that cannot be distinguished by the 1-WL test (i.e., $c(G_1)=c(G_2)$), but can be differentiated by the CCNN model (i.e., $b^{f(G_1)}\neq b^{f(G_2)}$). This implies that $b_t\not\preceq c_t$.
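For concreteness, the classical 1-WL colour refinement that $c$ instantiates on graphs can be sketched as follows. This is a minimal illustration of ours: the function name, adjacency-dict input, and canonical relabelling scheme are not from the paper.

```python
from collections import Counter

def wl_histogram(adj, iters=None):
    """1-WL colour refinement: repeatedly replace each node's colour by a
    canonical label of (own colour, sorted multiset of neighbour colours),
    stopping once the colour partition is stable (c^{t*} = c^{t*+1})."""
    iters = iters if iters is not None else len(adj)
    colors = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(iters):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # canonical relabelling so colourings are comparable across graphs
        canon = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: canon[sig[v]] for v in adj}
        if Counter(new.values()) == Counter(colors.values()):
            break  # partition no longer refines
        colors = new
    return Counter(colors.values())  # the colour multiset c(G)

# One standard 1-WL-equivalent pair: two disjoint triangles vs. a hexagon
# (both 2-regular on six vertices), in the spirit of Figure 7.
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert wl_histogram(two_triangles) == wl_histogram(hexagon)
```

A cellular lifting attaches 2-cells to the triangles and to the hexagonal ring differently, which is exactly the extra signal the CCNN exploits to separate such pairs.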
To do so, it suffices to find a pair of non-isomorphic graphs that cannot be distinguished by the 1-WL test but can be differentiated by the CCNN network, as illustrated in Figure 7. To demonstrate that the topological expressiveness of CCNN is strictly stronger than that of the 1-WL test, we also need to prove that $c_t\preceq b_t$.

We first extend the color reduction for individual cells in Definition 3 to a color reduction for sets of cells.

Lemma 1. Let $X_1$ and $X_2$ be two distinct cellular complexes, and let $A\subset X_1$ and $B\subset X_2$ be sets of cells, both of dimension $n$. For two CR models $a$ and $b$, if $a\preceq b$ and $\{\!\{a^{X_1}(\sigma)\mid\forall\sigma\in A\}\!\}\neq\{\!\{a^{X_2}(\tau)\mid\forall\tau\in B\}\!\}$, then $\{\!\{b^{X_1}(\sigma)\mid\forall\sigma\in A\}\!\}\neq\{\!\{b^{X_2}(\tau)\mid\forall\tau\in B\}\!\}$.

Proof: Assume the multiset $\{\!\{a^{X_1}(\sigma)\mid\forall\sigma\in A\}\!\}$ contains $k$ distinct colors $N_1,N_2,\dots,N_k$, where each $N_i\in\mathbb{N}$ represents a specific color. Since the two multisets differ, there must exist a color $N_\bullet$ that appears with different frequencies in $\{\!\{a^{X_1}(\sigma)\mid\forall\sigma\in A\}\!\}$ and $\{\!\{a^{X_2}(\tau)\mid\forall\tau\in B\}\!\}$. Define $A'=\{\sigma\in A\mid a^{X_1}(\sigma)=N_\bullet\}$ and $B'=\{\tau\in B\mid a^{X_2}(\tau)=N_\bullet\}$; then $|A'|\neq|B'|$. Now, consider the cells $\gamma$ in $(A\cup B)\setminus(A'\cup B')$. Since $a\preceq b$, the contrapositive of (12) implies that $a^{X_1}(\sigma)\neq a^{X_2}(\gamma)\Rightarrow b^{X_1}(\sigma)\neq b^{X_2}(\gamma)$. This means that the colors assigned by CR model $b$ to the cells in $A'\cup B'$ differ from those in $(A\cup B)\setminus(A'\cup B')$.

By contradiction, assume $\{\!\{b^{X_1}(\sigma)\mid\forall\sigma\in A\}\!\}=\{\!\{b^{X_2}(\tau)\mid\forall\tau\in B\}\!\}$. Then we would have $\{\!\{b^{X_1}(\sigma)\mid\forall\sigma\in A\setminus A'\}\!\}=\{\!\{b^{X_2}(\tau)\mid\forall\tau\in B\setminus B'\}\!\}$ and $\{\!\{b^{X_1}(\sigma)\mid\forall\sigma\in A'\}\!\}=\{\!\{b^{X_2}(\tau)\mid\forall\tau\in B'\}\!\}$. This leads to $|A'|=|B'|$, which is a contradiction. $\square$

KDD '25, August 3–7, 2025, Toronto, ON, Canada. Bin Qin et al.

Theorem 2. Let $X_1$ and $X_2$ be two distinct cellular complexes,
and let two CR models $a$ and $b$ satisfy $a\preceq b$. If $a(X_1)\neq a(X_2)$, then $b(X_1)\neq b(X_2)$.

Proof: By Lemma 1, when we traverse the sets of cells of each dimension in $X_1$ and $X_2$, we obtain that if $a(X_1)=\{\!\{a^{X_1}(\sigma)\mid\forall\sigma\in X_1\}\!\}\neq\{\!\{a^{X_2}(\tau)\mid\forall\tau\in X_2\}\!\}=a(X_2)$, then $b(X_1)=\{\!\{b^{X_1}(\sigma)\mid\forall\sigma\in X_1\}\!\}\neq\{\!\{b^{X_2}(\tau)\mid\forall\tau\in X_2\}\!\}=b(X_2)$. $\square$

Theorem 2 states that for a pair of non-isomorphic cellular complexes $(X_1,X_2)$, $a\preceq b$ indicates that if model $a$ can distinguish the non-isomorphic pair, then model $b$ can as well. The topological reduction theory above thus provides tools for comparing the topological expressiveness of two CR models: $a\preceq b$ holds if and only if model $b$ possesses at least the same topological expressiveness as model $a$. Specifically, $a\equiv b$ holds if and only if the topological expressiveness of models $a$ and $b$ is identical.

Following the technical approach outlined above, we now proceed to prove Theorem 1.

Proof: Consider the skeleton-preserving gluing process $f\colon\mathcal{G}\to\mathcal{X}$, which maps any graph $G\in\mathcal{G}$ to a cellular complex $X=f(G)$ so that the vertices of $G$ correspond to the 0-cells of $X$. Let
$$g_G\colon V(G)\to P_{f(G)}(0) \qquad (13)$$
be the isomorphism between the vertices of $G$ and the 0-cells of $X$. For each graph $G$, let $c^{G,t}$ denote the WL colouring of $G$ at iteration $t$, and let $a^{f(G),t}$ be the WL colouring of the 1-skeleton $f(G)^{(1)}$ induced by $g_G$ (i.e., for each $v\in V(G)$, set $a^{f(G)^{(1)},t}(g(v)) := c^{G,t}(v)$). Since the 1-skeleton $f(G)^{(1)}$ is isomorphic to $G$, we have
$$a^{f(G)^{(1)},t}=c^{G,t}. \qquad (14)$$
Let $b_t$ denote the colouring (or feature labelling) of the 0-cells as produced by the CCNN at iteration $t$. By design, the CCNN update for a cell $\tau$ is given by
$$h_r^{(l+1)}(\tau)=\phi\Big(h_r^{(l)}(\tau),\ \bigotimes_{\mathcal{N}_i\in\mathcal{N}}\ \bigoplus_{\sigma\in\mathcal{N}_i(\tau)}\psi_{\mathcal{N}_i}\big(h_r^{(l)}(\tau),\,h_{r'}^{(l)}(\sigma),\,\Theta_i^{(l)}\big)\Big), \qquad (15)$$
where the neighbourhoods are defined as follows:
● Boundary Adjacent: $\mathcal{B}(\tau)=\{\sigma\mid\sigma\prec\tau\}$,
● Co-Boundary Adjacent: $\mathcal{C}(\tau)=\{\sigma\mid\tau\prec\sigma\}$,
● Lower Adjacent: $\mathcal{N}_\downarrow(\tau)=\{\sigma\mid\exists\,\delta\ \text{s.t.}\ \delta\prec\sigma\wedge\delta\prec\tau\}$,
● Upper Adjacent: $\mathcal{N}_\uparrow(\tau)=\{\sigma\mid\exists\,\delta\ \text{s.t.}\ \sigma\prec\delta\wedge\tau\prec\delta\}$.
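A minimal numeric sketch of the update in Eq. (15), under simplifying assumptions of ours (not the paper's implementation): every cell carries a feature vector of the same dimension, $\psi$ is a per-neighbourhood linear map of the neighbour's feature, both $\oplus$ and $\otimes$ are sums, and $\phi$ is a ReLU.

```python
import numpy as np

def ccnn_step(feats, neighborhoods, weights):
    """One simplified CCNN update (cf. Eq. (15)): for each cell tau,
    aggregate psi(h(sigma)) over every neighbourhood structure N_i,
    then combine with the cell's own feature through phi (here: ReLU)."""
    new = {}
    for tau, h_tau in feats.items():
        msg = np.zeros_like(h_tau)
        for name, nbhd in neighborhoods.items():
            W = weights[name]  # the parameters Theta_i of psi_{N_i}
            for sigma in nbhd.get(tau, []):
                msg += feats[sigma] @ W
        new[tau] = np.maximum(0.0, h_tau + msg)  # phi
    return new

# A triangle's 1-skeleton: three 0-cells and three 1-cells (2-cell omitted).
feats = {c: np.ones(4) for c in ["v0", "v1", "v2", "e01", "e12", "e02"]}
neighborhoods = {
    # upper adjacency of vertices (two vertices sharing an incident edge)
    "up": {"v0": ["v1", "v2"], "v1": ["v0", "v2"], "v2": ["v0", "v1"]},
    # co-boundary of vertices (the edges containing them)
    "co": {"v0": ["e01", "e02"], "v1": ["e01", "e12"], "v2": ["e02", "e12"]},
}
weights = {"up": 0.1 * np.eye(4), "co": 0.1 * np.eye(4)}
out = ccnn_step(feats, neighborhoods, weights)
```

Note that for 0-cells only the co-boundary and upper-adjacency terms contribute, matching the observation used in the inductive step below.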
For all $G_1,G_2\in\mathcal{G}$, our aim is to show that for all cellular complexes $X=f(G_1),Y=f(G_2)\in f(\mathcal{G})$ the WL colouring $a_t$ is topologically reduced to the CCNN colouring $b_t$; that is, $a_t\preceq b_t$. By Theorem 2, $a_t\preceq b_t$ signifies that if $a^{f(G_1)^{(1)},t}\neq a^{f(G_2)^{(1)},t}$, then it follows that $b^{f(G_1),t}\neq b^{f(G_2),t}$. Furthermore, from Equation (14), we have that if $c^{G_1,t}\neq c^{G_2,t}$, then $a^{f(G_1)^{(1)},t}\neq a^{f(G_2)^{(1)},t}$. By transitivity, it follows that if $c^{G_1,t}\neq c^{G_2,t}$, then $b^{f(G_1),t}\neq b^{f(G_2),t}$, implying that for any pair of non-isomorphic graphs distinguishable by the WL coloring $c$, the CCNN coloring $b$ can also distinguish them.

The next goal is to prove $a_t\preceq b_t$. We proceed by induction on the iteration $t$.

1) Base Case. At iteration $t=0$, the initial features (or colours) of the 0-cells in the CCNN are obtained directly from the node features of the graph. By construction, we have $b^{f(G_1),0}=b^{f(G_2),0}\Longrightarrow a^{G_1,0}=a^{G_2,0}$.

2) Inductive Step. Assume that after $t$ iterations the WL colouring $a_t$ is topologically reduced to the CCNN colouring $b_t$, i.e., $a_t\preceq b_t$. Consider two 0-cells $\sigma$ in $X$ and $\tau$ in $Y$ such that $b^{t+1}(\sigma)=b^{t+1}(\tau)$. We know that
$$b^{t+1}(\sigma)=\big(b^{t}(\sigma),\,b^{t}(\mathcal{B}(\sigma)),\,b^{t}(\mathcal{C}(\sigma)),\,b^{t}(\mathcal{N}_\downarrow(\sigma)),\,b^{t}(\mathcal{N}_\uparrow(\sigma))\big). \qquad (16)$$
However, for 0-cells $\sigma$, the boundary $\mathcal{B}(\sigma)$ and lower adjacency $\mathcal{N}_\downarrow(\sigma)$ are not defined, so we only need to consider $b^{t}(\sigma)$, $b^{t}(\mathcal{C}(\sigma))$, and $b^{t}(\mathcal{N}_\uparrow(\sigma))$. Given that $b^{t+1}(\sigma)=b^{t+1}(\tau)$, we obtain:
$$b^{t}(\sigma)=b^{t}(\tau),\quad b^{t}(\mathcal{C}(\sigma))=b^{t}(\mathcal{C}(\tau)),\quad\text{and}\quad b^{t}(\mathcal{N}_\uparrow(\sigma))=b^{t}(\mathcal{N}_\uparrow(\tau)). \qquad (17)$$
By the inductive hypothesis $a_t\preceq b_t$, we further obtain $a^{t}(\sigma)=a^{t}(\tau)$ and
$$a^{t}(\mathcal{N}_\uparrow(\sigma))=\{\!\{a^{t}(\delta_1)\mid\delta_1\in\mathcal{N}_\uparrow(\sigma)\}\!\}=\{\!\{a^{t}(\delta_2)\mid\delta_2\in\mathcal{N}_\uparrow(\tau)\}\!\}=a^{t}(\mathcal{N}_\uparrow(\tau)). \qquad (18)$$
Since the 1-skeleton corresponds to a graph, and the WL coloring aggregates only self and neighbor features, we have $a^{t+1}(\sigma)=\big(a^{t}(\sigma),\,a^{t}(\mathcal{N}_\uparrow(\sigma))\big)$. Combining this with Equation (18), we conclude that $a^{t+1}(\sigma)=a^{t+1}(\tau)$. Finally, since $b^{t+1}(\sigma)=b^{t+1}(\tau)$ implies
$a^{t+1}(\sigma)=a^{t+1}(\tau)$, we establish that $a_{t+1}\preceq b_{t+1}$. By induction, we have proven that $a_t\preceq b_t$ for all $t$. Further leveraging transitivity and Theorem 2, we conclude that $c_t\preceq b_t$. $\square$

A.2 Causal Background Knowledge

Structural Causal Models and Intervention. A structural causal model (SCM) [42] is a triple $M=\langle X,U,F\rangle$, where $U$ is known as the exogenous variables, determined by factors external to the model. $X=\{X_1,X_2,\dots,X_n\}$ is referred to as the endogenous variables, whose values are determined by the functions $F=\{f_1,f_2,\dots,f_n\}$. Each $f_i$ is a mapping $f_i\colon U_i\cup PA_i\to X_i$, where $U_i\subseteq U$ and $PA_i\subseteq X\setminus\{X_i\}$, satisfying:
$$x_i=f_i(pa_i,u_i),\quad i=1,2,\dots,n. \qquad (19)$$
Each causal model $M$ corresponds to a directed acyclic graph (DAG) $G$, where each node corresponds to a variable in $X\cup U$, and directed edges point from $U_i\cup PA_i$ to $X_i$.

An intervention refers to forcing a variable $X_i$ to take a fixed value $x_i$. This equivalently removes $X_i$ from the influence of its original functional mechanism $x_i=f_i(pa_i,u_i)$ and replaces it with the constant function $X_i=x_i$. Formally, we denote the intervention as $do(X_i=x_i)$, or simply $do(x_i)$. After the intervention on $X_i$, the corresponding causal graph $G_{x_i}$ is obtained by removing all arrows pointing to $X_i$ in $G$, to represent the post-intervention world.

Path and $d$-separation. We summarize definitions [42] that help us determine independence between variables in the SCM graph.

Table 6: Model architectures and hyper-parameters.

| Hyper-parameter | Unsupervised | Semi-supervised |
| --- | --- | --- |
| Number of layers | 3 | 3 |
| Cellular embedding dim | 32 | 32 |
| Number of projections | 2 | 2 |
| Nonlinear transformation dim | 96 | 96 |
| Graph norm | BatchNorm | BatchNorm |
| Jump mode | cat | cat |
| Pre-train lr | 0.001 | 0.001 |
| Pooling | global_add_pool | global_add_pool |
| Temperature τ | 0.2 | 0.2 |
| Training epochs | 20 | 20 |
| Batch size | 128 | 128 |
| Permuted rate η | {0.1, 1, 10, 100} | {0.1, 1, 10, 100} |
| Ring size k | 6 | 6 |

Definition 4.
(Path) In the SCM graph, the paths from variable $X$ to $Y$ are built from three types of structures: 1) Chain: $A\to B\to C$ or $A\gets B\gets C$; 2) Fork: $A\gets B\to C$; and 3) Collider: $A\to B\gets C$.

Definition 5. ($d$-separation) A path $p$ is blocked by a set of nodes $Z$ if and only if:
(1) $p$ contains a chain $A\to B\to C$ or a fork $A\gets B\to C$ such that the middle node $B$ is in $Z$ (i.e., $B$ is conditioned on), or
(2) $p$ contains a collider $A\to B\gets C$ such that the collider node $B$ is not in $Z$, and no descendant of $B$ is in $Z$.
If $Z$ blocks every path between two nodes $X$ and $Y$, then $X$ and $Y$ are $d$-separated conditional on $Z$, and thus are independent conditional on $Z$, denoted $X\perp\!\!\!\perp Y\mid Z$.

Backdoor and Backdoor Adjustment.

Definition 6. (Backdoor) In a DAG $G$, a set of variables $Z$ satisfies the backdoor criterion for an ordered pair of variables $(X_i,X_j)$ if:
(1) No node in $Z$ is a descendant of $X_i$.
(2) $Z$ blocks all paths between $X_i$ and $X_j$ that are directed into $X_i$.
Similarly, if $X$ and $Y$ are two disjoint subsets of nodes in $G$, then $Z$ is said to satisfy the backdoor criterion for $(X,Y)$ if $Z$ satisfies the backdoor criterion for any pair of variables $(X_i,X_j)$, where $X_i\in X$ and $X_j\in Y$.

Definition 7. (Backdoor adjustment) If a set of variables $Z$ satisfies the backdoor criterion for $(X,Y)$, then the causal effect of $X$ on $Y$ is identifiable and is given by the following formula:
$$P(Y=y\mid do(X=x))=\sum_{z}P(Y=y\mid X=x,Z=z)\,P(Z=z). \qquad (20)$$

B More Experimental Details

B.1 Model Configurations
The details of our model architectures and the corresponding hyper-parameters are summarized in Table 6.

B.2 Running Environment
The results of unsupervised
learning and semi-supervised learning were obtained using a single NVIDIA V100 GPU with 32 GB of memory. We performed the experiments on Ubuntu 20.04 as our operating system.

B.3 Additional Hyper-parameter Analysis
Batch size. Figure 8(a) shows the classification accuracy of our models after training for twenty epochs using different batch sizes from 32 to 256 on the NCI1 dataset. We observe that increasing the batch size leads to improved model performance.
Epochs. Figure 8(b) illustrates the classification accuracy on the NCI1 dataset across different training epochs, ranging from 20 to 100. We observe a general upward trend in accuracy as the number of epochs increases, indicating that prolonged training allows the model to learn more discriminative representations. Additionally, the variance remains relatively stable, suggesting the robustness of our approach across different training durations.

Figure 8: Hyper-parameter sensitivity analysis.
Figure 9: Visualization of unsupervised learning results.

B.4 Additional Visualization Results
Figure 9 illustrates the results of the unsupervised learning comparisons through a radar chart, where each axis represents a dataset and the vertices correspond to the classification accuracy of different methods. The different colors denote the top-5 performing models. Our method consistently outperforms others across multiple datasets, highlighting its robustness and generalization ability in unsupervised representation learning.
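The backdoor adjustment of Eq. (20) can be sanity-checked on a toy simulated SCM. This is our own construction, not part of the paper: a binary confounder Z plays the role that the redundant topology T plays in the SCM graph of Figure 6.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy SCM with a backdoor path X <- Z -> Y alongside the causal edge X -> Y.
Z = rng.integers(0, 2, n)
X = (rng.random(n) < np.where(Z == 1, 0.8, 0.2)).astype(int)
Y = (rng.random(n) < 0.2 + 0.3 * X + 0.3 * Z).astype(int)

def p_do(x):
    """P(Y=1 | do(X=x)) via the backdoor adjustment of Eq. (20)."""
    return sum(Y[(X == x) & (Z == z)].mean() * (Z == z).mean() for z in (0, 1))

naive = Y[X == 1].mean() - Y[X == 0].mean()   # confounded contrast (~0.48 here)
effect = p_do(1) - p_do(0)                    # recovers the true effect (~0.30)
```

By construction the direct effect of X on Y is 0.3; conditioning on Z inside the sum blocks the backdoor path, while the naive contrast absorbs the confounding through Z.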
arXiv:2505.21588v1 [cs.MA] 27 May 2025

Herd Behavior: Investigating Peer Influence in LLM-based Multi-Agent Systems
Young-Min Cho, Sharath Chandra Guntuku, Lyle Ungar
University of Pennsylvania
jch0@seas.upenn.edu

Abstract
Recent advancements in Large Language Models (LLMs) have enabled the emergence of multi-agent systems where LLMs interact, collaborate, and make decisions in shared environments. While individual model behavior has been extensively studied, the dynamics of peer influence in such systems remain underexplored. In this paper, we investigate herd behavior, the tendency of agents to align their outputs with those of their peers, within LLM-based multi-agent interactions. We present a series of controlled experiments that reveal how herd behaviors are shaped by multiple factors. First, we show that the gap between self-confidence and perceived confidence in peers significantly impacts an agent's likelihood to conform. Second, we find that the format in which peer information is presented plays a critical role in modulating the strength of herd behavior. Finally, we demonstrate that the degree of herd behavior can be systematically controlled, and that appropriately calibrated herd tendencies can enhance collaborative outcomes. These findings offer new insights into the social dynamics of LLM-based systems and open pathways for designing more effective and adaptive multi-agent collaboration frameworks1.

1 Introduction
Herd behavior refers to the tendency of individuals in a group to mimic the actions, decisions, or behaviors of a larger group, often disregarding their own analysis or instincts (Banerjee, 1992; Bikhchandani et al., 1992). Humans often adjust their behavior in response to observing peers, aligning their decisions towards a perceived group consensus (Raafat et al., 2009; Muchnik et al., 2013). This human tendency raises questions about whether similar dynamics emerge in artificial intelligence.
1 Code and data will be released in the camera-ready.

Figure 1: An example of herd behavior: Even when uncertain, individuals tend to follow the crowd, sometimes against their own judgment.

In Large Language Model (LLM)-based multi-agent systems (MAS), multiple autonomous agents powered by LLMs interact and reason collectively, creating fertile ground for social behaviors such as conformity to emerge (Guo et al., 2024; Park et al., 2023). Understanding whether and how these agents exhibit herd behavior is crucial for evaluating the robustness, diversity, and effectiveness of collective decision-making.

Herd behavior in LLM-based MAS can be a double-edged sword. On one hand, convergence towards a group consensus can streamline decision-making, reduce conflict, and enhance coordination, particularly in scenarios where agreement or collective confidence is desirable (Guo et al., 2024). It can also serve as a mechanism for amplifying strong signals or leveraging collective intelligence, allowing agents to compensate for individual uncertainty by incorporating peer input (Liu et al., 2024a; Du et al., 2023). On the other hand, excessive conformity can suppress diversity of thought, lead to premature consensus, and propagate errors if initial signals are flawed (Cho et al., 2024; Weng et al., 2025). Such blind alignment may reduce the system's robustness, hinder exploration of alternative solutions, and make the collective more susceptible to cascading
|
https://arxiv.org/abs/2505.21588v1
|
failures (Wu and Ito, 2025; Zhu et al., 2024). Understanding when herd behavior is beneficial and when it is detrimental is essential for building trustworthy, adaptive, and resilient multi-agent LLM systems.

However, the mechanisms underlying the emergence of herd behavior, as well as the factors that modulate its intensity, remain understudied in the context of LLM-driven multi-agent collaboration. Understanding and intentionally managing herd behaviors within multi-agent collaborations is therefore crucial.

In this study, we design a set of controlled experiments using LLM-based agents to investigate herd behaviors in MAS. We manipulate key variables such as agents' self-confidence, perceived peer confidence, and the format of peer information presentation to systematically observe their influence on conformity behavior. By quantifying alignment patterns and measuring task outcomes under different conditions, we uncover the mechanisms behind herd tendencies and explore how they can be tuned to optimize collaboration quality. We find that flip rates peak when agents have very low self-confidence and perceive peers as confident (Figure 2), with the most persuasive peer answer driving the strongest herding (0.48 avg. flip rate) but also reducing accuracy on factual tasks (Table 6). The format of peer information also significantly impacts herd behavior: using the combination of factors that amplify herding yields the highest flip rate (0.63) and group accuracy (0.29). In contrast, prompt-based controls have minimal effect (Table 3).

Our experiments provide the following contributions:
1. We find that herd behavior in LLM-based agents is primarily driven by the relationship between an agent's self-confidence and its perceived confidence in peers. In particular, larger gaps between these two measures significantly increase the likelihood of conformity.
2. We show that the presentation format of peer responses critically affects the degree of herd behavior. Notably, placing disagreeing opinions before agreeing ones amplifies conformity, suggesting that ordering and framing effects shape social influence among agents.
3. We demonstrate that herd behavior can be systematically tuned, and that appropriate calibration of conformity levels can enhance the effectiveness of multi-agent collaboration, offering design principles for future adaptive MAS systems.

2 Preliminaries
In this section, we introduce the preliminaries of the experiments.

Problem setting. In a multi-agent collaboration, each agent $a_i\in A$ is prompted with a question $q$ and provides a response $r_i\in R$, where:
• $A=\{a_1,a_2,\dots,a_{|A|}\}$ is the set of agents involved in the collaboration,
• $R=\{r_1,r_2,\dots,r_{|A|}\}$ is the corresponding set of responses, where $r_i$ is the response from agent $a_i$.
All agents share the same generation distribution $P_\tau(\cdot\mid C)$, which is conditioned on the context $C$ and modulated by temperature $\tau$. The context $C$ for each agent includes the question $q$ and optionally the responses of the other agents, denoted $R_{-i}=\{r_j\mid j\neq i,\ a_j\in A\}$. Each agent selects the response with the highest probability under this distribution. Agents do not have an external memory module.
For simplicity, all questions are multiple-choice questions, where $r\in\mathcal{R}=\{A,B,C,\dots\}$ is one of the discrete candidate responses, and $\mathcal{R}$ is the set of all candidate responses for question $q$.
Definition 1: Confidence. Following the work of Xiao and Wang 2019, we define an agent's confidence (preference) in its response to a question as the probability assigned by the generation distribution $P(r\mid C)$. Since the responses are fixed categorical choices, we treat each $r$ as a single-token label, and define the confidence as:
$$P(r\mid C)=\frac{\exp(z_r)}{\sum_{r'\in\mathcal{R}}\exp(z_{r'})},$$
where $z_r$ is the unnormalized logit score for choice $r$. The higher the probability assigned to a response $r$, the more confident the agent is in its correctness.

Definition 2: Preference Update. We define how an agent's response preference changes when peer information is introduced. Given a question $q$:
• The original response of the agent, based solely on the question, is defined as: $r'=\arg\max_{r\in\mathcal{R}}P(r\mid q)$.
• The revised response, incorporating peer information, is defined as: $r^h=\arg\max_{r\in\mathcal{R}}P(r\mid q,R_{-i})$.
This formulation captures how the presence of other agents' responses $R_{-i}$ can influence an agent's selected answer.

Definition 3: Herd Behavior. Following the work of Laban et al. 2023, we define herd behavior as the tendency of an agent to change its initial decision after observing or interacting with others. Formally, we define the herd behavior of an agent $a_i$ on question $q_k$ as a binary indicator:
$$I_{\text{flip}}(a_i,q_k)=\begin{cases}1,&\text{if } r'_{i,k}\neq r^h_{i,k}\\ 0,&\text{otherwise}\end{cases}$$
We define the flip rate as the average fraction of agents who changed their answers, aggregated across all questions:
$$\text{Flip Rate}=\frac{1}{|Q|\cdot|A|}\sum_{q_k\in Q}\sum_{a_i\in A}I_{\text{flip}}(a_i,q_k),$$
where $Q$ is the set of questions. A higher flip rate indicates a stronger degree of herd behavior.

3 Self and Perceived Confidence - Primary Driver of Herd Behavior
Herd behavior in human society is influenced by multiple factors, with numerous studies indicating that confidence plays a central role in driving this phenomenon.
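Definition 1 (confidence) and Definition 3 (flip rate) translate directly into code. A minimal sketch of ours, where `logits` is a hypothetical dict of per-choice scores $z_r$:

```python
import math

def confidence(logits):
    """Definition 1: P(r|C) as a softmax over single-token choice logits z_r."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {r: math.exp(z - m) for r, z in logits.items()}
    total = sum(exps.values())
    return {r: e / total for r, e in exps.items()}

def flip_rate(original, revised):
    """Definition 3: fraction of (agent, question) pairs whose revised
    answer r^h differs from the original answer r'."""
    return sum(o != h for o, h in zip(original, revised)) / len(original)

probs = confidence({"A": 2.0, "B": 0.0, "C": -1.0})
rate = flip_rate(["A", "B", "C", "D"], ["A", "C", "C", "A"])  # -> 0.5
```

Subtracting the maximum logit leaves the softmax unchanged but avoids overflow for large scores.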
Studies in behavioral economics and psychology have shown that confident individuals can disproportionately influence group decisions, especially when others are uncertain (Zarnoth and Sniezek, 1997; Bang et al., 2017; Fu et al., 2017). In group settings, individuals often defer to those who express higher certainty, regardless of accuracy (Pescetelli et al., 2021; Moussaïd et al., 2013).

Inspired by previous studies, we explore how confidence influences agents' tendency to exhibit herd behavior in a MAS setting. Specifically, we categorize confidence into two levels: self-confidence and perceived confidence. Self-confidence refers to how certain an agent is about its own original response, while perceived confidence refers to how confident the agent perceives its peers to be in their original responses. We hypothesize that lower self-confidence, combined with higher perceived confidence in peers, leads to stronger herd behavior.

3.1 Experiment Setting

Table 1: Basic statistics of the benchmarks used in our experiments.

| Type | Benchmark | Number of Questions | Avg. Number of Choices |
| --- | --- | --- | --- |
| Factual | MMLU-Pro (Wang et al., 2024) | 12,032 | 9.47 |
| Factual | GPQA-Diamond (Rein et al., 2024) | 198 | 4.00 |
| Factual | ARC-Challenge (Clark et al., 2018) | 1,172 | 4.00 |
| Opinionated | OpinionQA (Santurkar et al., 2023) | 1,506 | 3.24 |
| Opinionated | GlobalOpinionQA (Durmus et al., 2023) | 2,555 | 4.09 |
| Opinionated | SOCIAL IQA (Sap et al., 2019) | 1,954 | 3.00 |

To examine the effects of self-confidence and perceived confidence on herd behavior, we adopt a minimal MAS configuration involving only two agents, $a_i$ and $a_j$. This simplification
ensures that each agent interacts with only one peer, allowing for clearer attribution of behavioral changes.

From the agent's original distribution $P(r\mid q)$ over possible responses to question $q$, we manually select one of four types of responses to serve as the peer's opinion $r_j$:
• 1st: The most probable response, which coincides with the agent's original response $r_i$.
• 2nd: The second most probable response, chosen to represent a highly persuasive alternative from the agent's perspective.
• rnd: A randomly sampled response from the distribution $P(r\mid q)$.
• last: The least probable response, assumed to be the least persuasive to the agent.

Given a question $q\in Q$ and the selected peer opinion $r_j$, the agent generates a revised response $r^h=\arg\max_{r\in\mathcal{R}}P(r\mid q,r_j)$. We then compute the flip rate across all questions, analyzing how the strength of herd behavior relates to the agent's self-confidence $P(r_i\mid q)$ and the perceived confidence $P(r_j\mid q)$.

Additionally, we examine varying degrees of perceived confidence based on the peer's persona. By manipulating factors such as education level (graduate degree, college degree, high school diploma), social hierarchy (employer vs. employee), and domain expertise (in-domain vs. out-of-domain)2, we investigate how these factors impact the strength of herd behavior. These experiments are performed with the 2nd response type for the strongest signal.

2 Only MMLU-Pro and GPQA-Diamond contain domain-specific questions. We label a peer as in-domain if their provided expertise matches the question's domain.
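The four peer conditions amount to a selection rule over the agent's own distribution $P(r\mid q)$; a sketch under our own naming (the paper's code is not released here):

```python
import random

def peer_opinion(probs, condition, rng=None):
    """Pick the simulated peer response r_j from the agent's distribution
    P(r|q) according to one of the four conditions in Section 3.1."""
    rng = rng or random.Random(0)
    ranked = sorted(probs, key=probs.get, reverse=True)
    if condition == "1st":   # coincides with the agent's own answer r_i
        return ranked[0]
    if condition == "2nd":   # most persuasive alternative
        return ranked[1]
    if condition == "last":  # least persuasive alternative
        return ranked[-1]
    if condition == "rnd":   # sample from P(r|q) itself
        return rng.choices(ranked, weights=[probs[r] for r in ranked])[0]
    raise ValueError(f"unknown condition: {condition}")

probs = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}
```

Note that the 2nd condition is "persuasive" precisely because it is the alternative the agent itself already assigns the most mass to.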
Table 2: Flip rates across different peer conditions to evaluate the impact of perceived confidence on herd behavior. The first three benchmarks are factual, the latter three opinionated. Asterisks (*) denote statistical significance (p < 0.05) based on paired t-tests within each group.

| Peer Condition | MMLU-Pro | GPQA-Diamond | ARC-Challenge | OpinionQA | GlobalOpinionQA | SOCIAL IQA | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1st | 0.03 | 0.05 | 0.01 | 0.01 | 0.01 | 0.02 | 0.03 |
| 2nd | 0.51* | 0.58* | 0.09* | 0.61* | 0.69* | 0.16* | 0.48* |
| rnd | 0.31 | 0.40 | 0.04 | 0.55 | 0.60 | 0.09 | 0.33 |
| last | 0.25 | 0.37 | 0.04 | 0.52 | 0.56 | 0.09 | 0.29 |
| Graduate Degree | 0.50* | 0.56* | 0.08 | 0.76* | 0.83* | 0.15* | 0.51* |
| College Degree | 0.47 | 0.49 | 0.08 | 0.74 | 0.83 | 0.14 | 0.48 |
| High School Diploma | 0.44 | 0.48 | 0.07 | 0.71 | 0.79 | 0.14 | 0.46 |
| Employer | 0.57* | 0.71 | 0.10 | 0.71 | 0.77 | 0.20* | 0.54* |
| Employee | 0.53 | 0.71 | 0.09 | 0.74* | 0.79* | 0.17 | 0.52 |
| In-Domain | 0.55* | 0.72* | - | - | - | - | 0.55* |
| Out-Of-Domain | 0.48 | 0.46 | - | - | - | - | 0.48 |

Figure 2: Flip rate across varying levels of self-confidence and perceived confidence. The experiment includes all benchmarks under the 2nd, rnd, and last peer conditions. Lower self-confidence or higher perceived confidence corresponds to stronger herd behavior.

3.2 Dataset
We select six multiple-choice benchmarks to ensure the generalizability of our experiments. They cover both factual and opinionated questions, since real-world decision-making often involves a mix of objective knowledge and subjective judgment. While factual questions have gold answers, opinionated questions do not. The basic statistics of the selected benchmarks are shown in Table 1.

3.3 Results
Confidence-Driven Herding. Figure 2 shows the flip rate under varying levels of self-confidence and perceived confidence, averaged across all benchmarks using the 2nd, rnd, and last peer conditions. The heatmap reveals
a clear pattern: flip rates are highest when self-confidence is low and perceived peer confidence is high, indicating stronger herd behavior in such conditions. As self-confidence increases, individuals become less likely to switch their answers, even when peers appear confident. Conversely, when perceived confidence from peers increases, individuals with low self-confidence are more prone to change their responses. These findings highlight the significant roles that both internal certainty and social influence play in shaping decision-making behavior.

Peer Influence Dynamics. Table 2 and Table 6 examine how perceived confidence from different peer conditions influences the strength of herd behavior. Table 2 shows that 2nd, the second most probable response, consistently results in the highest flip rates across benchmarks, indicating a strong susceptibility to peer influence. Educational background, social hierarchy, and domain relevance also modulate flip rates, with peer personas holding a graduate degree, being an employer, or having in-domain expertise causing the stronger herd tendencies. Interestingly, an employer as peer caused weaker herd behavior on opinionated benchmarks, indicating that social hierarchy has the opposite effect on subjective decisions.

Herding and Accuracy. Table 6 evaluates the accuracy of the revised responses after exposure to peer input. Interestingly, the 2nd peer condition, which induces the strongest herd effect, also leads to a statistically significant drop in accuracy compared to the original response, particularly on factual benchmarks like MMLU-Pro and ARC-Challenge. This suggests that following a confident peer does not always yield better outcomes; in some cases, it may degrade accuracy.

4 Format of Peer Information - Modulator of Herd Behavior

Figure 3: Comparison of average flip rates across five presentation formats. Each heatmap shows the average flip rate based on different combinations of agreeing and disagreeing peers. The x-axis represents the number of agreeing agents, and the y-axis represents the number of disagreeing agents. Higher flip rates are shown in red, while lower rates are shown in blue.

Figure 4: Comparison of average flip rates across two presentation orders. Each heatmap shows the average flip rate based on different combinations of agreeing and disagreeing peers. The x-axis represents the number of agreeing agents, and the y-axis represents the number of disagreeing agents. Higher flip rates are shown in red, while lower rates are shown in blue.

In complex collaborative settings involving large groups, confidence is shaped not only by the response content or peer demographics, but also by the number of peers who express agreement or disagreement with a given response. Prior research indicates that social validation, such as the quantity of agreeing peers, can strongly influence individual confidence, often exerting a greater effect than the intrinsic merit of the original response (Asch, 2016; Moussaïd et al., 2013).

The format in which peer information is conveyed to individuals is also crucial. In particular, how peer input is summarized and presented, especially when representing large groups, can shape perception in distinct ways. Furthermore, because peer information is communicated through language, its inherently sequential nature introduces an unavoidable ordering, which can influence how individuals interpret the information.

To explore the
|
https://arxiv.org/abs/2505.21588v1
|
role of information format in affecting herd behavior, we conduct a series of experiments to assess how factors such as the number of agreeing or disagreeing agents, presentation methods, and the presentation order affect the magnitude of herd behavior.

4.1 Experiment Setting

We extended the experimental design from Section 3.1 with several key modifications to examine the effects of peer information format on herd behavior.

I. Number of Agreeing and Disagreeing Agents In contrast to the previous setup, which included only a single peer, we introduced multiple peers and categorized them into two groups: agreeing agents AA and disagreeing agents AD. Agreeing agents share the same response as the target agent (ri = rj), while disagreeing agents provide the 2nd response type, the most persuasive alternative, as their peer response.

II. Presentation Methods To convey peer information to the target agent, we compared the following five methods of presentation:
•Count : Present the number of peers supporting each response (e.g., "X agents think the answer is A").
•Ratio : Present the percentage of peers supporting each response (e.g., "X% of agents think the answer is A").
•List : List the agents supporting each response (e.g., "Agents A, B, and C think the answer is A").
•Disc : Display each peer's response individually (e.g., "Agent A thinks the answer is A; Agent B thinks the answer is B; ...").
•Reason : Extend the Disc method by including justifications for each response (e.g., "Agent A thinks the answer is A because ...").

III. Presentation Order To assess the influence of order in information delivery, we employed two sequencing conditions: presenting agreeing agents (AA) before disagreeing agents (AD) (Agree First), and vice versa (Disagree First).
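As a concrete illustration, the five presentation methods can be sketched as a single formatting helper. This is a hypothetical reconstruction, not the authors' code: the function name `format_peers` and the exact message templates are our assumptions based on the examples above.

```python
# Hypothetical sketch of the five presentation methods (Section 4.1 II).
# Function name and templates are illustrative assumptions, not the paper's prompts.

def format_peers(method: str, agree: list[str], disagree: list[str],
                 agree_ans: str, disagree_ans: str) -> str:
    """Render peer information for the target agent under one presentation format."""
    if method == "count":
        return (f"{len(agree)} agent(s) think the answer is {agree_ans}. "
                f"{len(disagree)} agent(s) think the answer is {disagree_ans}.")
    if method == "ratio":
        total = len(agree) + len(disagree)
        return (f"Among {total} agents, {100 * len(agree) // total}% think the answer "
                f"is {agree_ans} and {100 * len(disagree) // total}% think the answer "
                f"is {disagree_ans}.")
    if method == "list":
        return (f"Agents {', '.join(agree)} think the answer is {agree_ans}. "
                f"Agents {', '.join(disagree)} think the answer is {disagree_ans}.")
    if method == "disc":
        lines = [f"Agent {a} thinks the answer is {agree_ans}." for a in agree]
        lines += [f"Agent {a} thinks the answer is {disagree_ans}." for a in disagree]
        return " ".join(lines)
    if method == "reason":
        # The paper extends Disc with a justification per agent; elided here.
        lines = [f"Agent {a} thinks the answer is {agree_ans} because ..." for a in agree]
        lines += [f"Agent {a} thinks the answer is {disagree_ans} because ..." for a in disagree]
        return " ".join(lines)
    raise ValueError(f"unknown method: {method}")

print(format_peers("count", ["A", "B"], ["C"], "A", "B"))
```

Swapping the order in which the agreeing and disagreeing halves are concatenated would correspond to the Agree First / Disagree First conditions.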
4.2 Dataset

We continue using the six benchmarks described in Section 3.2, applying random sampling and capping the number of questions at 200 per benchmark to ensure data balance and adhere to budget constraints.

4.3 Results

(Dis)agreement Size and Herding Table 4 and Figure 3 illustrate how the strength of herding behavior varies with the size of agreement or disagreement. Specifically, Table 4 reports the average flip rate and the Pearson correlation between the flipping indicator I and the number of agreeing agents (|AA|), disagreeing agents (|AD|), and their difference (|AA| − |AD|), evaluated across different presentation formats and benchmark datasets. Overall, among the various formats, herding behavior is most pronounced when participants are presented with reasons. Interestingly, this effect is negligible on opinion-based benchmarks, suggesting that listing reasons is primarily effective in objective tasks. Moreover, across both factual and opinionated benchmarks, an increase in the number of agreeing agents or a decrease in the number of disagreeing agents generally leads to weaker herding behavior, and vice versa. Among all metrics, the difference between agreeing and disagreeing agents, reflecting the relative confidence between the individual and their peers, emerges as the strongest predictor of herding.

Effect of Presentation Format Figure 3 compares five different presentation formats, where each subfigure displays a heatmap of
the average flip rate as a function of the number of agreeing and disagreeing agents. The choice of presentation format significantly influences herding behavior. In the Count and Ratio formats, the heatmaps reveal a distinct separation into upper and lower triangles along the diagonal where the number of agreeing and disagreeing agents is equal. The upper triangle, representing cases where agreement outnumbers disagreement, exhibits consistently low flip rates regardless of the total number of peers, whereas the lower triangle shows much higher flip rates. This clear division suggests that numerical summaries of peer opinions help agents assess the balance between agreement and disagreement more effectively. In contrast, the other three formats, List, Disc, and Reason, do not show such a sharp division. Instead, they demonstrate strong herding behavior only when the number of agreeing agents is small (≤2), and exhibit more resistance to change when three or more agents agree. Notably, the Reason format results in the highest overall flip rates, even when agreement is greater, suggesting that providing justifications enhances persuasive power among agents.

Effect of Presentation Order One notable finding from our experiment concerns the order in which information about agreeing and disagreeing agents is presented. Figure 4 displays heatmaps comparing two conditions: one where agreement is shown first, and another where disagreement is shown first, averaged across all presentation formats. The results reveal a strong difference between the two orders. When disagreement is presented first, herding behavior is generally stronger. The separation between the upper and lower triangles in the heatmap is more pronounced in this condition.
In contrast, when agreement is shown first, high flip rates occur primarily when the number of agreeing agents is small (≤2), suggesting that the sequence in which peer opinions are revealed can influence the tendency toward herd behavior.

5 Controllable Herd Behavior

We have shown that herd behavior can be significantly influenced by factors such as presentation format, agreement size, and information order. These findings suggest that herd behavior is not fixed, but controllable.

In MAS applications, certain tasks benefit from strong herd behavior. For instance, in consensus-building or decision aggregation, quick convergence improves coordination and efficiency (Cho et al., 2024). This is useful in tasks like collective prediction or distributed sensing. In contrast, tasks that rely on exploration or creativity, such as idea generation or strategy search, require diverse perspectives (Hong et al., 2023; Xu et al., 2023). In these cases, strong herding can suppress innovation and lead to premature convergence, making independent reasoning more valuable. Therefore, being able to control the strength of herd behavior is crucial. By adjusting how peer information is presented, we can encourage either convergence or independence depending on the task needs.

                 MMLU-Pro                                                       GlobalOpinionQA
Condition        Flip Rate (↑)  Entropy (↓)  Consensus Rate (↑)  Accuracy (↑)   Flip Rate (↑)  Entropy (↓)  Consensus Rate (↑)
Original         -              1.10         0.13                0.04           -              0.82         0.14
CoT              -              0.91         0.28                0.23           -              0.63         0.34
Baseline         0.55           0.43         0.56                0.18           0.44           0.29         0.69
Strong Factors   0.63*          0.43         0.54                0.29*          0.59*          0.49         0.46
Weak Factors     0.36           0.28*        0.69*               0.16           0.23           0.22*        0.76*
Strong Prompt    0.55           0.43         0.57                0.17           0.44           0.29         0.69
Weak Prompt      0.55           0.43         0.56                0.18           0.46           0.36         0.61

Table 3: The effects of different control conditions on herd behaviors across factual and opinionated benchmarks. Flip rate, consensus rate, and accuracy (MMLU-Pro only) are higher-is-better metrics, while entropy is a lower-is-better metric. Bolded values are the best value in the column, and asterisks (*) denote statistical significance (p < 0.05) based on paired t-tests within each column. Strong Factors yield the highest accuracy on MMLU-Pro and the highest flip rate on both datasets, indicating greater sensitivity to peer input. Weak Factors exhibit the lowest entropy and highest consensus rate on both datasets, suggesting more aligned responses.

5.1 Experiment Setting

We simulate a collaborative scenario involving five agents. Each agent first generates an initial response to a given question q using a high-temperature (τ = 1) setting to promote diversity. Then, each agent independently revises their response after being shown the answers of the other four agents. The following metrics are used:
•Flip Rate : Measures how often agents change their initial response.
•Entropy : Quantifies the diversity in the final responses, reflecting overall alignment or disagreement among agents.
•Consensus Rate : Indicates whether a unanimous consensus is reached.
•Accuracy : For factual tasks, this captures the collective correctness of the agents' final responses. If a unanimous agreement is not reached, we mark it as incorrect.

To assess the controllability of herd behavior, we compare the following conditions:
•Original : Baseline condition using agents' initial responses before any peer input.
•CoT : Extend the Original condition by adding chain-of-thought reasoning (Wei et al., 2022).
•Baseline : Baseline condition without peer persona; both presentation format and order are randomized.
•Strong Factors : Combines elements that amplify herding: a graduate degree persona, the Reason format, and showing disagreeing responses first.
•Weak Factors : Combines elements that dampen herding: peers have a high school diploma, use the Disc format, and show agreeing responses first.
•Strong Prompt : Uses the system prompt "Please be agreeable" to promote conformity, with random presentation format and order.
•Weak Prompt : Uses the system prompt "Please be stubborn" to encourage resistance to peer influence, with random presentation format and order.

5.2 Dataset

Diversity in initial responses is essential for studying herd behavior. To ensure this, we filtered for questions where the highest probability among original responses was less than 0.8, indicating sufficient variation across agents. To maintain a reasonable dataset size, we selected two benchmarks, MMLU-Pro and GlobalOpinionQA, sampling 500 questions from each. It is worth noting that this filtering process favors more contentious or ambiguous questions, which may increase the difficulty of the task.

5.3 Results

Effect of Strong vs. Weak Factors on Herding Table 3 summarizes the impact of different control conditions on herd behavior across two benchmarks. The Strong Factors condition yields the highest flip rate on both
datasets (0.63 on MMLU-Pro and 0.59 on GlobalOpinionQA), indicating that agents are more likely to revise their answers when exposed to highly persuasive peer input. This setting also leads to the highest group accuracy (0.29) on MMLU-Pro, even higher than CoT, suggesting that well-structured peer influence can improve collective performance on factual tasks. In contrast, the Weak Factors condition results in the lowest flip rate (0.36 and 0.23) and entropy (0.28 and 0.22), demonstrating more consistent and aligned final responses with reduced peer influence. Despite reduced herding, the consensus rate remains high, suggesting that consensus can still emerge even when agents are less swayed.

Limited Effect of Prompt-Based Control The Strong Prompt and Weak Prompt conditions show flip rates (0.55) and entropy levels (0.43) similar to the Baseline, indicating that prompt-level control has weaker effects compared to presentation factors, especially on the factual dataset (MMLU-Pro). While some effect is observed on the opinionated dataset, structural cues in peer presentation remain more effective in modulating herding.

6 Discussion

6.1 Understanding and Controlling Herd Behavior

Our findings reveal a nuanced picture of how herding emerges in multi-agent decision-making and the factors that modulate its intensity. Confidence alignment between the self and perceived peers plays a central role: herding is strongest when individuals feel uncertain while perceiving high confidence from peers. This dynamic is further shaped by social cues, where peer personas with higher status or domain relevance amplify conformity, particularly in objective tasks. However, this does not always lead to better outcomes; the drop in accuracy under the 2nd response type highlights the risks of misplaced trust.
Furthermore, while herding is often viewed negatively, our results demonstrate that under carefully designed conditions, such as the Strong Factors setting, peer input can enhance collective performance, suggesting that not all herding is detrimental.

Our study also underscores the importance of structural presentation in shaping social influence. Formats like Count and Ratio facilitate clear comparative reasoning, reducing flips when agreement is strong. Conversely, Reason increases overall flip rates, emphasizing the persuasive power of justifications. The order of information presentation also matters: leading with disagreement encourages greater conformity than leading with agreement. Interestingly, prompt-level interventions had minimal effect compared to structural changes. Together, these insights offer actionable strategies for both harnessing and regulating herding behavior in collaborative AI and human-AI systems.

6.2 (Ir)rationality in Agents

Our analysis reveals that agents often behave rationally in response to confidence signals and social cues. Flip rates align with the interplay of self and perceived peer confidence: agents are more likely to switch when their own confidence is low and peers appear confident. Similarly, agents respond predictably to peer personas, with higher flip rates for authoritative figures. In the Count and Ratio formats, flip behavior scales logically with the number of agreeing and disagreeing peers, suggesting quantitative reasoning based on social consensus.

However, we also observe deviations from rationality. Formats like List, Disc, and Reason break the expected trend, showing weaker links between peer agreement size and flip rates. Presentation order
also affects behavior, akin to first-impression bias, despite identical information. Moreover, prompt-based instructions have minimal effect compared to structural cues, indicating that agents are more influenced by framing than by explicit guidance. These findings point to bounded rationality shaped by presentation and context.

7 Related Works

Recent studies have explored the cognitive impacts and practical consequences of AI-driven systems, offering insights into how these technologies influence human reasoning and decision-making processes (Chen et al., 2024; Shaki et al., 2023). In parallel, research on the structural dynamics of language models has uncovered how architectural and training factors shape model behavior and outputs (Jumelet et al., 2024; Sinclair et al., 2022). Additionally, a growing body of work has examined prosocial forms of irrationality, such as herd behavior, highlighting how collective decision-making can deviate from individual rationality while serving social cohesion or group benefits (Liu et al., 2024b). However, these works have not thoroughly examined the underlying factors driving herd behavior or investigated the extent to which such behavior can be controlled.

8 Conclusion

This work presents a comprehensive analysis of herding behavior in multi-agent decision-making, revealing how confidence and presentation formats shape social influence. While agents often act rationally in response to structured cues, they remain vulnerable to framing effects and presentation biases. Our findings offer actionable insights for designing collaborative AI systems that balance influence and autonomy.

Limitations

While our study sheds light on the dynamics of herd behavior in LLM-based MAS, several limitations warrant discussion. First, our experimental setup is constrained to controlled decision-making scenarios using multiple-choice questions across six benchmarks.
Although these benchmarks span factual and opinionated domains, they may not fully capture the complexity and ambiguity of real-world collaborative tasks, such as open-ended discussions, multi-turn reasoning, or creative problem solving. The discrete nature of the response space may limit the generalizability of our findings to tasks requiring nuanced textual generation or longer context maintenance.

Second, we model perceived confidence and peer influence using static representations. These proxies may not capture the rich, dynamic interplay of trust, reputation, or credibility in more sophisticated agent interactions. Additionally, the absence of memory or learning mechanisms prevents agents from adapting their behavior over time, which could either dampen or exacerbate herd tendencies in longitudinal settings.

Third, our experiments involve agents from the same underlying language model architecture, which might limit behavioral diversity and obscure effects that could emerge from heterogeneous agents. Real-world MAS may involve agents with varying objectives, training data, or model sizes, introducing additional factors that could modulate conformity behaviors.

Finally, although we attempt to manipulate social influence through structured prompts and presentation formats, our findings on the weak efficacy of prompt-based controls suggest that LLMs may not reliably interpret meta-instructions in multi-agent settings. This points to a broader challenge in aligning emergent social behavior with high-level design intentions, particularly when using black-box models.

Future work could extend this research by incorporating more ecologically valid tasks, exploring
heterogeneous agent configurations, and integrating adaptive learning mechanisms to better simulate evolving social dynamics in collaborative AI systems.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Solomon E Asch. 2016. Effects of group pressure upon the modification and distortion of judgments. In Organizational Influence Processes, pages 295–303. Routledge.

Abhijit V Banerjee. 1992. A simple model of herd behavior. The Quarterly Journal of Economics, 107(3):797–817.

Dan Bang, Laurence Aitchison, Rani Moran, Santiago Herce Castanon, Banafsheh Rafiee, Ali Mahmoodi, Jennifer YF Lau, Peter E Latham, Bahador Bahrami, and Christopher Summerfield. 2017. Confidence matching in group decision-making. Nature Human Behaviour, 1(6):0117.

Sushil Bikhchandani, David Hirshleifer, and Ivo Welch. 1992. A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5):992–1026.

Nuo Chen, Jiqun Liu, Xiaoyu Dong, Qijiong Liu, Tetsuya Sakai, and Xiao-Ming Wu. 2024. AI can be cognitively biased: An exploratory study on threshold priming in LLM-based batch relevance assessment. In Proceedings of the 2024 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, pages 54–63.

Young-Min Cho, Raphael Shu, Nilaksh Das, Tamer Alkhouli, Yi-An Lai, Jason Cai, Monica Sunkara, and Yi Zhang. 2024. RoundTable: Investigating group decision-making mechanism in multi-agent collaboration. arXiv preprint arXiv:2411.07161.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. In Forty-first International Conference on Machine Learning.

Esin Durmus, Karina Nguyen, Thomas I Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, et al. 2023. Towards measuring the representation of subjective global opinions in language models. arXiv preprint arXiv:2306.16388.

Liye Fu, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. 2017. When confidence and competence collide: Effects on online decision-making discussions. In Proceedings of the 26th International Conference on World Wide Web, pages 1381–1390.

Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V Chawla, Olaf Wiest, and Xiangliang Zhang. 2024. Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680.

Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. 2023. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 3(4):6.

Jaap Jumelet, Willem Zuidema, and Arabella Sinclair. 2024. Do language models exhibit human-like structural priming effects? arXiv preprint arXiv:2406.04847.

Philippe Laban, Lidiya Murakhovs'ka, Caiming Xiong, and Chien-Sheng Wu. 2023. Are you sure? Challenging LLMs leads to performance drops in the FlipFlop experiment. arXiv preprint arXiv:2311.08596.

Tongxuan Liu, Xingyu Wang, Weizhe Huang, Wenjiang Xu,
Yuting Zeng, Lei Jiang, Hailong Yang, and Jing Li. 2024a. GroupDebate: Enhancing the efficiency of multi-agent debate using group discussion. arXiv preprint arXiv:2409.14051.

Xuan Liu, Jie Zhang, Haoyang Shang, Song Guo, Chengxu Yang, and Quanyan Zhu. 2024b. Exploring prosocial irrationality for LLM agents: A social cognition view. arXiv preprint arXiv:2405.14744.

Mehdi Moussaïd, Juliane E Kämmer, Pantelis P Analytis, and Hansjörg Neth. 2013. Social influence and the collective dynamics of opinion formation. PLoS ONE, 8(11):e78433.

Lev Muchnik, Sinan Aral, and Sean J Taylor. 2013. Social influence bias: A randomized experiment. Science, 341(6146):647–651.

OpenAI. 2023. GPT-4o-mini: Advancing cost-efficient intelligence. Accessed: 2024-08-18.

OpenAI. 2024. GPT-4.1. https://openai.com/index/gpt-4-1/. Accessed: 2025-05-20.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1–22.

Niccolò Pescetelli, Anna-Katharina Hauperich, and Nick Yeung. 2021. Confidence, advice seeking and changes of mind in decision making. Cognition, 215:104810.

Ramsey M Raafat, Nick Chater, and Chris Frith. 2009. Herding in humans. Trends in Cognitive Sciences, 13(10):420–428.

David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. 2024. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling.

Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. 2023. Whose opinions do language models reflect? In International Conference on Machine Learning, pages 29971–30004. PMLR.

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019.
SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.

Jonathan Shaki, Sarit Kraus, and Michael Wooldridge. 2023. Cognitive effects in large language models. In ECAI 2023, pages 2105–2112. IOS Press.

Arabella Sinclair, Jaap Jumelet, Willem Zuidema, and Raquel Fernández. 2022. Structural persistence in language models: Priming as a window into abstract language representations. Transactions of the Association for Computational Linguistics, 10:1031–1050.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Zhiyuan Weng, Guikun Chen, and Wenguan Wang. 2025. Do as we do, not as you think: The conformity of large language models. arXiv preprint arXiv:2501.13381.

Zengqing Wu and Takayuki Ito. 2025. The hidden strength of disagreement: Unraveling the consensus-diversity tradeoff in adaptive multi-agent systems. arXiv preprint arXiv:2502.16565.

Yijun Xiao and William Yang Wang. 2019. Quantifying uncertainties in natural language processing tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7322–7329.

Zelai Xu, Chao Yu, Fei Fang, Yu Wang,
and Yi Wu. 2023. Language agents with reinforcement learning for strategic play in the Werewolf game. arXiv preprint arXiv:2310.18940.

Paul Zarnoth and Janet A Sniezek. 1997. The social influence of confidence in group decision making. Journal of Experimental Social Psychology, 33(4):345–366.

Xiaochen Zhu, Caiqi Zhang, Tom Stafford, Nigel Collier, and Andreas Vlachos. 2024. Conformity in large language models. arXiv preprint arXiv:2410.12428.

A Different LLMs

In our experiments, we employ gpt-4o-mini-2024-07-18 (OpenAI, 2023), using all default settings except for the temperature parameter, which is set to 0 unless stated otherwise. In this section, we report flip rates across different peer conditions and datasets using a range of LLMs, including gpt-4o (2024-11-20), gpt-4o-mini (2024-07-18), gpt-4.1 (2025-04-14), gpt-4.1-mini (2025-04-14), and gpt-4.1-nano (2025-04-14) (Achiam et al., 2023; OpenAI, 2024). We compare models' responses when the perceived confidence occurs at the 2nd versus the last position. Table 5 summarizes flip rates across all models and datasets. Overall, responses positioned 2nd consistently exhibit higher flip rates than those in the last position, suggesting a greater susceptibility to herd behavior when confidence is perceived earlier.

B Prompt Details

In this section, we present the prompts we use in our experiment settings.

Below are the persona prompts used for peer education:

Peer Education
Graduate Degree: Agent A's highest degree is graduate degree. Agent A thinks the answer is {rj}.
College Degree: Agent A's highest degree is college degree. Agent A thinks the answer is {rj}.
High School Diploma: Agent A's highest degree is high school diploma. Agent A thinks the answer is {rj}.

Below are the persona prompts used for peer social hierarchy:

Peer Social Hierarchy
Employer: Your employer thinks the answer is {rj}.
Employee: Your employee thinks the answer is {rj}.

Below is the persona prompt used for domain expertise.
There are 14 domains in MMLU-Pro (law, computer science, history, health, philosophy, psychology, business, engineering, biology, chemistry, physics, math, economics, other), and 3 domains in GPQA-Diamond (Biology, Physics, Chemistry). While in-domain examples give the same domain as the given question, out-of-domain examples randomly pick one from the complement set.

Peer Domain Expertise
Agent A is an expert in {domain} domain. Agent A thinks the answer is {rj}.

Below are the prompts used for presentation methods:

Presentation Methods
Count: {agree_size} agent{plural} think{s} the answer is {rj}. And vice versa for disagreeing agents.
Ratio: Among {peer_size} agents, {agree_ratio}% think the answer is {rj}. And vice versa for disagreeing agents.
List: Agent {list_of_agree_agents} think the answer is {rj}. Agent {list_of_disagree_agents} think the answer is {rk}.
Disc: Agent A thinks the answer is {rj}. Agent B thinks the answer is {rk}. ...
Reason: Agent A thinks the answer is {rj}, because {reasonj}. Agent B thinks the answer is {rk}, because {reasonk}. ...

C Details of Datasets

MMLU-Pro and GPQA-Diamond are under the MIT license, ARC-Challenge is under the cc-by-sa-4.0 license, OpinionQA
and SOCIAL IQA are not under a license, and GlobalOpinionQA is under the cc-by-nc-sa-4.0 license. Our use of the datasets is consistent with the intended use. The datasets do not contain personally identifying information or offensive content. All the datasets are in English.

                     Factual                                                      Opinionated
Presentation Format  Avg. Flip Rate  ρ(I,|AA|)  ρ(I,|AD|)  ρ(I,|AA|−|AD|)        Avg. Flip Rate  ρ(I,|AA|)  ρ(I,|AD|)  ρ(I,|AA|−|AD|)
Count                0.22            -0.17      0.16       -0.23                 0.27            -0.36      0.31       -0.47
Ratio                0.22            -0.28      0.25       -0.37                 0.30            -0.41      0.38       -0.56
List                 0.21            -0.17      0.15       -0.23                 0.30            -0.22      0.22       -0.31
Disc                 0.21            -0.11      0.09       -0.14                 0.28            -0.24      0.23       -0.33
Reason               0.30            -0.05      0.05       -0.07                 0.30            -0.11      0.11       -0.15

Table 4: Average flip rate and Pearson r correlation between the flipping indicator I and the number of agreeing agents (|AA|), disagreeing agents (|AD|), or their difference (|AA| − |AD|), evaluated across various presentation formats and benchmark datasets. All Pearson r correlations are statistically significant (p < 0.001). Overall, herd behavior is strongest when presented with reasons. The difference between agreeing and disagreeing agents is the strongest predictor of herd behavior.

                    Factual                                    Opinionated
Method              MMLU-Pro  GPQA-Diamond  ARC-Challenge      OpinionQA  GlobalOpinionQA  SOCIAL IQA    Average
gpt-4o-mini_2nd     0.5*      0.59*         0.11*              0.64*      0.71*            0.16*         0.45*
gpt-4o-mini_last    0.25      0.42          0.06               0.54       0.59             0.06          0.32
gpt-4o_2nd          0.55*     0.70*         0.06*              0.52*      0.57*            0.26*         0.44*
gpt-4o_last         0.34      0.45          0.02               0.34       0.43             0.14          0.29
gpt-4.1_2nd         0.51*     0.63*         0.05*              0.66*      0.66*            0.22*         0.45*
gpt-4.1_last        0.23      0.35          0.02               0.49       0.54             0.10          0.29
gpt-4.1-mini_2nd    0.46*     0.49*         0.04*              0.4*       0.42*            0.2*          0.33*
gpt-4.1-mini_last   0.30      0.32          0.00               0.25       0.33             0.11          0.22
gpt-4.1-nano_2nd    0.62*     0.57*         0.18*              0.51*      0.62*            0.33*         0.47*
gpt-4.1-nano_last   0.46      0.39          0.12               0.42       0.52             0.16          0.34

Table 5: Flip rates for 2nd and last response types across different LLMs, used to assess the generalizability of perceived confidence effects on herd behavior.
Bolded values indicate the highest flip rate within each group, reflecting the greatest herd influence. Asterisks (*) mark statistically significant differences (p < 0.05) based on paired t-tests conducted within each group.

                     Factual
Peer Condition       MMLU-Pro  GPQA-Diamond  ARC-Challenge  Average
Original             0.45      0.35          0.93           0.49
1st                  0.45      0.34          0.92           0.49
2nd                  0.41*     0.30          0.90*          0.45*
rnd                  0.42      0.32          0.91           0.46
last                 0.43      0.35          0.91           0.48
Graduate Degree      0.40*     0.29          0.89           0.44*
College Degree       0.41      0.32          0.90           0.45
High School Diploma  0.41      0.31          0.90           0.45
Employer             0.40*     0.27          0.74*          0.44*
Employee             0.41      0.27          0.76           0.45
In-Domain            0.40*     0.29          -              0.40
Out-Of-Domain        0.41      0.29          -              0.40

Table 6: Accuracy of the revised response after receiving peer information. The first row, Original, represents the accuracy of the original response before receiving peer information. Bolded values represent the lowest accuracy within each group, and asterisks (*) denote statistical significance (p < 0.05) based on paired t-tests within each group.
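As a concrete reference for the group-level measures behind these tables, the following minimal sketch implements the metrics defined in Section 5.1 (flip rate, entropy, consensus rate, and the unanimity-based accuracy) under our reading of those definitions. It is an illustrative reconstruction, not the authors' code; the base-2 entropy and the function names are our assumptions.

```python
import math
from collections import Counter

def flip_rate(initial: list[str], revised: list[str]) -> float:
    """Fraction of agents whose revised answer differs from their initial answer."""
    return sum(a != b for a, b in zip(initial, revised)) / len(initial)

def response_entropy(revised: list[str]) -> float:
    """Shannon entropy (bits, assumed base) of the final response distribution."""
    n = len(revised)
    ent = -sum((c / n) * math.log2(c / n) for c in Counter(revised).values())
    return ent + 0.0  # normalize -0.0 to 0.0 for the degenerate one-answer case

def consensus(revised: list[str]) -> bool:
    """True if all agents give the same final answer (unanimous consensus)."""
    return len(set(revised)) == 1

def group_accuracy(revised: list[str], gold: str) -> bool:
    """Collective correctness: counted correct only under unanimous agreement."""
    return consensus(revised) and revised[0] == gold

initial = ["A", "B", "A", "C", "A"]
revised = ["A", "A", "A", "A", "A"]
print(flip_rate(initial, revised))                       # 2 of 5 agents flipped
print(round(response_entropy(["A", "A", "B", "B"]), 3))  # even two-way split
print(consensus(revised), group_accuracy(revised, "A"))
```

Averaging these per-question values over a benchmark would yield table entries of the kind reported above.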
arXiv:2505.21589v1 [cs.CV] 27 May 2025

Do you see what I see? An Ambiguous Optical Illusion Dataset exposing limitations of Explainable AI

Carina Newen, TU Dortmund University, Research Center Trustworthy Data Science and Security, carina.newen@cs.tu-dortmund.de
Luca Hinkamp, TU Dortmund University, Research Center Trustworthy Data Science and Security
Maria Ntonti, TU Dortmund University
Emmanuel Müller, TU Dortmund University, Research Center Trustworthy Data Science and Security, emmanuel.mueller@cs.tu-dortmund.de

Abstract

From uncertainty quantification to real-world object detection, we recognize the importance of machine learning algorithms, particularly in safety-critical domains such as autonomous driving or medical diagnostics. In machine learning, ambiguous data plays an important role in various machine learning domains. Optical illusions present a compelling area of study in this context, as they offer insight into the limitations of both human and machine perception. Despite this relevance, optical illusion datasets remain scarce. In this work, we introduce a novel dataset of optical illusions featuring intermingled animal pairs designed to evoke perceptual ambiguity. We identify generalizable visual concepts, particularly gaze direction and eye cues, as subtle yet impactful features that significantly influence model accuracy. By confronting models with perceptual ambiguity, our findings underscore the importance of concepts in visual learning and provide a foundation for studying bias and alignment between human and machine vision. To make this dataset useful for general purposes, we generate optical illusions systematically with different concepts discussed in our bias mitigation section. The dataset is accessible in Kaggle via Ambivision. Our source code can be found at https://github.com/KDD-OpenSource/Ambivision.git.
1 Introduction

The motivation for this particular optical illusion dataset stems from a novel problem in the explainability (XAI) domain. When we discuss explainable AI, uncovering the black-box nature of models, and visualizing the internal workings of a machine learner for image data, research has so far focused on highlighting the pixels that are most important or influential in the decision-making process Ribeiro et al. [2016], Selvaraju et al. [2017]. Other approaches highlight important regions Ribeiro et al. [2018] or target the most essential features Lundberg and Lee [2017b], Sundararajan et al. [2017]. However, these methods often fall short when confronted with perceptual ambiguity: situations where even human interpretation is uncertain.

Preprint. Under review.

Take a look at Figure 1. Depending on the viewer's perception of the eye's direction, the image may be interpreted as either a rabbit or a duck. If you were to use any current XAI algorithm to generate explanations of why it is a rabbit or a duck, the explanations could look exactly the same. They would keep the actual reason behind this optical illusion secret. This is because current XAI methods highlight important pixels or areas, which, in this case, are shared by both classes. This suggests that standard XAI methods are currently inadequate when semantic interpretation relies on abstract perceptual cues rather than pixel-level interpretation. Examples of this phenomenon on the provided dataset can be seen in Section 3. Clearly, our internal decision process goes beyond what we see: we resolve ambiguity by assigning direction and intent, and by anthropomorphizing Wan and Chen [2021]. One such so far neglected concept is the viewing direction of the animal, not just where we, as supervisors, look. However, we show that with one very small addition, we can erase the ambiguity of an image for humans and improve it for machine learners. We argue that the future of XAI lies in uncovering such concepts rather than highlighting pixels, which is the critical research gap we address in this paper.

Figure 1: In this image, you can see both a rabbit and a duck. Common XAI methods that highlight important pixels could output exactly the same explanation for either of those classes without improving human understanding of which class was chosen and why. This is a critical research gap in explanations: pixel highlighting is simply not enough. One way of distinguishing the two depends on the direction you consider the eyes to be looking in. While this is not the only way to approach the problem, we will highlight the usefulness of the gaze for the classification task in our evaluations.

In this paper, we introduce three major contributions. First, we expose limitations of existing XAI methodologies: highlighting pixels or areas alone, at least in the image domain, is not enough. Second, we introduce a new open-source optical illusion dataset featuring animal images labelled with bounding boxes, gaze, and viewing-direction annotations. Third, we demonstrate that integrating gaze direction and eye coordinates into the learning process improves model performance, even when all other aspects (architecture, epochs, learning rate, dataset structure, optimizer) remain the same. Our novel dataset is generated with sophisticated ChatGPT OpenAI [2024] models and considerable computational power. It incorporates the understanding of a generative AI model in generating optical illusions while achieving images that are difficult for humans.
However, we address how we mitigated potential biases in Section 6.1. Furthermore, the difficulty of generating convincing optical illusions makes it hard to provide large datasets drawn by human artists; we argue that a machine-generated set offers a valuable perspective on human vs. AI perceptual learning.

2 Related Work

Explainable AI, often shortened to XAI, is broadly considered to be split into two categories: transparency design and post-hoc explanations Xu et al. [2019]. We criticize that the current visualization methods in the image domain highlight pixels or areas of images to show the importance of specific features. Saliency-based methods, such as LIME Ribeiro et al. [2016], Grad-CAM Selvaraju et al. [2017], or Anchors Ribeiro et al. [2018], offer local explanations by attributing predictions to pixel importance or localized regions. The broader landscape of explainable AI encompasses countless methods Lundberg and Lee [2017a], from counterfactuals Mothilal et al. [2020], Antorán et al. [2020] and prototypes Nauta et al. [2023], Chen et al. [2018] to global explainability methods Setzu et al. [2021], Morichetta et al. [2019], Newen and Müller [2022]. We highlight the need for concept-based XAI in ambiguous settings. One type of concept-based XAI is based on predefined concepts and relies on human supervision Kim et al. [2018],
Bontempelli et al. [2022], Yeh et al. [2020], Goyal et al. [2019]. This is why the idea was extended to automatic concept-based extraction, which relies on segmentation strategies that then employ importance scores to dismiss outliers Fel et al. [2023a], Ghorbani et al. [2019], Zhang et al. [2021], Fel et al. [2023b]. However, this type of approach also comes with limitations: we argue that all of these approaches still rely on pixel-based segmentation logic. We apply ACE Ghorbani et al. [2019] to our dataset as a prominent representative of the field and show with examples that it also has problems extracting useful concepts. We demonstrate in this work that general concepts for whole domains exist, such as gaze and the eye, that help overall performance on the domain rather than just on a specific type of animal. These concepts were not spotted when applying ACE. In our experimental section (Section 5), as well as in further experiments in Appendix A, we show the improvement when those concepts are considered versus learning without them on our ambiguous dataset.

Figure 2: We feature here several examples of our dataset. For example, on the upper left side, a penguin can be seen hidden within a horse, depending on the direction we consider the animal to be looking. All of these examples have two animals distinguishable by the eye coordinate and the gaze vector, meaning they might be looking in the same direction, but their right eye (if more than one is visible) is positioned differently. That is why this is unique even in the case of the lion-and-eagle image, as the right eye of each is in a different position (image on the bottom right). For most of these images, however, the looking direction will also be distinguishable.
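To make the per-image annotation concrete, a record might pair two competing readings of one illusion, each with its own bounding box, right-eye coordinate, and gaze vector. The field names and values below are our own illustrative assumptions, not the dataset's published schema:

```python
import math

# Hypothetical annotation record for one ambiguous image (illustrative schema,
# not the dataset's actual file format).
annotation = {
    "image": "horse_penguin_0001.png",
    "readings": [
        {"label": "horse",   "bbox": [12, 8, 486, 470],   # [x_min, y_min, x_max, y_max]
         "right_eye": [210.0, 140.0], "gaze": [1.0, 0.0]},
        {"label": "penguin", "bbox": [30, 15, 300, 460],
         "right_eye": [215.0, 150.0], "gaze": [-0.6, 0.8]},
    ],
}

horse, penguin = annotation["readings"]
# The two readings share most pixels, but differ in right-eye position and
# gaze vector -- exactly the cues that disambiguate them.
assert horse["right_eye"] != penguin["right_eye"]
for reading in annotation["readings"]:
    assert math.isclose(math.hypot(*reading["gaze"]), 1.0)  # unit gaze vectors
```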
The goal of this dataset was to test whether the gaze and eye coordinates prove to be useful general concepts in ambiguous settings; beyond that, it can be used for the evaluation of XAI as a baseline for classification performance with optical illusions. More example pictures are included in Appendix A in Figure 11.

2.1 Improving classifier performances using additional key features such as gaze annotations

Apart from being beneficial in this specific example, gazes have already proven useful in enhancing object detection model performance. It is important to note that we still provide a unique angle: for example, Saab et al. [2019] take advantage of passively collected gaze information to decrease the number of training examples needed for the effective performance of a learner. In the literature, gaze annotations for humans often capture the direction of the person's gaze within the image. In contrast, for animals or inanimate objects, annotations tend to reflect points of interest identified by human observers, rather than attempting to label the gaze direction of the animal or object itself. We, however, focus on where the animal being classified is looking. Various works emphasize that using additional features other than the image itself can create more effective learners with less training data: Wang
et al. [2017] validate that gaze annotations can help improve the accuracy of classification approaches. However, both works focused on annotating what humans look at and deem important. Kellnhofer et al. [2019] use gaze annotations to increase the generalizability of their models compared to other benchmark datasets. Our work instead incorporates the direction in which the depicted animate being looks. Wang et al. [2017] evaluated their claims on two public datasets and published their own dataset in which food is annotated using important gaze points. The key difference between these ideas and our new contribution is that instead of focusing on saliency and repeating the concept of certain pixels or features that are most important in an image itself, we do not highlight where the person gazing at the image looks but where the animate being contained in the image is looking. We give an overview of related datasets and their contributions in Table 1 in the Appendix for further research context.

3 Limitations of current Explainable AI Algorithms

While current explainable AI algorithms do a phenomenal job at explaining attributions of models in general, our dataset is intentionally constructed to challenge local attribution methods by presenting overlapping or intermingled visual features from multiple animals. As we show below, the highlighted features often mark regions that do not belong to one of the animals specifically. In the following, we show instances where common XAI algorithms fail. We chose popular representatives from the saliency-based XAI area, such as Grad-CAM Selvaraju et al. [2017] and Integrated Gradients. We then present an example using a prototypical XAI method, namely PipNet Nauta et al. [2023], and do the same for a representative of the automatic concept-based XAI area, ACE Ghorbani et al. [2019].
We show that these methods fail to distinguish the two animals well due to their shared features. These limitations underscore a critical shortcoming of local attribution techniques and emphasize the need for concept-level reasoning.

Figure 3: Pixel-based attribution explanations like Grad-CAM struggle to distinguish between the intermingling areas of the two animals. We later show that adding a single feature significantly enhances performance on ambiguous data.

Next, we show the same for PipNet Nauta et al. [2023], a prototypical XAI algorithm. In Figure 5, we can see that the prototypes marked by the algorithm contain all kinds of examples from different animals, including eagle feathers (which were intentionally made to look similar to confuse the learner). One of the prototypes even includes tiger eyes, if you look closely. The explanation of PipNet marks both the cheetah fur and the eagle patterns because they admittedly look very similar. This proves our point, however: as humans, we draw an invisible boundary in our head between the cheetah and the eagle, because we think in concepts. The similarity of the fur does not matter to us. Furthermore, we then evaluated how concept-based explainable AI methods, such as ACE Ghorbani et al. [2019], perform in the
automatic detection of concepts in these ambiguous settings. While concept-based explanations in general require human-annotated concepts, ACE promises to automatically detect concepts using segmentation and clustering techniques on convolutional neural networks. When applying ACE to our dataset, it does not extract truly meaningful concepts. For example, looking at the bird class in Figure 6, ACE does not extract anything useful in the leftmost image; the middle one detected the dots on the bird wings, but this is not a meaningful feature in general. The rightmost image might, with effort, show the wings, but it also highlights something meaningless at the bottom of the image. Furthermore, ACE fails to find the two general concepts that we suggest work well: gaze direction and the eyes. We argue that this is because, again, ACE employs segmentation at a pixel level and derives concepts from there. This general setup is shared among automatic concept-detection methods Fel et al. [2023a], Zhang et al. [2021], Fel et al. [2023b]. We argue for going beyond pixel-level segmentation.

Figure 4: The same can be observed using, for example, Integrated Gradients: the attributions for the classes are very similar and often not very sensible. The explanation for "bear" clearly marks the rabbit's head in the picture. Both the rabbit and bear explanations include markings in the bear face area. Clearly, the model struggles to distinguish the two animals, and the explanations are limited in their meaningfulness and clarity.

Figure 5: In this image, we see prototypes extracted via PipNet Nauta et al. [2023] for the eagle class. PipNet also struggles to distinguish the cheetah fur and the eagle feathers. The darker the box, the more important it is for the classification. Again, we argue that this is due to the area-based rather than concept-based explanations, a clear limitation for ambiguous data.
Next to the original image, we see example concepts extracted that should show similar features.

4 Methodology

One of the contributions of this paper is to show the usefulness of generalizable concepts such as eye coordinates and gaze direction. The direction of the gaze is a problem statement often considered in various settings in the literature Mukherjee and Robertson [2015], Liu et al. [2019]. Overall, it has already been recognised as an important factor in social interactions Wang et al. [2021] or even in object detection itself Bâce et al. [2017]. The goal of this new dataset is to provide an ambiguous dataset baseline that might help evaluate future XAI algorithms on their ability to provide meaningful explanations on ambiguous data. A key distinction of this proposition, in contrast to existing ones, is that we track where the animal or human in the image is looking, not what the person viewing the image looks at. We consider gaze following as the gaze direction starting from the eye coordinate, taking the head tilt into account. The mathematical definition consists of the following components:

Figure 6: Example concepts extracted by ACE Ghorbani et al. [2019] for the bird class. We argue that ACE
cannot find abstract concepts, such as gaze direction, because it clusters segmentations on a pixel-based level. We argue that we are currently missing concept-based XAI that goes beyond the grouping of pixels.

• The Eye Position (e): a 2D point representing the eye's position in the plane, denoted as a vector [e_x, e_y] ∈ R².
• The Head Looking Direction (d): a unit vector representing the normalized direction in which the head is oriented, given as [d_x, d_y] ∈ R².

We can then define the gaze direction as g = e + α · d, with α ∈ R a scaling constant that ensures a set length. For explanation purposes, we aim to make it intuitively long enough to be well visible to the human eye; α is chosen based on the given image size. We normalize the gaze, ensuring a unified vector notation. For the sake of this evaluation methodology, we define that looking straight ahead is annotated as (0.0, 0.0). We always annotated the position of the right eye if two were present.

5 Experiments: Benchmarking the concepts of gaze direction and the eye on Ambivision, our dataset

Despite having literature supporting our claims that specific additional features help the learning process Saab et al. [2019], Wang et al. [2017], we wanted to provide an additional experimental evaluation of learning improvements when including the gaze vector. In this work, we focus not on the gaze of the observer (in contrast to the eye-tracking literature), but on the depicted gaze direction of the object in the image—e.g., which way the animal is looking. This distinction is critical, as it provides a novel perspective. For evaluation purposes, we downloaded several state-of-the-art pre-trained ImageNet classifiers, namely ResNet18, ResNet34, ResNet52, VGG13, and VGG16 He et al. [2015]. We fine-tuned the networks with learning rates 0.0001, 0.00001, and 0.000005 over various numbers of epochs.
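The gaze definition above, g = e + α·d, can be sketched in a few lines of Python. The helper name and the treatment of the (0.0, 0.0) straight-ahead convention are our own illustrative choices, not code from the dataset's repository:

```python
import math

def gaze_endpoint(eye, direction, alpha):
    """Compute g = e + alpha * d after normalizing d to unit length.

    eye: (e_x, e_y) eye position; direction: (d_x, d_y) head-looking direction;
    alpha: scaling constant (e.g. proportional to image size) so the drawn
    gaze vector is long enough to be clearly visible.
    """
    dx, dy = direction
    norm = math.hypot(dx, dy)
    if norm == 0.0:                      # (0.0, 0.0) encodes "looking straight ahead"
        return (eye[0], eye[1])
    dx, dy = dx / norm, dy / norm        # normalize to a unit vector
    return (eye[0] + alpha * dx, eye[1] + alpha * dy)

# Eye at (100, 80), head pointing right and slightly up, arrow length 50 px:
print(gaze_endpoint((100.0, 80.0), (3.0, -4.0), 50.0))  # approximately (130.0, 40.0)
```

Normalizing d before scaling keeps every annotated arrow the same length regardless of how the raw direction was recorded, which matches the paper's "unified vector notation" remark.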
For the direction, we included arrows in the image annotating the gaze direction in the training set, and validated while allowing either animal to be classified as correct. We obtained the same results when allowing only one animal to be classified as correct. Figure 7 shows the accuracies when allowing both classes. Despite allowing both classes as correct, in all cases, including the direction leads to significantly higher accuracies with otherwise identical hyperparameters. We include graphs with all tested learning rates in Appendix A. As a baseline check, we ran the same experiment while annotating random other areas in the image, to verify that the incorporated concept was meaningful. We also included one experiment testing how eye annotation alone performed on our dataset. Our experiments show that both gaze direction and eye annotation alone lead to meaningful improvements, raising accuracy rates by over 20% in a setting allowing up to 1000 classes. The results shown here were generated using the Adam optimizer Kingma and Ba [2017], because initial experiments revealed that it produced the best accuracy results in practice; Adam is known to be state-of-the-art in practice Choi [2019]. All experiments were performed on an NVIDIA A100-SXM4-80GB GPU. Calculating lower epochs took seconds, the overall plotting of all accuracies