text string | source string |
|---|---|
efuted by the data presented. [Updated Key Clues]: - Stroke volume remains unchanged during sauna bathing, despite increases in heart rate and cardiac output. - Sauna use is linked to reduced cardiovascular mortality (including presumably heart attack risk), improved mental h... | https://arxiv.org/abs/2505.21503v1
How does Alignment Enhance LLMs’ Multilingual Capabilities? A Language Neurons Perspective Shimao Zhang1†*, Zhejian Lai1*, Xiang Liu1*, Shuaijie She1, Xiao Liu2, Yeyun Gong2, Shujian Huang1‡, Jiajun Chen1 1National Key Laboratory for Novel Software Technology, Nanjing University 2Microsoft Research Asia {smzhang,laizj,liuxiang,shes... | https://arxiv.org/abs/2505.21505v1
into three parts: multilingual understanding, resolving tasks, and generating outputs in the target language. This three-stage inference workflow clearly demonstrates how LLMs leverage English as a pivot language to handle multilingualism using a unified pattern. Inspired by the neurobiological underpinnings of h... | https://arxiv.org/abs/2505.21505v1
LLMs’ multilingual performance by transferring the capabilities from high-resource languages to low-resource languages [Eronen et al., 2023, Zhao et al., 2024c,a, She et al., 2024], which efficiently and effectively improves the model performance in low-resource language scenarios. Furthermore, Zhang et al. [2024] firs... | https://arxiv.org/abs/2505.21505v1 |
et al., 2024] effectively and efficiently improves the LLMs’ multilingual performance. Additionally, it is important for us to understand and analyze the mechanism of LLMs’ multilingual capabilities and multilingual alignment. Moreover, some studies on the identification of the language-specific and language-agnos... | https://arxiv.org/abs/2505.21505v1
if N_i,j = 1, and as a language-related neuron if 1 < N_i,j < l. Both of them belong to language neurons. In contrast, a neuron is considered a language-agnostic neuron if it exhibits high activation probabilities across all l languages. Finally, given our focus on multilingual reasoning tasks, we select neurons exclusively bas... | https://arxiv.org/abs/2505.21505v1
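The excerpt above (arXiv:2505.21505) partitions neurons by the number of languages N_i,j in which they are activated. Below is a minimal sketch of how such a partition could be computed from per-language activation probabilities; this is our illustration, not the paper's released code, and the 0.5 threshold and all names are assumptions.

```python
# Illustrative sketch: classify neurons as language-specific, language-related, or
# language-agnostic from activation probabilities across l languages.
# The 0.5 activation threshold and all names here are assumptions, not the paper's code.
import numpy as np

def classify_neurons(act_prob: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """act_prob: (l, n) activation probabilities for l languages and n neurons."""
    l, n = act_prob.shape
    active = act_prob > threshold            # languages in which each neuron is active
    counts = active.sum(axis=0)              # N_j: number of activating languages per neuron
    labels = np.full(n, "inactive", dtype=object)
    labels[counts == 1] = "language-specific"
    labels[(counts > 1) & (counts < l)] = "language-related"
    labels[counts == l] = "language-agnostic"
    return labels

# toy example: 3 languages, 4 neurons
probs = np.array([[0.9, 0.1, 0.8, 0.7],
                  [0.2, 0.1, 0.9, 0.6],
                  [0.1, 0.1, 0.7, 0.1]])
print(classify_neurons(probs))
# ['language-specific' 'inactive' 'language-agnostic' 'language-related']
```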
languages. Experiments are conducted on both the base model and the aligned model, with results presented in Figure 2. We report the results of both language-specific neurons and language neurons. It can be found that whether deactivating language-specific neurons or all language neurons, the results consistently exhib... | https://arxiv.org/abs/2505.21505v1 |
[Figure 2 heatmap cell values and axis ticks omitted] (d) Aligned - Language Neurons. Figure 2: PPL changes of MistralMathOctopus on MGSM aft... | https://arxiv.org/abs/2505.21505v1
of neurons and the basic partitioning of the model. We further analyze the changes in different types of neurons before and after multilingual alignment. Based on the four functional stages in LLMs, we quantify the layer-wise changes (∆) in the number of different types of neurons. Figure 4 presents the results for... | https://arxiv.org/abs/2505.21505v1
neurons, which are applicable to only a single language. Meanwhile, during the alignment process, the model improves its understanding of task-relevant common knowledge. Therefore, the overall number of language-agnostic neurons also increases significantly. 4.6 Spontaneous Multilingual Alignment Analysis The importan... | https://arxiv.org/abs/2505.21505v1
Furthermore, based on this finding, we quantify the number of language neurons for English and non-English languages based on the MistralMathOctopus base model (Table 3). Our analysis reveals that English has significantly fewer neurons than other languages, both in terms of language-specific and language-related neuro... | https://arxiv.org/abs/2505.21505v1 |
preprint arXiv:2304.04675, 2023. Shimao Zhang, Changjiang Gao, Wenhao Zhu, Jiajun Chen, Xin Huang, Xue Han, Junlan Feng, Chao Deng, and Shujian Huang. Getting more from less: Large language models are good spontaneous multilingual learners. arXiv preprint arXiv:2405.13816, 2024. Minheng Ni, Haoyang Huang, Lin Su, Edw... | https://arxiv.org/abs/2505.21505v1
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023. Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang,... | https://arxiv.org/abs/2505.21505v1
for preference alignment, and the vLLM engine Kwon et al. [2023] for inference. B.1 MAPO We employ the officially released scripts5 to generate the preference data. For the alignment process, we set the learning rate to 1e-6 and the batch size to 16. LoRA is utilized to fine-tune the model with a LoRA rank of 64, a LoRA... | https://arxiv.org/abs/2505.21505v1
[Figure heatmap cell values omitted] ... | https://arxiv.org/abs/2505.21505v1
[Figure heatmap cell values omitted] ... | https://arxiv.org/abs/2505.21505v1
MGSM after deactivating language neurons. “Base” indicates the results of the base model. “Aligned” indicates the results of the aligned model. [Figure: layer-wise counts of Language-Specific / Language-Related / Language-Agnostic neurons; axis ticks omitted] (a) MistralMathOctopus on MSVAMP. ... | https://arxiv.org/abs/2505.21505v1
AITEE - Agentic Tutor for Electrical Engineering Christopher Knievel, Alexander Bernhardt, Christian Bernhardt Intelligent tutoring systems combined with large language models offer a promising approach to address students’ diverse needs and promote self-efficacious learning. While large language models possess good ... | https://arxiv.org/abs/2505.21582v1
and Information Technology, HTWG Hochschule Konstanz, University of Applied Sciences, Germany (email: {cknievel,abernhard,cbernhard}@htwg-konstanz.de). students lose their sense of self-efficacy when solving tasks independently due to excessive support and instead develop a dependency on the tutor [10, 11]. The applica... | https://arxiv.org/abs/2505.21582v1
identification of the electrical circuit as well as the graph-representation and the subsequent similarity measure are discussed. Four different large language models (LLMs) are evaluated in Chapter IV concerning their capabilities of understanding electrical circuits. Furthermore, the performance of all four LLMs is e... | https://arxiv.org/abs/2505.21582v1 |
or components to emphasize particular circuit characteristics. These pedagogically motivated layouts present unique challenges that existing connection recognition methods cannot easily address. Although Connected Component Analysis [19, 20] could be a potential solution, it is not well-suited for processing han... | https://arxiv.org/abs/2505.21582v1
For the calculation of a graph similarity, the graph neural network Φ, parameterized by the weights θ_Φ, maps each circuit c_i into an embedding space of d dimensions [25]: f_i = Φ(c_i; θ_Φ) (1), where f_i ∈ R^d is referred to as the feature rep... [Fig. 4: Graph representation of the exemplary circuit; node labels U1, R2, N2, R3, R5, R4, R6, N6, R1.] | https://arxiv.org/abs/2505.21582v1
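The excerpt above (arXiv:2505.21582) defines f_i = Φ(c_i; θ_Φ) as a d-dimensional embedding of circuit c_i produced by a graph neural network. A minimal sketch of such a mapping follows, assuming a simple two-layer mean-aggregation encoder with a mean-pooling readout; the shapes, weights, and function names are placeholders, not the paper's architecture.

```python
# Illustrative sketch: a tiny message-passing encoder Phi mapping a circuit graph
# (adjacency matrix + node features) to a d-dimensional embedding f_i.
# Layer count, mean-pooling readout, and all shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def phi(adj: np.ndarray, x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    h = np.tanh(a_norm @ x @ w1)              # first neighbor-aggregation layer
    h = np.tanh(a_norm @ h @ w2)              # second layer
    return h.mean(axis=0)                     # graph-level readout: f_i in R^d

# toy circuit graph: 4 nodes, 3 node features, embedding dimension d = 8
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
x = rng.normal(size=(4, 3))
w1, w2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 8))
f_i = phi(adj, x, w1, w2)
print(f_i.shape)   # (8,)
```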
line-loss metric that quantifies the geometric proximity between candidate connections and actual circuit paths. The line-loss computation consists of two steps: Initially, each inter-node connection k is discretized with N_b,k equid... [Fig. 5: Output of the object detection for the example circuit with the YOLOv8s model.] | https://arxiv.org/abs/2505.21582v1
effect individually before combining the results. This definition of circuit similarity forms the foundation for developing feature representations that can effectively capture these characteristics for comparison purposes. In order to develop an effective feature representation, we formulate a classification problem... | https://arxiv.org/abs/2505.21582v1
The feature representations of all three functions are consolidated into a unified vector ⃗f_m = [⃗f_c, ⃗f_s, ⃗f_b]. To ensure consistent scaling, the elements of ⃗f_m are normalized, constraining their sum to unity, and stored in ⃗f_m̄. 2) Similarity Measure The effectiveness of graph embeddings for circuit representation is ... | https://arxiv.org/abs/2505.21582v1
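The excerpt above describes concatenating three feature blocks into ⃗f_m and normalizing so the elements sum to one. A minimal sketch of that step; the numeric values are placeholders of ours, not data from the paper.

```python
# Illustrative sketch: build the unified feature vector f_m = [f_c, f_s, f_b]
# and normalize its elements so they sum to unity. The numeric values are placeholders.
import numpy as np

f_c = np.array([0.2, 0.5])            # first feature block (placeholder values)
f_s = np.array([1.0, 0.3, 0.7])       # second feature block (placeholder values)
f_b = np.array([0.4])                 # third feature block (placeholder values)

f_m = np.concatenate([f_c, f_s, f_b])     # unified vector
f_m_bar = f_m / f_m.sum()                 # elements now sum to 1
print(f_m_bar, f_m_bar.sum())
```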
same circuit as the reference description. A corresponding prompt example for the baseline approach is shown in Fig. 9. The baseline accuracy results are presented in the first column of Table IV. Accuracy is quantified as the ratio of the total points achieved to the maximum possible total points. The smallest model,... | https://arxiv.org/abs/2505.21582v1
Create a description of how the electric current flows through the circuit from the first pole of the source to the second. (This point can be ignored for circuits with multiple sources) 3. Create a list of sub-circuits, such as series circuits, parallel circuits, or delta/star connections. 4. Create a description of t... | https://arxiv.org/abs/2505.21582v1 |
damentals of Electrical Engineering, in which, among other topics, resistance networks with direct current are examined. One or two tasks for a subset of circuit classes from Table III, with several subtasks, are evaluated. In order to make a precise statement about the capabilities of the models in relation to the cor... | https://arxiv.org/abs/2505.21582v1 |
chunks. OpenAI’s text-embedding-ada-002-v2 model was used to create the embeddings. To identify semantically relevant content in the vector database, a circuit description must be added to the prompt alongside the student’s question (e.g., ”How do I calculate the current I3”). The three most similar chunks are returned... | https://arxiv.org/abs/2505.21582v1 |
to suggest tutor-level expertise. 3) Multi-Representation Indexing for Improved Retrieval A key consideration for RAG is the chunking strategy. Instead of segmenting content based on a fixed number of tokens, the teaching material is structured into clearly defined units. From a didactic perspective, a unit represents ... | https://arxiv.org/abs/2505.21582v1 |
convention deviated from that implied or explicitly stated in the query, resulting in incorrect calculations using the superposition method. C. Evaluation of Didactic Competence A main goal of AITEE is to generate didactically valuable responses. This necessitates that the tutor guides students towards solutions, rathe... | https://arxiv.org/abs/2505.21582v1 |
Baseline vs. w. Instructions: Llama 3.1 8B: 0/5, 4/5; 1/5, 4/5. Llama 3.1 70B: 0/5, 5/5; 3/5, 5/5. Llama 3.1 405B: 0/5, 5/5; 3/5, 5/5. Claude 3.5 Sonnet: 0/5, 5/5; 3/5, 5/5. TABLE VI: Evaluation of fostering learner autonomy and dialogue robustness for baseline models vs. models with instruction prompts. To address these li... | https://arxiv.org/abs/2505.21582v1
materials. Regarding didactic competence, initial evaluations revealed a tendency for LLMs to provide direct solutions, which hindered learner autonomy. However, implementing instruction prompts that explicitly guide the LLMs to adopt Socratic questioning techniques significantly improved the system’s ability to fo... | https://arxiv.org/abs/2505.21582v1
Self-Efficacy, and Fear of Failure Interactions with How Novices Use LLMs to Solve Programming Problems,” in Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, ser. ITiCSE 2024. New York, NY, USA: Association for Computing Machinery, Jul. 2024, pp. 276–282. [12] M. Negnevitsky, “... | https://arxiv.org/abs/2505.21582v1
Allen and T. Hospedales, “Analogies Explained: Towards Understanding Word Embeddings,” arXiv preprint arXiv:1901.09813, May 2019. [27] G. Jocher, J. Qiu, and A. Chaurasia, “Ultralytics YOLO,” Jan. 2023. [Online]. Available: https://github.com/ultralytics/ultralytics [28] A. Vijayakumar and S. Vairavasundaram, “YOLO-... | https://arxiv.org/abs/2505.21582v1
Augmented Generation,” International Journal on Natural Language Computing, vol. 13, no. 1, pp. 37–47, Feb. 2024. [42] L. Gao, X. Ma, J. Lin, and J. Callan, “Precise Zero-Shot Dense Retrieval without Relevance Labels,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1... | https://arxiv.org/abs/2505.21582v1
arXiv:2505.21584v1 [cs.LG] 27 May 2025 Fairness in Federated Learning: Fairness for Whom? Afaf Taik1,2, Khaoula Chehbouni1,3, Golnoosh Farnadi1,2,3 1Mila - Quebec AI Institute 2Université de Montréal 3McGill University afaf.taik@mila.quebec, khaoula.chehbouni@mila.quebec, farnadig@mila.quebec Abstract Fairness in ... | https://arxiv.org/abs/2505.21584v1
In this paper, we argue that much of the fairness in FL literature suffers from a critical abstraction error (Selbst et al. 2019). Fairness is often defined and operationalized in terms of narrow system-level metrics, such as minimizing performance variance across clients or allocating rewards based on contribution... | https://arxiv.org/abs/2505.21584v1
both provide data and are affected by the model in cross-device FL, whereas in cross-silo FL, institutions (e.g., banks, hospitals) operate as the clients, but use their models to make decisions impacting their customers. While FL is often introduced as a privacy-preserving alternative to centralized ML, its distri... | https://arxiv.org/abs/2505.21584v1
Clients operate under diverse legal, institutional, or social conditions that shape what data can be collected or shared. Scarcity, meanwhile, refers to limited availability of data and compute resources—especially among clients serving minority populations or operating under infrastructural constraints. Scarcity am... | https://arxiv.org/abs/2505.21584v1
regret refers to the difference between the reward already received by the data owner and what they are supposed to receive. •Participation fairness: Including under-represented and never-represented clients (Shi, Yu, and Leung 2024) in the training process. Client selection in FL (Cho, Wang, and Joshi 2022) is among ... | https://arxiv.org/abs/2505.21584v1
Figure 1: Fairness paradigms covered in our analysis. Annotation: For each paper, we identified (1) the motivations of the paper and if it came with real-world examples, (2) the fairness definitions and the considered stakeholders, (3) the proposed interventions and which stages of the FL lifecycle they focus on; and... | https://arxiv.org/abs/2505.21584v1
community. 4.2 Abstract Evaluation: Synthetic Clients vs Real Harms There is a significant mismatch between the concept that researchers aim to measure (fairness) and the measurement methods they use to evaluate it. This mismatch appears in two ways: the creation of synthetic clients focused solely on data heterogene... | https://arxiv.org/abs/2505.21584v1
to miss the real impact of these solutions. Additionally, clients are modelled as interchangeable units (i.e., similar resources, contexts), as the focus is predominantly on data heterogeneity. The abstraction of equally resourced, equally motivated, or equally impacted clients hides some of the inequalities fairness mechan... | https://arxiv.org/abs/2505.21584v1
may propagate unchecked, and the criteria used to assess fairness may fail to capture the outcomes that matter most. Our analysis reveals that few papers adopt a lifecycle perspective, which is essential to understanding how harms emerge and to designing effective and equitable interventions. Most interventions a... | https://arxiv.org/abs/2505.21584v1
within each silo? At the same time, institutions have competing interests. Supposing that the institutions are equally powerful, a so-called collaborative fairness mechanism may be considered to ensure that no institution intentionally degrades the performance for others. In this example, the fairness concerns are n... | https://arxiv.org/abs/2505.21584v1
inadequate proxies (Kleinberg et al. 2018)). Such biases can be amplified during local training, introducing learning bias. In the following, we focus on the different steps where there is more control and visibility, as well as biases that are specifically tied to FL’s collaborative nature. Problem Formulation De... | https://arxiv.org/abs/2505.21584v1
are not under the developers’ control, especially in cross-device FL, some of the biases and harms found in data collection are shifted to client selection. In cross-device FL, the ideal scenario would include full participation from all clients at each communication round. Nonetheless, clients are often offline, hav... | https://arxiv.org/abs/2505.21584v1
clients. The second decision is choosing the update weighting scheme. The first proposed and most adopted method (McMahan et al. 2017) requires weighting models based on the size of each client’s dataset. Such a technique amplifies aggregation bias by prioritizing dominant clients and suppressing minority contributi... | https://arxiv.org/abs/2505.21584v1
affected by the final model? What are the actual costs borne by clients? How would an incentive scheme shape participation and power within the federation? Without answers to these questions, incentive design risks introducing new harms—through collaborative bias where incentive mechanisms introduce misattributed... | https://arxiv.org/abs/2505.21584v1
introduce disparities and possible damages. In particular, the biases in the FL lifecycle translate into the following harms: •Quality of Service (QoS) Harms. These occur when the FL model performs unequally across clients. A typical cause is aggregation bias, where global model updates disproportionately reflect dat... | https://arxiv.org/abs/2505.21584v1
evolve to better address fairness and contextual needs. Following our analysis, we note some key takeaways for future work: 1. Fairness requires contextually grounded evaluation. The datasets most commonly used in FL fairness research are inherited from centralized ML and fail to reflect the complexity of collaborat... | https://arxiv.org/abs/2505.21584v1
challenges that merit separate examination. Second, the coverage of our annotations may be constrained by the chosen library and search criteria, although we believe the sample is representative of the current research landscape. Finally, despite our effort to systematize the annotations, the process inevitably invol... | https://arxiv.org/abs/2505.21584v1
Elish, M. C.; Gabriel, I.; and Mohamed, S. 2022. Power to the people? Opportunities and challenges for participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–8. Bonawitz, K. 2019. Towards federated learning at scale: System design. arXiv preprint ... | https://arxiv.org/abs/2505.21584v1
Learning Systems. CoRR, abs/2401.13366. ArXiv: 2401.13366. Goetz, J.; Malik, K.; Bui, D.; Moon, S.; Liu, H.; and Kumar, A. 2019. Active Federated Learning. ArXiv:1909.12641 [cs]. Green, B. 2021. The contestation of tech ethics: A sociotechnical approach to technology ethics in practice. Journal of Social Computin... | https://arxiv.org/abs/2505.21584v1
geneous Data Distributions. In IEEE International Conference on Communications, ICC 2024, Denver, CO, USA, June 9-13, 2024, 728–733. IEEE. Lim, W. Y. B.; Luong, N. C.; Hoang, D. T.; Jiao, Y.; Liang, Y.-C.; Yang, Q.; Niyato, D.; and Miao, C. 2020. Federated learning in mobile edge networks: A comprehensive surve... | https://arxiv.org/abs/2505.21584v1
definitions. Annual review of statistics and its application, 8(1): 141–163. Mohri, M.; Sivek, G.; and Suresh, A. T. 2019. Agnostic federated learning. In International conference on machine learning, 4615–4625. PMLR. Molamohammadi, M.; Taïk, A.; Le Roux, N.; and Farnadi, G. 2023. Unraveling the Interconnected Axe... | https://arxiv.org/abs/2505.21584v1
2019. Energy demand prediction with federated learning for electric vehicle networks. In 2019 IEEE global communications conference (GLOBECOM), 1–6. IEEE. Sattler, F.; Müller, K.-R.; and Samek, W. 2020. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. IEE... | https://arxiv.org/abs/2505.21584v1
R.; Liu, J.; Cai, T.; and Zheng, Z. 2024a. Libra: A Fairness-Guaranteed Framework for Semi-Asynchronous Federated Learning. In 44th IEEE International Conference on Distributed Computing Systems, ICDCS 2024, Jersey City, NJ, USA, July 23-26, 2024, 797–808. IEEE. Wang, L.; Zhu, T.; Zhou, W.; and Yu, P. S. 2024b. Lin... | https://arxiv.org/abs/2505.21584v1
Federated Learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES ’20, 393–399. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-7110-0. Yue, X.; Nouiehed, M.; and Kontar, R. A. 2021. GIFAIR-FL: An Approach for Group and Individual Fairness in Federated Learning... | https://arxiv.org/abs/2505.21584v1
CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning Bin Qin∗ qinbin21@mails.ucas.ac.cn Institute of Software, Chinese Academy of Sciences Beijing, China University of Chinese Academy of Sciences Beijing, China Qirui Ji∗ jiqirui2022@iscas.ac.cn Institute of Software, Chi... | https://arxiv.org/abs/2505.21587v1
//doi.org/10.5281/zenodo.15514849. 1 Introduction Graph Neural Networks (GNNs) [46] model pairwise interactions in non-Euclidean graph structures, such as user relationship modeling in social networks [14], protein-protein interactions [44] in biology, and chemical or molecular property prediction [54]. GNNs achiev... | https://arxiv.org/abs/2505.21587v1
introduce semantic redundancy, which we term Cellular Topological Redundancy. As depicted in Figure 1, we conduct exploratory experiments to validate this phenomenon by introducing a naive cellular complex contrastive learning paradigm1. Specifically, we observe that certain trimmed representations can indeed ... | https://arxiv.org/abs/2505.21587v1
justification from a causal perspective, showing that cellular topological redundancy acts as a confounder. 4) We conduct extensive experiments across various benchmarks to empirically validate the superior performance of CellCLAT. 2 Related Work 2.1 Topological Deep Learning TDL extends the paradigm of M... | https://arxiv.org/abs/2505.21587v1
Gaussian noise, offering a more efficient approach to generating augmented views. HTML [30] employs knowledge distillation by incorporating the learning of graph-level and subgraph-level topological isomorphism tasks into the objective function, thereby enhancing the performance of downstream tasks. Our method int... | https://arxiv.org/abs/2505.21587v1
the graph, as shown in Figure 2. Starting from the vertex set V, the 0-cells in X correspond to these vertices, i.e., V = X^(0). Next, by gluing the endpoints of line segments to these vertices, we obtain the 1-cells σ^(1)_α in X, which correspond to the edges E. Therefore, the graph G can be seen as the one-dime... | https://arxiv.org/abs/2505.21587v1
augmentation strategies for each dataset or performing a tedious search for suitable augmentation combinations, sometimes relying on expensive domain knowledge. These limitations become even more problematic when applied to cellular complex data structures. To address this, inspired by SimGRACE [56], we propose ge... | https://arxiv.org/abs/2505.21587v1
coupling cellular sparsification loss with contrastive learning task gradients through bi-level meta-learning. Cellular Trimming Scheduler Ψ(τ_α). For a given cellular complex X, the original distribution of 2-cells is denoted as P_C(X) ∈ [0,1]^{N_2}, which includes the entire set of 2-cells C^(2) = {τ^(2)_α}_{α=1}^{N_2} ... | https://arxiv.org/abs/2505.21587v1
The goal is to guide Ψ towards suppressing the gradient contributions of higher-order 2-cells that contain task-irrelevant information in the contrastive learning task loss. To achieve this, the cellular sparsification loss is computed with respect to the performance of f(·; Θ) and g(·; Φ), which is measured using the... | https://arxiv.org/abs/2505.21587v1
causal inference techniques [42, 43] to theoretically analyze the motivational experimental phenomena (Figure 1). By constructing a bottom-up structural causal model (SCM) [43] that underlies the participation of high-order cellular topology in the message-passing process, we demonstrate that redundant cellular top... | https://arxiv.org/abs/2505.21587v1
68.9±0.1, 68.4±0.3, 43.2±0.3, 6.4; CellCLAT: 70.4±0.5, 71.8±0.1, 69.5±0.3, 68.5±0.4, 44.0±0.3, 1.8. Table 2: Semi-supervised representation learning classification accuracy (%) on TU datasets. Dataset: NCI1, PROTEINS, MUTAG, NCI109, IMDB-B, IMDB-M. SimGRACE: 79.1±0.4, 75.3±0.1, 89.0±1.3, 78.4±0.4, 71.3±0.8, 49.1±0.8. CellCL: 79.3±0.3, 7... | https://arxiv.org/abs/2505.21587v1
1-cells (edges), and 2-cells (polygons), respectively. Interestingly, trimming 0-cell representations (CellCL w/ 0-CellTrim) and 1-cell representations (CellCL w/ 1-CellTrim) leads to performance degradation ... | https://arxiv.org/abs/2505.21587v1
for understanding the higher-order topological structures on downstream tasks. Our approach offers the first self-supervised method capable of learning semantic-rich representations from cellular complexes. The promising results of our framework highlight its potential for more expressive modeling of complex data r... | https://arxiv.org/abs/2505.21587v1
learning through the lens of expressivity. arXiv preprint arXiv:2408.05486 (2024). [14] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph neural networks for social recommendation. In The World Wide Web Conference. 417–426. [15] Rui Ferreira, Roberto Grossi, Romeo Rizzi, Gustavo ... | https://arxiv.org/abs/2505.21587v1
(2018). [33] Shikun Liu, Andrew Davison, and Edward Johns. 2019. Self-supervised generalisation with meta auxiliary learning. Advances in Neural Information Processing Systems 32 (2019). [34] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random... | https://arxiv.org/abs/2505.21587v1
nti, Series 2, 9 (1968), 12–16. [52] John HC Whitehead. 1949. Combinatorial homotopy I. Bull. Amer. Math. Soc 55, 3 (1949), 213–245. [53] Hanrui Wu, Andy Yip, Jinyi Long, Jia Zhang, and Michael K Ng. 2023. Simplicial complex neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023). [54] Zh... | https://arxiv.org/abs/2505.21587v1 |
if and only if c(X_1) = c(X_2). In a more intuitive sense, this definition implies that the number of cells with dimension n and a given color in X_1 equals the number of cells with the same color and dimension n in X_2. Color equivalence reflects some structural similarities between the two complexes but does not imply isom... | https://arxiv.org/abs/2505.21587v1
and let two CR models a and b satisfy a ⪯ b. If a(X_1) ≠ a(X_2), then b(X_1) ≠ b(X_2). Proof: By Lemma 1, when we traverse the sets of cells of different dimensions in X_1 and X_2, we obtain that if a(X_1) = {a_{X_1}(σ) : ∀σ ∈ X_1} ≠ {a_{X_2}(τ) : ∀τ ∈ X_2} = a(X_2), then b(X_1) = {b_{X_1}(σ) : ∀σ ∈ X_1} ≠ {b_{X_2}(τ) : ∀τ ∈ X_2} = b(X_2). ◻ Theorem 2 states that for a pair of non... | https://arxiv.org/abs/2505.21587v1
a^{t+1}(σ) = a^{t+1}(τ), we establish that a^{t+1} ⪯ b^{t+1}. By the inductive hypothesis, we have proven that a^t ⪯ b^t. Further leveraging transitivity and Theorem 2, we conclude that c^t ⪯ b^t. ◻ A.2 Causal Background Knowledge Structural Causal Models and Intervention. A structural causal model (SCM) [42] is a triple M = ⟨X... | https://arxiv.org/abs/2505.21587v1
learning and semi-supervised learning were obtained using a single NVIDIA V100 GPU with 32 GB of memory. We performed the experiments on Ubuntu 20.04 as our operating system. B.3 Additional Hyper-parameter Analysis Batch size. Figure 8(a) shows the classification accuracy of our models after training twenty epochs usin... | https://arxiv.org/abs/2505.21587v1
arXiv:2505.21588v1 [cs.MA] 27 May 2025 Herd Behavior: Investigating Peer Influence in LLM-based Multi-Agent Systems Young-Min Cho, Sharath Chandra Guntuku, Lyle Ungar University of Pennsylvania jch0@seas.upenn.edu Abstract Recent advancements in Large Language Models (LLMs) have enabled the emergence of multi-agent sy... | https://arxiv.org/abs/2505.21588v1
failures (Wu and Ito, 2025; Zhu et al., 2024). Understanding when herd behavior is beneficial and when it is detrimental is essential for building trustworthy, adaptive, and resilient multi-agent LLM systems. However, the mechanisms underlying the emergence of herd behavior, as well as the factors that modulate its... | https://arxiv.org/abs/2505.21588v1
Definition 1: Confidence. Following the works of Xiao and Wang 2019, we define an agent’s confidence (preference) in its response to a question as the probability assigned by the generation distribution P(r|C). Since the responses are fixed categorical choices, we treat each r as a single-token label, and define th... | https://arxiv.org/abs/2505.21588v1
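The excerpt above (arXiv:2505.21588) defines an agent's confidence as the probability the generation distribution assigns to its single-token answer label. Below is a minimal sketch of how such a quantity could be read off a causal LM's next-token distribution; the "gpt2" model, the prompt, and the helper name are placeholders, not the paper's setup.

```python
# Illustrative sketch: confidence P(r | C) for single-token choice labels, taken from
# the model's next-token distribution and renormalized over the candidate choices.
# "gpt2" and the prompt below are placeholders, not the models or prompts used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def confidence(context: str, choices: list[str]) -> dict[str, float]:
    ids = tok(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]                    # next-token logits
    choice_ids = [tok.encode(" " + c)[0] for c in choices]   # assume one token per label
    probs = torch.softmax(logits[choice_ids], dim=0)
    return dict(zip(choices, probs.tolist()))

print(confidence("Question: Which option is larger, A) 3 or B) 4? Answer:", ["A", "B"]))
```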
ensures that each agent interacts with only one peer, allowing for clearer attribution of behavioral changes. From the agent’s original distribution P(r|q) over possible responses to question q, we manually select one of four types of responses to serve as the peer’s opinion r_j: •1st: The most probable response, which ... | https://arxiv.org/abs/2505.21588v1
a clear pattern: flip rates are highest when self-confidence is low and perceived peer confidence is high, indicating stronger herd behavior in such conditions. As self-confidence increases, individuals become less likely to switch their answers, even when peers appear confident. Conversely, when perceived confi... | https://arxiv.org/abs/2505.21588v1
role of information format in affecting herd behavior, we conduct a series of experiments to assess how factors such as the number of agreeing or disagreeing agents, presentation methods, and the presentation order affect the magnitude of herd behavior. 4.1 Experiment Setting We extended the experimental design fr... | https://arxiv.org/abs/2505.21588v1
the average flip rate as a function of the number of agreeing and disagreeing agents. The choice of presentation format significantly influences herding behavior. In the Count and Ratio formats, the heatmaps reveal a distinct separation into upper and lower triangles along the diagonal where the number of agreeing and di... | https://arxiv.org/abs/2505.21588v1
0.18, 0.44, 0.29, 0.69; Strong Factors: 0.63*, 0.43, 0.54, 0.29*, 0.59*, 0.49, 0.46; Weak Factors: 0.36, 0.28*, 0.69*, 0.16, 0.23, 0.22*, 0.76*; Strong Prompt: 0.55, 0.43, 0.57, 0.17, 0.44, 0.29, 0.69; Weak Prompt: 0.55, 0.43, 0.56, 0.18, 0.46, 0.36, 0.61. Table 3: The effects of different control conditions on herd behaviors across factual and opinionat... | https://arxiv.org/abs/2505.21588v1
datasets (0.63 on MMLU-Pro and 0.59 on GlobalOpinionQA), indicating that agents are more likely to revise their answers when exposed to highly persuasive peer input. This setting also leads to the highest group accuracy (0.29) on MMLU-Pro, even higher than CoT, suggesting that well-structured peer influence can impr... | https://arxiv.org/abs/2505.21588v1
also affects behavior, akin to first-impression bias, despite identical information. Moreover, prompt-based instructions have minimal effect compared to structural cues, indicating that agents are more influenced by framing than by explicit guidance. These findings point to bounded rationality shaped by presentation ... | https://arxiv.org/abs/2505.21588v1
heterogeneous agent configurations, and integrating adaptive learning mechanisms to better simulate evolving social dynamics in collaborative AI systems. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Ana... | https://arxiv.org/abs/2505.21588v1
Yuting Zeng, Lei Jiang, Hailong Yang, and Jing Li. 2024a. GroupDebate: Enhancing the efficiency of multi-agent debate using group discussion. arXiv preprint arXiv:2409.14051. Xuan Liu, Jie Zhang, Haoyang Shang, Song Guo, Chengxu Yang, and Quanyan Zhu. 2024b. Exploring prosocial irrationality for LLM agents: A social... | https://arxiv.org/abs/2505.21588v1
and Yi Wu. 2023. Language agents with reinforcement learning for strategic play in the werewolf game. arXiv preprint arXiv:2310.18940. Paul Zarnoth and Janet A Sniezek. 1997. The social influence of confidence in group decision making. Journal of Experimental Social Psychology, 33(4):345–366. Xiaochen Zhu, Caiq... | https://arxiv.org/abs/2505.21588v1
and SOCIAL IQA is not under a license, and GlobalOpinionQA is under a cc-by-nc-sa-4.0 license. Our use of the datasets is consistent with the intended use. The datasets do not contain personally identifying information or offensive content. All the datasets are in English. Factual Opinionated Presentation Format Avg. Flip Rate ρ(I... | https://arxiv.org/abs/2505.21588v1
arXiv:2505.21589v1 [cs.CV] 27 May 2025 Do you see what I see? An Ambiguous Optical Illusion Dataset exposing limitations of Explainable AI Carina Newen TU Dortmund University, Research Center Trustworthy Data Science and Security carina.newen@cs.tu-dortmund.de Luca Hinkamp TU Dortmund University, Research Center Trustw... | https://arxiv.org/abs/2505.21589v1
concepts rather than highlighting pixels, which is the critical research gap we address in this paper. [...] of the eye’s direction, the image may be interpreted as either a rabbit or a duck. If you were to use any current XAI algorithm to generate explanations of why it is a rabbit or a duck, the explanations could look exa... | https://arxiv.org/abs/2505.21589v1
Bontempelli et al. [2022], Yeh et al. [2020], Goyal et al. [2019]. This is why this was extended to automatic concept-based extraction, which relies on segmentation strategies that then employ importance scores to dismiss outliers Fel et al. [2023a], Ghorbani et al. [2019], Zhang et al. [2021], Fel et al. [2023b]. Howe... | https://arxiv.org/abs/2505.21589v1 |
et al. [2017] validate that gaze annotations can help improve the accuracies of classification approaches. However, both authors focused on annotating what humans look at and deem important. Kellnhofer et al. [2019] use gaze annotations to increase the generalizability of their models compared to other benchmark datase... | https://arxiv.org/abs/2505.21589v1 |
automatic detection of concepts in these ambiguous settings. While concept-based explanations in general require human-annotated concepts, ACE promises to automatically detect concepts using segmentation and clustering techniques based on convolutional neural networks. When applying ACE to our dataset, it does not extract... | https://arxiv.org/abs/2505.21589v1
cannot find abstract concepts, such as gaze direction, because it clusters segmentations on a pixel-based level. We argue that we are currently missing concept-based XAI that goes beyond the grouping of pixels. •The Eye Position (e): A 2D point representing the eye’s position in the plane, denoted as a vector [e_x, e_y]... | https://arxiv.org/abs/2505.21589v1