each 2D slice, and reconstructed the segmentations into a 3D volume using the method described in (Imran et al., 2024). The final prostate gland volume (in mL) was computed from the reconstructed 3D model.

2.1.3. Slice-level labeling for model training

Model training required slice-level annotations indicating the presence or absence of csPCa on each 2D micro-ultrasound (micro-US) slice. For csPCa-negative patients, all slices were labeled as negative. For csPCa-positive patients, operator-recorded needle trajectories were used to cognitively map each biopsy core to the corresponding region on the pre-biopsy micro-US scan. An expert urologist (WGB) manually reviewed slices surrounding these trajectories to assess csPCa extent, using sonographic features defined in the PRI-MUS protocol (Maffei et al., 2024). Slices with suspicious features were labeled as positive. All other slices in csPCa-positive cases were excluded from training, as their cancer status could not be confidently determined in the absence of histopathological confirmation. In total, 2,062 positive and 14,769 negative slices were included. Model evaluation was conducted at the patient level, using biopsy-confirmed csPCa status to reflect clinically meaningful outcomes.

2.2. Model development and evaluation

2.2.1. Micro-US image feature extraction

We trained a convolutional autoencoder with self-supervised learning to extract high-level features from micro-US images. The autoencoder (Figure 1) consists of two symmetric components: a convolutional encoder gϕ and a decoder fϕ. The encoder compresses the input image x into a lower-dimensional latent representation z, while the decoder attempts to reconstruct the original image from z. The encoder includes five convolutional layers with increasing channel dimensions (from 3 to 256), interleaved with ReLU activations and strided convolutions for spatial downsampling.
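A minimal PyTorch sketch of such an autoencoder. The exact channel schedule, kernel sizes, and input resolution below are assumptions; the paper specifies only the 3-to-256 channel growth with ReLU activations and strided downsampling, a mirrored transposed-convolution decoder trained with MSE, and adaptive average pooling to a 256-dimensional feature vector:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Five-layer strided-conv encoder (3 -> 256 channels) with a mirrored
    transposed-conv decoder; trained with an MSE reconstruction loss."""
    def __init__(self):
        super().__init__()
        chans = [3, 16, 32, 64, 128, 256]  # assumed channel progression
        enc = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            enc += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        rev = chans[::-1]
        dec = []
        for c_in, c_out in zip(rev[:-1], rev[1:]):
            dec += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU()]
        dec[-1] = nn.Sigmoid()  # final activation instead of ReLU
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)          # latent feature map
        return self.decoder(z), z    # reconstruction and latent

def extract_features(model, x):
    """Frozen-encoder feature extraction: feature map -> 256-d vector."""
    with torch.no_grad():
        z = model.encoder(x)
    return nn.functional.adaptive_avg_pool2d(z, 1).flatten(1)

model = ConvAutoencoder()
x = torch.randn(2, 3, 224, 224)              # two dummy micro-US slices
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)      # training objective
feats = extract_features(model, x)           # reconstructions: (2, 3, 224, 224); features: (2, 256)
```

After training, only `model.encoder` and the pooling step would be kept; the pooled 256-dimensional vectors are what feed the downstream random forest classifiers.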
The decoder mirrors this structure with transposed convolutions and corresponding upsampling layers to produce the reconstructed image. The autoencoder was trained to minimize the mean squared error between the input image x and the reconstructed output x′. After training, we used the encoder as a fixed feature extractor. Each 2D micro-US slice was passed through the encoder to generate a feature map, which was then reduced via adaptive average pooling to obtain a 256-dimensional feature vector.

Table 1: Baseline characteristics of our study cohort. Values are medians with interquartile ranges (IQRs) or counts with percentages.

Characteristic                          Positive Cases      Negative Cases
Age (yr), median (IQR)                  70 (66–74)          69 (63–71)
PSA (ng/ml), median (IQR)               8.2 (5.8–13.1)      5.7 (3.3–7.5)
DRE = 1, n (%)                          39 (49.4%)          6 (9.1%)
DRE = 0, n (%)                          40 (50.6%)          60 (90.9%)
Prostate volume (ml), median (IQR)      37.5 (31.5–49.4)    47.1 (39.0–55.4)
Total                                   79                  66

Figure 1: Architecture of the convolutional autoencoder used for feature extraction.

2.2.2. Micro-US image classification

We trained random forest classifiers to classify 2D micro-US slices as csPCa-positive or negative using the 256-dimensional feature vectors extracted by the autoencoder's encoder, which captured key imaging characteristics such as texture, intensity, and shape. Slice-level predictions were aggregated to produce patient-level classifications. A patient was considered csPCa-positive if at least eight consecutive slices were predicted positive. This rule was based on retrospective analysis of lesion length, which showed that csPCa typically spanned
https://arxiv.org/abs/2505.21355v1
eight adjacent slices on average. Requiring spatially contiguous predictions helped reduce false positives and improved specificity without compromising sensitivity.

2.2.3. Classification with clinical biomarkers

To assess the predictive value of commonly used screening tools, we trained a random forest model using only non-imaging features: patient age, PSA level, prostate volume, and the binary outcome of DRE. Since the scikit-learn implementation supports internal out-of-bag (OOB) validation, a separate validation set was not required for hyperparameter tuning.

2.2.4. Cross-validation strategy

We used five-fold cross-validation to ensure robust and unbiased model evaluation. The dataset, consisting of 79 csPCa-positive and 66 csPCa-negative patients, was partitioned into five mutually exclusive folds. Each patient appeared exactly once in the test set, once in the validation set, and in the training set for the remaining three folds. This approach ensured that all cases contributed to both training and evaluation while preventing data leakage. For each fold, training was monitored on the validation set to prevent overfitting, and the model checkpoint with the lowest validation loss was retained.

2.2.5. Implementation details

All autoencoder models were implemented in PyTorch (v1.13) and trained on an NVIDIA A100 GPU using the Adam optimizer (learning rate = 0.001, batch size = 32). After training, the decoder was discarded and the encoder was used as a fixed feature extractor. Each random forest model was trained with 1,000 trees, class-balanced weights, and stratified sampling to preserve the distribution of positive and negative slices. Records with missing clinical values or duplicate patient entries were removed to ensure data integrity.

2.2.6. Performance metrics

We evaluated model performance at the patient level using the following metrics: area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, precision, and F1-score. All metrics were averaged across the five cross-validation folds. AUROC served as the primary evaluation metric, as it captures overall discriminatory performance across all classification thresholds. The remaining metrics were computed using a fixed probability threshold of 0.15, which was empirically selected to balance sensitivity and specificity in the training data.

3. Results

3.1. ROC-based comparison of imaging and clinical models

Figure 2 shows the ROC curves comparing the performance of the imaging-based model and the clinical biomarker model. The clinical model, trained on age, PSA, DRE, and prostate volume, achieved a mean AUROC of 0.753, indicating moderate discriminative ability. In contrast, the imaging-based model achieved a higher AUROC of 0.871, reflecting a stronger ability to distinguish csPCa from non-csPCa based on deep features extracted from micro-US slices.

3.2. Comparison of classification metrics

Table 2 summarizes the average classification metrics across five cross-validation folds.

Figure 2: ROC curves comparing the imaging-based model and clinical biomarker-based model. The imaging model achieved a higher AUROC (0.871) than the clinical model (0.753).

The clinical model achieved high sensitivity (96.2%) but low specificity (27.3%), resulting in a high false-positive rate. Its precision (61.4%), F1-score (74.9%), and accuracy (64.8%) were moderate. In contrast, the imaging model maintained high sensitivity (92.5%) while substantially improving specificity (68.1%). It
also achieved higher precision (77.8%), F1-score (84.5%), and accuracy (81.4%), demonstrating a better overall balance between true-positive detection and false-positive control.

Table 2: Threshold-based classification metrics (averaged over five folds) using a fixed decision threshold of 0.15.

Model      Sensitivity   Specificity   Accuracy   Precision   F1-Score
Clinical   96.2%         27.3%         64.8%      61.4%       74.9%
Micro-US   92.5%         68.1%         81.4%      77.8%       84.5%

These results highlight that the AI-based imaging model offers a more favorable trade-off between sensitivity and specificity, reducing unnecessary false positives while maintaining robust detection of clinically significant disease. This supports its potential as a more accurate and efficient screening tool for csPCa.

4. Discussion

This study demonstrates that an AI-enhanced micro-US model can significantly improve prostate cancer screening performance compared to traditional biomarker-based approaches. While the clinical model, which incorporated PSA, DRE, prostate volume, and age, achieved high sensitivity (96.2%), it exhibited poor specificity (27.3%), consistent with well-documented limitations of PSA-based screening (Thompson et al., 2004; Schröder et al., 1998). In contrast, the imaging-based model maintained high sensitivity (92.5%) while substantially improving specificity (68.1%), resulting in better overall accuracy and precision. This improved balance is particularly important in a screening context, where reducing false positives can lower the burden of unnecessary biopsies, overtreatment, and patient anxiety.

Micro-US has been proposed as a lower-cost, point-of-care alternative to multiparametric MRI for prostate cancer detection (Klotz et al., 2020; Lughezzani et al., 2019). Prior studies, including the OPTIMUM randomized trial, have shown that micro-US is non-inferior to MRI for csPCa detection in biopsy settings (Kinnaird et al., 2025).
However, its broader use in screening has been limited by high inter-operator variability and a steep learning curve (Zhou et al., 2024). Our results suggest that artificial intelligence can help overcome these barriers by enabling consistent, objective interpretation of micro-US images. The self-supervised autoencoder used in this study learned imaging features correlated with csPCa, which were then aggregated using a slice-level prediction framework to produce patient-level classifications. This approach effectively eliminates reliance on subjective interpretation and may help standardize micro-US for widespread clinical adoption.

Previous models, such as TRUSformer (Gilany et al., 2023) and TRUSWorthy (Harmanani et al., 2025), have focused on patch-level classification using weak labels, limiting their clinical interpretability. In contrast, our model aligned individual micro-US slices with biopsy-confirmed pathology and used an empirically chosen aggregation rule to generate patient-level predictions. This method mirrors clinical decision-making and supports actionable, real-time screening decisions. The improved diagnostic performance, particularly in specificity and F1-score, underscores the potential of AI-enhanced micro-US to function as a frontline screening modality that complements or even outperforms existing PSA- and DRE-based strategies.

Despite encouraging results, this study has limitations. The cohort was drawn from a single academic center and consisted of patients already referred for biopsy, which may introduce selection bias. As such, performance in general screening populations remains to be validated. While five-fold cross-validation was used to minimize overfitting, external validation on independent cohorts is necessary to assess generalizability. Additionally, the
threshold for patient-level classification (eight consecutive positive slices) was empirically defined and may require adjustment in future prospective settings. Future work should include multicenter clinical trials, decision-curve analysis, and cost-effectiveness studies to evaluate the real-world impact of integrating AI-enhanced micro-US into routine prostate cancer screening. If validated, this approach could provide a scalable, affordable, and interpretable solution for early detection of clinically significant prostate cancer, bridging the gap between low-specificity biomarker screening and high-cost MRI-based diagnostics.

5. Conclusions

This study indicates that AI-augmented micro-ultrasound can outperform traditional PSA- and DRE-based methods for screening clinically significant prostate cancer. These results highlight the potential of micro-US, when interpreted by AI, to serve as a point-of-care screening tool that detects clinically significant cancers more accurately and reduces unnecessary biopsies. If validated in prospective multi-center settings, AI-enhanced micro-US could transform early prostate cancer detection by enabling more precise, accessible, and cost-effective screening, ultimately improving patient outcomes while minimizing harm.

Financial disclosures: Wei Shao certifies that all conflicts of interest, including specific financial interests and relationships and affiliations relevant to the subject matter or materials discussed in the manuscript (e.g., employment/affiliation, grants or funding, consultancies, honoraria, stock ownership or options, expert testimony, royalties, or patents filed, received, or pending), are the following: None.

Funding/Support and role of the sponsor: This work was supported by the Department of Medicine and the Intelligent Clinical Care Center at the University of Florida College of Medicine.
The authors express their sincere gratitude to the NVIDIA AI Technology Center at the University of Florida for their invaluable feedback, technical guidance, and support throughout this project.

References

Ahmed, H.U., Bosaily, A.E.S., Brown, L.C., Gabe, R., Kaplan, R., Parmar, M.K., Collaco-Moraes, Y., Ward, K., Hindley, R.G., Freeman, A., et al., 2017. Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): a paired validating confirmatory study. The Lancet 389, 815–822.

Gilany, M., Wilson, P., Perera-Ortega, A., Jamzad, A., To, M.N.N., Fooladgar, F., Wodlinger, B., Abolmaesumi, P., Mousavi, P., 2023. TRUSformer: improving prostate cancer detection from micro-ultrasound using attention and self-supervision. International Journal of Computer Assisted Radiology and Surgery 18, 1193–1200.

Harmanani, M., Wilson, P.F., To, M.N.N., Gilany, M., Jamzad, A., Fooladgar, F., Wodlinger, B., Abolmaesumi, P., Mousavi, P., 2025. TRUSWorthy: toward clinically applicable deep learning for confident detection of prostate cancer in micro-ultrasound. International Journal of Computer Assisted Radiology and Surgery, 1–9.

Imran, M., Nguyen, B., Pensa, J., Falzarano, S.M., Sisk, A.E., Liang, M., DiBianco, J.M., Su, L.M., Zhou, Y., Joseph, J.P., et al., 2024. Image registration of in vivo micro-ultrasound and ex vivo pseudo-whole mount histopathology images of the prostate: A proof-of-concept study. Biomedical Signal Processing and Control 96, 106657.

James, N.D., Tannock, I., N'Dow, J., Feng, F., Gillessen, S., Ali, S.A., Trujillo, B., Al-Lazikani, B., Attard, G., Bray, F., et al., 2024. The Lancet Commission on prostate cancer: planning for the surge in cases. The Lancet 403, 1683–1722.

Jiang, H., Imran, M., Muralidharan, P.,
Patel, A., Pensa, J., Liang, M., Benidir, T., Grajo, J.R., Joseph, J.P., Terry, R., et al., 2024. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Computerized Medical Imaging and Graphics 112, 102326.

Kinnaird, A., Luger, F., Cash, H., Ghai, S., Urdaneta-Salegui, L.F., Pavlovich, C.P., Brito, J., Shore, N.D., Struck, J.P., Schostak, M., et al., 2025. Microultrasonography-guided vs MRI-guided biopsy for prostate cancer diagnosis: The OPTIMUM randomized clinical trial. JAMA.

Klotz, L., Lughezzani, G., Maffei, D., Sánchez, A., Pereira, J.G., Staerman, F., Cash, H., Luger, F., Lopez, L., Sanchez-Salas, R., et al., 2020. Comparison of micro-ultrasound and multiparametric magnetic resonance imaging for prostate cancer: A multicenter, prospective analysis. Canadian Urological Association Journal 15, E11.

Lughezzani, G., Saita, A., Lazzeri, M., Paciotti, M., Maffei, D., Lista, G., Hurle, R., Buffi, N.M., Guazzoni, G., Casale, P., 2019. Comparison of the diagnostic accuracy of micro-ultrasound and magnetic resonance imaging/ultrasound fusion targeted biopsies for the diagnosis of clinically significant prostate cancer. European Urology Oncology 2, 329–332.

Maffei, D., Avolio, P.P., Moretto, S., Piccolini, A., Aljoulani, M., Dagnino, F., De Carne, F., Fasulo, V., Marco, P., Saita, A.R., et al., 2024. MP49-15 Evaluating the role of PRI-MUS protocol in identifying clinically significant prostate cancer: A high-volume experience on microultrasound. Journal of Urology 211, e788.

Schröder, F.H., Kruger, A.B., Rietbergen, J., Kranse, R., Maas, P.v.d., Beemsterboer, P., Hoedemaeker, R., 1998. Evaluation of the digital rectal examination as a screening test for prostate cancer. Journal of the National Cancer Institute 90, 1817–1823.

American Cancer Society, 2025. Cancer Facts & Figures 2025. Atlanta: American Cancer Society.
Thompson, I.M., Pauler, D.K., Goodman, P.J., Tangen, C.M., Lucia, M.S., Parnes, H.L., Minasian, L.M., Ford, L.G., Lippman, S.M., Crawford, E.D., et al., 2004. Prevalence of prostate cancer among men with a prostate-specific antigen level ≤4.0 ng per milliliter. New England Journal of Medicine 350, 2239–2246.

Zhou, S.R., Choi, M.H., Vesal, S., Kinnaird, A., Brisbane, W.G., Lughezzani, G., Maffei, D., Fasulo, V., Albers, P., Zhang, L., et al., 2024. Inter-reader agreement for prostate cancer detection using micro-ultrasound: a multi-institutional study. European Urology Open Science 66, 93–100.
arXiv:2505.21362v1 [cs.CL] 27 May 2025

Evaluating LLM Adaptation to Sociodemographic Factors: User Profile vs. Dialogue History

Qishuai Zhong1, Zongmin Li1, Siqi Fan2, Aixin Sun1
1 Nanyang Technological University, Singapore
2 University of Electronic Science and Technology of China, Chengdu, China

Abstract

Effective engagement by large language models (LLMs) requires adapting responses to users' sociodemographic characteristics, such as age, occupation, and education level. While many real-world applications leverage dialogue history for contextualization, existing evaluations of LLMs' behavioral adaptation often focus on single-turn prompts. In this paper, we propose a framework to evaluate LLM adaptation when attributes are introduced either (1) explicitly via user profiles in the prompt or (2) implicitly through multi-turn dialogue history. We assess the consistency of model behavior across these modalities. Using a multi-agent pipeline, we construct a synthetic dataset pairing dialogue histories with distinct user profiles and employ questions from the Value Survey Module (VSM 2013) (Hofstede and Hofstede, 2016) to probe value expression. Our findings indicate that most models adjust their expressed values in response to demographic changes, particularly in age and education level, but consistency varies. Models with stronger reasoning capabilities demonstrate greater alignment, indicating the importance of reasoning in robust sociodemographic adaptation.

1 Introduction

Large Language Models (LLMs) and their chatbot applications have garnered significant attention since the release of ChatGPT (OpenAI et al., 2024; Dam et al., 2024). Numerous open-source frameworks (Wolf et al., 2020; Kwon et al., 2023) now enable organizations to deploy LLMs on-premises across various domains.
For AI service providers, hosted models are expected to align responses appropriately with users' sociodemographic characteristics, as well as their cultural and ethical preferences, to enhance engagement and interaction quality (Sicilia et al., 2024; Altenburger et al., 2024).

Figure 1: We evaluate whether the model can adjust response values according to identical user attributes presented in different formats, and assess the consistency across these formats.

Unlike humans, who tend to maintain consistent values, ethics, and social norms across contexts, studies have shown that LLMs exhibit variability in their expressed values, which are learned from human-generated training data and shaped by contextual cues (Kharchenko et al., 2024; Kovač et al., 2023). This variability presents a risk: LLMs may inadvertently perpetuate harmful stereotypes, such as labeling Generation Z as “Digital Addicts” (Twenge, 2017). To mitigate such issues
and foster user trust, LLMs should dynamically tailor their responses to reflect user expectations, a capability we refer to as behavioral adaptation.

Sociodemographic attributes of user profiles (e.g., age, education, occupation, nationality) are strongly correlated with cultural norms and values related to family, authority, and social behavior (Fung et al., 2016; Lomazzi and Seddig, 2020; Gelfand and Raver, 2011). Recent work has explored value alignment between LLMs and user profiles (Yao et al., 2024; Zhang et al., 2024; Sukiennik et al., 2025). However, these studies largely focus on single-turn inputs where user profiles are explicitly provided in the prompt (see Figure 1, top). This leaves a gap in understanding whether LLMs maintain behavioral consistency when profiles are instead inferred implicitly through dialogue (see Figure 1, bottom).

We identify two key challenges in such dialogue-based evaluation: (i) Can LLMs accurately infer demographic attributes from chat history? (ii) If so, can they adapt their responses accordingly?

Dialogue history provides essential context for identifying user traits in real-world applications (Dam et al., 2024). Prior work has developed datasets to evaluate this capability. In our study, we leverage the FaithfulPersonaChat benchmark (Jandaghi et al., 2023) to assess persona recognition in Llama3.1-8B-Instruct as a representative LLM (see Appendix A). Our findings confirm that the model can reliably infer at least one persona attribute from multi-turn dialogue, partially addressing challenge (i).

Building on this, we propose a novel evaluation framework to quantify how LLMs adapt their value expression when presented with demographic information either explicitly (via user profile) or implicitly (via dialogue), to address challenge (ii). However, existing benchmarks lack dialogues annotated with demographic attributes, which are crucial for controlled comparisons.
To address this, we introduce an agent-based generation pipeline that constructs an evaluation dataset with aligned sociodemographic attributes across both input formats. In summary, our contributions are threefold:

• We introduce an evaluation framework that assesses LLM value adaptation across two input formats, (i) single-turn prompts with explicit user profiles, and (ii) multi-turn dialogues where profiles are embedded implicitly, and measures consistency across both.

• We present a novel, agent-based dataset construction method for generating profile-aligned dialogue data.

• We evaluate multiple open-source LLMs using the Value Survey Module (VSM 2013) (Hofstede and Hofstede, 2016) to measure value expression.

Our experiments show that most models adjust their expressed values in response to demographic changes, especially in age and education level. Moreover, the degree of value adjustment is positively correlated with the magnitude of attribute change. However, consistency across input formats varies by model. Smaller models exhibit greater variability, while larger models with stronger reasoning capabilities show better alignment across formats. Notably, reasoning-augmented models like QwQ-32B (Qwen et al., 2025) achieve the highest consistency, underscoring the critical role of reasoning in robust sociodemographic adaptation.

2 Literature Review

This study examines LLM behavior adaptation in multi-turn human-model interactions and assesses value consistency across profile-presentation conditions. Given its intersection with persona attribute extraction and cultural value alignment in LLMs, we survey
related work in both fields.

2.1 Persona Attributes Understanding

Evaluations of language models' understanding of persona attributes typically center on two tasks: next-utterance prediction and persona expansion. Standard benchmarks such as PersonaChat (Zhang et al., 2018), RealPersonaChat (Yamashita et al., 2023), and FaithfulPersonaChat (Jandaghi et al., 2023) provide dialogues annotated with descriptive persona statements (e.g., “I'm a pet lover”) for these tasks. Other efforts, like Pchatbot (Qian et al., 2021), compile large-scale Chinese dialogues from Weibo and judicial forums but lack explicit demographic mappings. LiveChat (Gao et al., 2023) augments live-stream conversations with streamer personas that include demographic attributes, yet this information serves only as auxiliary context for next-utterance prediction.

Despite these resources, most datasets consist of human–human dialogues embedding persona descriptions rather than demographic profiles, hindering controlled analysis of LLM adaptation to quantifiable attributes. For example, grouping by age is straightforward, but contrasting more abstract traits, such as “running enthusiast” versus “someone who lost a dog”, yields unreliable comparisons. To overcome this limitation, we construct a synthetic dataset specifically tailored for rigorous evaluation of sociodemographic adaptation.

2.2 Evaluating Values of Models

Several studies have evaluated LLMs on how they express social and cultural values in response to different prompts. A common approach involves using research instruments like Hofstede's Value Survey Module (VSM) (Hofstede and Hofstede, 2016), which has been applied in prior work (Kharchenko et al., 2024; Arora et al., 2023; Masoud et al., 2024) to assess whether models align their responses with cultural contexts. Despite differences in methodology, findings consistently show that LLMs adjust their value expressions based on contextual cues.
Other studies have constructed evaluation datasets based on the World Values Survey (Haerpfer et al., 2020), including GlobalOpinionQA (Durmus et al., 2024) and WorldValueBench (Zhao et al., 2024). The former also incorporates value questions from Pew (https://www.pewresearch.org/), finding that most LLMs tend to favor Western perspectives. The latter focuses on evaluating models' awareness of demographic contexts, revealing that even advanced LLMs struggle to capture the nuances of multicultural value systems.

Recent studies have investigated model values within specific contexts. BiasLens (Li et al., 2024) systematically examines social biases in LLMs through role-playing scenarios, while Moore et al. (2024) evaluate value consistency across prompt variations, revealing generally stable model outputs. Liu et al. (2025) introduce a benchmark dataset to assess LLMs' ability to infer implicit cultural values from natural conversational contexts, emphasizing the challenges of nuanced attitude detection and open-ended cultural reasoning.

Unlike prior studies, we analyze LLMs' value expression patterns using multi-turn human–model interactions, which better reflect real-world chatbot inputs. We also evaluate behavioral consistency when sociodemographic attributes are provided explicitly versus implicitly within the dialogue.

3 Research Targets

We define behavior adaptation capability as a model's ability to adjust response values and tone in accordance with users' sociodemographic attributes. A key focus of our study is to examine whether this adaptation remains consistent when the same demographic information is supplied explicitly in a single-turn prompt versus implicitly via earlier dialogue (see Figure 1).
We use the Value Survey Module (VSM 2013) (Hofstede and Hofstede, 2016), grounded in Hofstede's Cultural Dimensions Theory (Gerlach and Eriksson, 2021), to quantify cultural values. This questionnaire features multiple-choice items on workplace dynamics and decision-making, each with five options (IDs 1–5). From the original 24 items, we select an 18-question subset Q, omitting emotional and health-related items. We apply this survey to evaluate model behavior adaptation across the experimental scenarios below.

Figure 2: Dataset generation framework architecture. Each iteration: (i) the user_simulator LLM is queried to generate a question simulating the user's perspective based on their profile, (ii) the out-of-context detector validates the question to ensure consistency with the user's profile, and (iii) the qa_llm responds to the question.

Behavior Adaptation to User Profile (BA_user): This scenario evaluates whether models can adjust their responses based on an explicit user profile consisting of sociodemographic attributes presented in the context, e.g., “Answer questions based on the given user profile: age: 23, job title: data scientist, gender: male, education: Bachelor's degree.”

Behavior Adaptation to Dialogue History (BA_dialogue): Instead of relying on explicit attributes, models are tested on their ability to infer and adapt from dialogue history (Gupta et al., 2024). This scenario mimics real-world interactions, where the model must interpret user intent and context from prior exchanges.

Consistency Across Profile and Dialogue History (Consistency): Beyond behavioral adaptation, our framework also evaluates whether models maintain behavioral consistency when processing equivalent user attributes presented in different representational formats.
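To make the two input formats concrete, the inputs for the BA_user and BA_dialogue scenarios might be assembled as follows. The helper functions and exact templates are hypothetical; only the profile-in-prompt example string is taken from the text above:

```python
def build_ba_user_prompt(profile, question):
    """Explicit condition: sociodemographic attributes stated directly in the prompt."""
    attrs = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (f"Answer questions based on the given user profile: {attrs}\n"
            f"Question: {question}")

def build_ba_dialogue_prompt(history, question):
    """Implicit condition: the same attributes must be inferred from prior turns,
    which are prepended to the latest question."""
    turns = "\n".join(f"User: {t['user']}\nAssistant: {t['assistant']}"
                      for t in history)
    return f"{turns}\nUser: {question}"

# Hypothetical profile and dialogue carrying equivalent attributes
profile = {"age": 23, "job title": "data scientist", "gender": "male",
           "education": "Bachelor's degree"}
history = [{"user": "I'm a 23-year-old data scientist with a Bachelor's degree. "
                    "Should I pursue a Master's?",
            "assistant": "It depends on your goals..."}]
question = "How important is it to have enough time for personal or home life?"

ba_user_input = build_ba_user_prompt(profile, question)
ba_dialogue_input = build_ba_dialogue_prompt(history, question)
```

Consistency is then a matter of comparing the model's answers to the same VSM question under `ba_user_input` versus `ba_dialogue_input`.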
Specifically, we expect models to respond similarly to the value survey when the same demographic attributes are provided through explicit user profiles or implicitly inferred from dialogue history.

4 Dialogue Dataset Generation

While BA_user assessment is straightforward, evaluating BA_dialogue and Consistency is hindered by the absence of datasets meeting two key criteria: (1) human–model dialogues with organically embedded demographic attributes, and (2) explicit mappings between each dialogue and its corresponding user profile.

Drawing inspiration from prior works (Abdullin et al., 2024; Chen et al., 2024), we design a multi-agent workflow to generate synthetic, career-advice dialogues from user profiles sourced from a curated simulated dataset. Each dialogue maps to a unique profile, embedding demographic attributes (age, education, occupation, and nationality) within contextually grounded interactions. This career-focused domain aligns with our value assessment framework, enhancing interpretability in downstream evaluations.

The dialogue is generated iteratively by the workflow under the supervision of a generation controller, as illustrated in Figure 2. The controller orchestrates three key LLM components:

User Simulator (user_simulator): We employ Gpt-4o-2024-08-06 (OpenAI et al., 2024) to emulate a user seeking career advice via question–answer (QA) interactions with an LLM. Each query generated by the simulator is guided by: (1) user demographic attributes for personalization, (2) instructions and preceding dialogue, guiding the generation of contextually relevant queries, and (3) predefined conversation objectives and termination criteria. The simulator ends the dialogue once the specified termination
condition is satisfied.

Out-of-Context Detector (ooc_detector): We employ Gpt-4o-mini-2024-07-18 (OpenAI et al., 2024) to validate the questions generated by the user simulator. It ensures that each question aligns with the user's profile and maintains consistent first-person framing. If inconsistencies are detected, the ooc_detector directly revises the question.

Question Answering LLM (qa_llm): The LLM responds to the simulated user's queries with its default configuration to ensure natural interactions. To replicate real-world chatbot behavior, past dialogue history is always prepended to the latest user question, following standard practices for context injection in human-LLM communication.

Each generation loop terminates when the user simulator decides to conclude the dialogue or the maximum iteration limit (max_runs) is reached. More details on the prompt design for each component are provided in Figure 6 (Appendix B), and the complete conversation generation procedure is outlined in Algorithm 1 (Appendix C). In total, 1000 dialogue sets, denoted by D, are generated (https://github.com/FerdinandZhong/model_behavior_adaption), each mapped to a unique user from the seed dataset (https://www.kaggle.com/datasets/ravindrasinghrana/employeedataset/data). The full user set is denoted by U.

Table 1: Overall ratings for generated dialogues by the LLM judge and 50 human-rated samples. Despite stricter human judgments, high scores across all four dimensions confirm dialogue quality.

Dimensions              LLM Judge   Human
Attribute Coverage      4.14        3.64
Attribute Correctness   4.76        4.97
Question Diversity      4.52        4.65
Relevance               4.63        4.26

4.1 Dataset Evaluation

To assess the quality of the generated dataset, we conduct both human and LLM-based evaluations. LLM evaluation follows the widely adopted “LLM-as-a-judge” methodology (Zheng et al., 2023; Gu et al., 2025). Only questions generated by the user_simulator are assessed, as the qa_llm role-plays as itself to respond to these simulated queries.
Assessments are conducted across four dimensions:

Attribute Coverage. The number of demographic attributes explicitly mentioned (up to 5).

Attribute Correctness. The number of correctly referenced demographic values. For example, if the user's age, gender, and job title are mentioned, but an incorrect age is used, the score for Attribute Coverage is 3 and for Attribute Correctness is 2.

Question Diversity. The variety of topics covered, reflecting the simulator's ability to generate distinct, contextually rich questions. For example, if four questions are generated but all focus solely on short-term career advice, then the score is 1.

Relevance. The extent to which the questions remain contextually appropriate for career advice and align with the qa_llm's prior responses to maintain coherent conversational flow.

For human evaluation, three postgraduate annotators score the generated questions on a 0-to-5 scale, with higher values indicating better quality. All annotators independently assess the same subset of 50 randomly selected samples. For automated evaluation, we use gpt-4o-2024-08-06 as the judge to score all samples. Identical scoring guidelines are supplied to both human and LLM raters (Appendix D), in accordance with best practices from Leng (2023). To reduce variability, the LLM judge evaluates each sample with 10 different random seeds, and we report the average score.

³ https://github.com/FerdinandZhong/model_behavior_adaption

We assess alignment between average human and LLM judge ratings on
the shared subset using the Pearson correlation coefficient (Freedman et al., 2007) and the two-way mixed-effects intraclass correlation coefficient, ICC(3, k) (Shrout and Fleiss, 1979). We omit Fleiss' Kappa due to its sensitivity to category prevalence in our skewed data (Hoehler, 2000). Results (Appendix E) demonstrate strong concordance between human and automated evaluations. Summary statistics for all four evaluation dimensions are presented in Table 1, confirming the high quality of the generated dataset.

5 Behavior Adaptation Evaluation

Using our synthetic dataset, we assess LLM behavior adaptation via two scenarios.

In the BA_user scenario, each time a model is queried with a VSM question q_j ∈ Q and an explicit user profile u ∈ U (where q_j denotes the j-th question), it must return a selected_option_id (1–5) corresponding to the question's option_ids—the IDs of available choices—along with a justification and the log probability distribution over the option_ids. We denote this composite response by r_u^j.

In the BA_dialogue scenario, each time a model is presented with a synthetic dialogue d ∈ D followed by a VSM question q_j, it generates a response r_d^j under the same requirements. The complete sets of responses {r_u^j} and {r_d^j} for each model are denoted by R_U and R_D, respectively. The querying workflow is illustrated in Figure 3.

5.1 Distance Definition

The core component of each response r is a normalized probability distribution P over all possible option_ids. This distribution is computed from the model's log probability outputs (see Appendix G for details). For instance, if a response r has a selected option_id of 2, the distribution might be [0.1, 0.7, 0.05, 0.0, 0.15], where each value represents the relative likelihood of the corresponding option_id. After normalization, the total probability sums to 1.
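One plausible way to obtain such a normalized distribution from raw option log probabilities is a numerically stable softmax. This is a sketch under that assumption (the authors' exact procedure is given in their Appendix G); the logprob values below are hypothetical:

```python
import math

def normalize_logprobs(option_logprobs):
    """Softmax over the log probabilities of the 5 option_ids,
    yielding a distribution that sums to 1."""
    m = max(option_logprobs)
    exps = [math.exp(lp - m) for lp in option_logprobs]  # stable softmax
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logprobs for option_ids 1..5
P = normalize_logprobs([-2.3, -0.36, -3.0, -9.2, -1.9])
print([round(p, 2) for p in P])  # -> [0.1, 0.7, 0.05, 0.0, 0.15]
```

With these illustrative inputs, the result rounds to the example distribution above, with option_id 2 as the selected ID.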
We quantify the unit divergence between two responses r by calculating the Jensen–Shannon divergence (JSD) between their corresponding probability distributions P. That is, JSD(P_u^j ‖ P_{u'}^j) denotes the divergence between r_u^j and r_{u'}^j, where u and u' refer to two distinct user profiles.

To assess BA_user and BA_dialogue, both U and D are partitioned into groups g based on attributes including age, education, occupation, and nationality. For each group and question index j, we compute the Jensen–Shannon centroid c^j—the distribution minimizing the total JSD to all group responses P_i^j (Nielsen, 2020). Starting from the mean distribution P̄^j, the centroid is obtained by

    c^j = argmin_c Σ_{i=1}^{n} JSD(c ‖ P_i^j),

where P_i^j is the i-th response distribution for question j in a group of size n. The overall divergence between two groups g and g' is then defined as:

    Distance(g, g') = (1/|J|) Σ_{j=1}^{|J|} JSD(c_g^j ‖ c_{g'}^j).

This centroid-based approach helps mitigate the effects of outliers and uneven group sizes.

We establish baseline values for both BA_user and BA_dialogue using a consistent methodology. To illustrate, consider the baseline for BA_user evaluation. After grouping R_U by sociodemographic attribute and computing a centroid for each group, we define the global centroid as the Jensen–Shannon centroid of all responses in R_U. The baseline is then calculated as the average distance between each group centroid and the global centroid. Comparing this baseline to the inter-group divergences allows us to
quantify the model's sensitivity to demographic variation. For example, when grouping R_U by age, the divergence between the "<30" and ">60" cohorts should noticeably exceed the baseline, while the divergence between "<30" and "30–40" should remain below it—illustrating that greater age gaps drive greater variation in generated responses.

To evaluate the Consistency scenario, we compare the responses R_u and R_d corresponding to the same user, using their selected option_id sequences rather than the full probability distributions. Let S_u = [s_1, s_2, ..., s_m] and S_d = [t_1, t_2, ..., t_m], where s_j is the selected option_id (from 1 to 5) for the j-th question (the same applies to t_j), and m is the total number of questions, with m = 18 in our setting.

Figure 3: Model querying workflow with key components. Here, u denotes a user profile, and d is a synthetic dialogue. Each line r ∈ R represents the response to a VSM question, which includes a normalized probability distribution P over the 5 option_ids.

Since the option_ids are ordinal and the questions are independent, we employ the Earth Mover's Distance (EMD) (Rubner et al., 1998) to quantify alignment between S_u and S_d. Let h_u(k) and h_d(k) denote the frequencies of value k in S_u and S_d, respectively, and let K = 5 be the total number of discrete bins. Then

    EMD(S_u, S_d) = Σ_{x=1}^{K−1} | Σ_{k=1}^{x} h_u(k)/m − Σ_{k=1}^{x} h_d(k)/m |.

We define a model's Consistency as the mean EMD across all matched user-profile-and-dialogue pairs (u, d), with a baseline computed over an equivalent number of random (u, d) pairings, i.e., randomly matching a user profile with another user's dialogue. This metric enables direct comparison of consistency when the same profile is presented in different formats.

Table 2 provides the details of the measured divergences and baseline values for all scenarios.
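The distance machinery of this section can be sketched in a few lines of pure-stdlib Python. One simplification to note: the helper below uses the mean distribution only, which the paper takes as the starting point for minimizing the total JSD to obtain the true Jensen–Shannon centroid; all example sequences are illustrative.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence, skipping zero-probability terms
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # Jensen-Shannon divergence between two distributions
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mean_distribution(dists):
    # The mean distribution, used only as the centroid's starting point;
    # the paper then minimizes the total JSD from here.
    n = len(dists)
    return [sum(d[k] for d in dists) / n for k in range(len(dists[0]))]

def emd(S_u, S_d, K=5):
    # Earth Mover's Distance between two ordinal option_id sequences
    m = len(S_u)
    h_u = [S_u.count(k) / m for k in range(1, K + 1)]
    h_d = [S_d.count(k) / m for k in range(1, K + 1)]
    total, cum = 0.0, 0.0
    for k in range(K - 1):
        cum += h_u[k] - h_d[k]   # running difference of the two CDFs
        total += abs(cum)
    return total

# Identical sequences give EMD 0; shifting every answer by one option
# gives EMD 1.
print(emd([1, 2, 3], [1, 2, 3]))
print(emd([1, 1, 1], [2, 2, 2]))
```

Because the option_ids are ordinal, the cumulative-difference form of EMD penalizes an answer moved from option 1 to option 5 more heavily than one moved from option 1 to option 2, which is exactly the property motivating its use over a simple mismatch count.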
5.2 Experiment Setting

Following the workflow outlined in Figure 3, we evaluate multiple open-source LLMs, including the Qwen2.5 family, the Llama3.1 family, DeepSeek-V3, and the reasoning model QwQ-32B (Qwen et al., 2025; Dubey et al., 2024; DeepSeek-AI, 2024), to generate R_U and R_D. Model outputs are structured in JSON format using XGrammar (Dong et al., 2024) for streamlined processing. Each model is queried once per prompt, except for the reasoning model QwQ-32B, which is queried twice: the first query is unconstrained to allow for reasoning content generation, which is then appended to the second query to elicit a structured response. Full prompt designs are listed in Appendix F.

6 Evaluation Outcomes

Figure 4: Mean probability of the "selected_option_id" in BA_user and BA_dialogue, reflecting model confidence. Most models show similar decisiveness across both scenarios.

Recall that the selected ID is the one assigned the highest probability by the model. The probability value associated with this ID reflects the model's confidence in its selection. We treat this probability as the confidence score for the selected ID and use the average of these
scores across all responses to estimate the model's overall confidence. As illustrated in Figure 4, all models—except Llama3.1-8B-Instruct, which shows slightly reduced confidence in the dialogue setting—exhibit consistently high confidence across both contexts, supporting the interpretation that their selections reflect genuine preferences and reinforcing the validity of our subsequent analyses.

6.1 Behavior Adaptation

Next, we assess behavior adaptation by varying demographic attributes. Responses in R_U and R_D are grouped by age, education, occupation, and nationality, and we compute inter-group divergences and corresponding baselines as specified in Table 2. Adaptation is quantified as the ratio of each divergence to its baseline (baseline = 1). Detailed analyses for age and education appear below; further results are available in Appendix H.

Table 2: List of quantified divergences and baselines across evaluation scenarios. For BA_user and BA_dialogue, responses are partitioned into attribute-based groups g (age, education level, occupation, and country) to assess their influence on model behavior. The Consistency scenario measures the alignment between user profiles u and their corresponding dialogue histories d. J represents the set of questions. u_i ↛ d_i means the user profile u_i and the dialogue d_i are not from the same user.

Scenario       Measured Distance                                        Baseline
BA_user        (1/|J|) Σ_{j=1}^{|J|} JSD(c_g^j ‖ c_{g'}^j), g, g' ⊆ U   (1/(|J||G|)) Σ_{j=1}^{|J|} Σ_{g=1}^{|G|} JSD(c_g^j ‖ c_U^j)
BA_dialogue    (1/|J|) Σ_{j=1}^{|J|} JSD(c_g^j ‖ c_{g'}^j), g, g' ⊆ D   (1/(|J||G|)) Σ_{j=1}^{|J|} Σ_{g=1}^{|G|} JSD(c_g^j ‖ c_D^j)
Consistency    (1/|U|) Σ_{i=1}^{|U|} EMD(S_u, S_d)                      (1/|U|) Σ_{i=1}^{|U|} 1(u_i ↛ d_i) EMD(S_u, S_d)

Table 3: Results of evaluating LLMs' capability to maintain response consistency across different context formats. Both absolute distances and relative divergence ratios over the baseline are displayed. QwQ-32B has both the lowest absolute distance and the lowest ratio of distance over baseline.

Models                  Measured Distance ↓   Baseline   Distance/Baseline ↓
Llama3.1-8B-Instruct          0.305            0.312            0.978
Llama3.1-70B-Instruct         0.214            0.225            0.951
Qwen2.5-7B-Instruct           0.276            0.276            1.000
Qwen2.5-72B-Instruct          0.176            0.190            0.926
DeepSeek-V3                   0.118            0.128            0.922
QwQ-32B                       0.112            0.125            0.896

Age: We group responses R_U and R_D into 10-year brackets (e.g., "30–40"), with "<30" and ">60" as boundary categories. Figure 5 (first row) presents results for both BA_user and BA_dialogue. In both scenarios, models exhibit a positive correlation between age disparity and behavioral divergence (e.g., maximum divergence between "<30" and ">60"). Notably, models exhibit greater consistency in the BA_dialogue scenario than in BA_user, indicating that age information conveyed through dialogue enables more stable adaptation across age groups.

Education Level: Educational attainment provides a precise grouping criterion. Figure 5 (second row) reports the divergences and baselines for both BA_user and BA_dialogue. As with age, models exhibit greater divergence in response values as educational disparity increases (e.g., highest divergence between high school and doctoral degrees), whether the attribute is provided explicitly or via dialogue. These results further demonstrate models' capacity to incorporate educational background into value and nuance adaptation.

Additional evaluations using other attribute-based groupings are presented in Appendix H. These findings show that most LLMs accurately infer sociodemographic attributes and adjust their responses accordingly, with larger attribute differences producing greater shifts in expressed values. Crucially, despite dialogue history being
less explicit than user profiles, models still adapt their behavior to align with user characteristics.

6.2 Consistency across Context Formats

We next evaluate each model's behavioral consistency across context formats (the Consistency scenario). Following Section 5.1, we compute, for each model, both the distance and its baseline. While absolute distances quantify response variability across formats, they may conflate behavioral alignment with format-induced noise (He et al., 2024). To disentangle these effects, we also report the ratio of measured distance to baseline: lower ratios signify stronger consistency in adapting to demographic cues across differing formats.

Table 3 summarizes these metrics. Smaller tested models (under 10B parameters) exhibit both higher absolute EMDs and ratios near or equal to 1, indicating that prompt-format variability outweighs demographic alignment. In contrast, larger models achieve lower ratios, reflecting greater robustness to format changes and improved consistency in behavior adaptation.
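The rightmost column of Table 3 is simply the measured distance divided by its baseline, which can be checked directly from the table's first two columns:

```python
# Reproducing the Distance/Baseline ratios reported in Table 3
results = {
    "Llama3.1-8B-Instruct":  (0.305, 0.312),
    "Llama3.1-70B-Instruct": (0.214, 0.225),
    "Qwen2.5-7B-Instruct":   (0.276, 0.276),
    "Qwen2.5-72B-Instruct":  (0.176, 0.190),
    "DeepSeek-V3":           (0.118, 0.128),
    "QwQ-32B":               (0.112, 0.125),
}
for model, (dist, base) in results.items():
    print(f"{model}: {dist / base:.3f}")
```

For instance, 0.305 / 0.312 ≈ 0.978 for Llama3.1-8B-Instruct and 0.112 / 0.125 = 0.896 for QwQ-32B, matching the reported values.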
Furthermore, we observe a positive correlation between the measured consistency and benchmark performance on language understanding and reasoning tasks (e.g., MMLU-Pro (Wang et al., 2024), BIG-Bench Hard (Suzgun et al., 2022)): models with superior cognitive capabilities tend to maintain higher consistency across formats.

Figure 5: The measurement results for BA_user and BA_dialogue. The first row compares groups by "Age," while the second row presents results for "Education Level." Most models exhibit a positive correlation between computed distances and demographic differences. (Panels: (a) BA_user grouped by "age"; (b) BA_dialogue grouped by "age"; (c) BA_user grouped by "education level"; (d) BA_dialogue grouped by "education level".)

The reasoning-augmented model QwQ-32B, specifically trained through reinforcement learning for reasoning, achieves the highest consistency, despite its relatively smaller size. We attribute this superior consistency to its enhanced reasoning capabilities.
To substantiate this, we recorded and analyzed its reasoning traces, revealing that QwQ-32B systematically revisits and integrates the provided sociodemographic attributes, whether explicitly stated or inferred from dialogue history. This iterative process enables the model to select responses more precisely aligned with the given attributes, promoting consistency across context formats. The ability to methodically revisit and incorporate attribute details likely contributes to its superior alignment performance. Reasoning process samples are provided in Appendix I.

7 Conclusion

This study presents a novel framework for evaluating how LLMs adapt their behavioral outputs when sociodemographic attributes are provided through two distinct interaction formats: (i) explicit single-turn prompts and (ii) multi-turn dialogue integration. We then systematically measure cross-format consistency to assess robust sociodemographic adaptation. Our findings
indicate that most models adjust effectively to single-format attribute changes, particularly in attributes like age and education level, with the degree of value adjustment positively correlated with the magnitude of attribute change. However, significant discrepancies arise in cross-format scenarios. Smaller models often struggle to maintain consistent alignment across formats, while larger, reasoning-augmented models demonstrate more stable performance.

8 Limitations

This study has a few limitations that require further investigation in future research.

• Prompt Design: Prompts for querying models to determine the most suitable response are designed based on our experience and experiments with sample cases, given the lack of established guidelines for optimizing prompts in value survey question answering. Future research could investigate how prompt content variations (besides the embedded information) impact models' behavior adaptation.

• Limited Scope of Value Survey: The value survey used to assess model behavior consists of a relatively small number of questions from the VSM 2013 survey, which has been criticized for its simplicity and limited scope. However, it aligns well with our generated dataset, as it focuses on values reflected in career-related questions. Future studies could enhance the evaluation by incorporating larger and more diverse question sets.

• Single Source of Dialogue: The evaluation of BA_dialogue is based on dialogues generated by GPT-4o as the QA bot. When providing the tested models with the dialogue history, GPT's specific response styles may influence the behavior of the tested models, which could be further explored in future work.

Ethical Considerations

Our study examines how LLMs adapt outputs to users' sociodemographic contexts through explicit (profile-based) and implicit (dialogue-embedded) input formats.
Misalignment or inconsistencies in model outputs can reinforce stereotypes or erode user trust in cross-cultural interactions. To enable controlled, privacy-preserving evaluations, we curate and open-source a synthetic dataset, with all user profiles derived from a publicly available synthetic dataset (see Section 4), thereby eliminating any risk of personally identifiable information (PII) exposure or consent violations. However, given the dataset's controlled topical focus, excessive reliance on these examples without careful consideration of their suitability for use cases may introduce bias and overlook the broader diversity of real-world interactions.

References

Yelaman Abdullin, Diego Molla-Aliod, Bahadorreza Ofoghi, John Yearwood, and Qingyang Li. 2024. Synthetic dialogue dataset generation using LLM agents. Preprint, arXiv:2401.17461.

Kristen M. Altenburger, Hongda Jiang, Robert E. Kraut, Yi-Chia Wang, and Jane Dwivedi-Yu. 2024. Examining the role of relationship alignment in large language models. Preprint, arXiv:2410.01708.

Arnav Arora, Lucie-Aimée Kaffee, and Isabelle Augenstein. 2023. Probing pre-trained language models for cross-cultural differences in values. Preprint, arXiv:2203.13722.

Kedi Chen, Qin Chen, Jie Zhou, He Yishen, and Liang He. 2024. DiaHalu: A dialogue-level hallucination evaluation benchmark for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9057–9079, Miami, Florida, USA. Association for Computational Linguistics.

Sumit Kumar Dam, Choong Seon Hong, Yu Qiao, and Chaoning Zhang. 2024. A complete survey on LLM-based AI chatbots. Preprint, arXiv:2406.16937.

DeepSeek-AI. 2024. DeepSeek-V3 technical report. Preprint, arXiv:2412.19437.

Yixin
Dong, Charlie F. Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. 2024. XGrammar: Flexible and efficient structured generation engine for large language models. Preprint, arXiv:2411.15100.

Abhimanyu Dubey, Abhinav Jauhri, et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Esin Durmus, Karina Nguyen, Thomas Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, Liane Lovitt, Sam McCandlish, Orowa Sikder, Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, and Deep Ganguli. 2024. Towards measuring the representation of subjective global opinions in language models. In First Conference on Language Modeling.

David Freedman, Robert Pisani, and Roger Purves. 2007. Statistics, 4th edn. (international student edition). WW Norton & Company, New York.

Helene H. Fung, Yuan Wan Ho, Rui Zhang, Xin Zhang, Kimberly A. Noels, and Kim-Pong Tam. 2016. Age differences in personal values: Universal or cultural specific? Psychol. Aging, 31(3):274–286.

Jingsheng Gao, Yixin Lian, Ziyi Zhou, Yuzhuo Fu, and Baoyuan Wang. 2023. LiveChat: A large-scale personalized dialogue dataset automatically constructed from live streaming. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15387–15405, Toronto, Canada. Association for Computational Linguistics.

Michele J. Gelfand, Jana L. Raver, et al. 2011. Differences between tight and loose cultures: a 33-nation study. Science, 332(6033):1100–1104.

Philipp Gerlach and Kimmo Eriksson. 2021. Measuring cultural dimensions: External validity and internal consistency of Hofstede's VSM 2013 scales. Front. Psychol., 12:662604.

Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, Saizhuo Wang, Kun Zhang, Yuanzhuo Wang, Wen Gao, Lionel Ni, and Jian Guo. 2025. A survey on LLM-as-a-judge. Preprint, arXiv:2411.15594.

Akash Gupta, Ivaxi Sheth, Vyas Raina, Mark Gales, and Mario Fritz. 2024. LLM task interference: An initial study on the impact of task-switch in conversational history. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14633–14652, Miami, Florida, USA. Association for Computational Linguistics.

C. Haerpfer, R. Inglehart, A. Moreno, C. Welzel, K. Kizilova, J. Diez-Medrano, M. Lagos, and P. Norris. 2020. World Values Survey: Round Seven - Country-Pooled Datafile. JD Systems Institute & WVSA Secretariat, Madrid, Spain & Vienna, Austria.

Jia He, Mukund Rungta, David Koleczek, Arshdeep Sekhon, Franklin X. Wang, and Sadid Hasan. 2024. Does prompt formatting have any impact on LLM performance? Preprint, arXiv:2411.10541.

Fred K. Hoehler. 2000. Bias and prevalence effects on kappa viewed in terms of sensitivity and specificity. Journal of Clinical Epidemiology, 53(5):499–503.

G. Hofstede and G. J. Hofstede. 2016. VSM 2013. https://geerthofstede.com/research-and-vsm/vsm-2013/. Accessed: 2024-1-11.

Pegah Jandaghi, XiangHai Sheng, Xinyi Bai, Jay Pujara, and Hakim Sidahmed. 2023. Faithful persona-based conversational dataset generation with large language models. Preprint, arXiv:2312.10007.

Julia Kharchenko, Tanya Roosta, Aman Chadha, and Chirag Shah. 2024. How well do LLMs represent values across cultures? Empirical analysis of LLM responses based on Hofstede cultural dimensions. Preprint,
arXiv:2406.14805.

Grgur Kovač, Masataka Sawayama, Rémy Portelas, Cédric Colas, Peter Ford Dominey, and Pierre-Yves Oudeyer. 2023. Large language models as superpositions of cultural perspectives. Preprint, arXiv:2307.07870.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Quinn Leng. 2023. Best practices for LLM evaluation of RAG applications: A case study on the Databricks documentation bot. https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG. Accessed: 2025-04-13.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.

Xinyue Li, Zhenpeng Chen, Jie M. Zhang, Yiling Lou, Tianlin Li, Weisong Sun, Yang Liu, and Xuanzhe Liu. 2024. Benchmarking bias in large language models during role-playing. Preprint, arXiv:2411.00585.

Ziyi Liu, Priyanka Dey, Zhenyu Zhao, Jen-tse Huang, Rahul Gupta, Yang Liu, and Jieyu Zhao. 2025. Can LLMs grasp implicit cultural values? Benchmarking LLMs' metacognitive cultural intelligence with CQ-Bench. Preprint, arXiv:2504.01127.

Vera Lomazzi and Daniel Seddig. 2020. Gender role attitudes in the International Social Survey Programme: Cross-national comparability and relationships to cultural values. Cross-Cultural Research, 54:106939712091545.

Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip Treleaven, and Miguel Rodrigues. 2024. Cultural alignment in large language models: An explanatory analysis based on Hofstede's cultural dimensions. Preprint, arXiv:2309.12342.

Jared Moore, Tanvi Deshpande, and Diyi Yang. 2024. Are large language models consistent over value-laden questions? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15185–15221, Miami, Florida, USA. Association for Computational Linguistics.

Frank Nielsen. 2020. On a generalization of the Jensen–Shannon divergence and the Jensen–Shannon centroid. Entropy, 22(2).

OpenAI, Josh Achiam, Steven Adler, et al. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, and Ji-Rong Wen. 2021. Pchatbot: A large-scale dataset for personalized chatbot. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, page 2470–2477, New York, NY, USA. Association for Computing Machinery.

Qwen, An Yang, et al. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Y. Rubner, C. Tomasi, and L. J. Guibas. 1998. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pages 59–66.

P. E. Shrout and J. L. Fleiss. 1979. Intraclass correlations: Uses in assessing rater reliability. Psychol. Bull., 86(2):420–428.

Anthony Sicilia, Jennifer C. Gates, and Malihe Alikhani. 2024. HumBEL: A human-in-the-loop approach for evaluating demographic factors of language models in human-machine conversations. Preprint, arXiv:2305.14195.

Nicholas Sukiennik, Chen Gao, Fengli Xu, and Yong Li. 2025. An evaluation of cultural value alignment in LLM. Preprint, arXiv:2504.08863.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha
Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. Preprint, arXiv:2210.09261.

Jean M. Twenge. 2017. Have smartphones destroyed a generation? The Atlantic.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. Preprint, arXiv:2406.01574.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natural language processing. Preprint, arXiv:1910.03771.

Sanae Yamashita, Koji Inoue, Ao Guo, Shota Mochizuki, Tatsuya Kawahara, and Ryuichiro Higashinaka. 2023. RealPersonaChat: A realistic persona chat corpus with interlocutors' own personalities. In Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation, pages 852–861, Hong Kong, China. Association for Computational Linguistics.

Jing Yao, Xiaoyuan Yi, and Xing Xie. 2024. CLAVE: An adaptive framework for evaluating values of LLM generated responses. Preprint, arXiv:2407.10725.

Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics.

Zhaowei Zhang, Ceyao Zhang, Nian Liu, Siyuan Qi, Ziqi Rong, Song-Chun Zhu, Shuguang Cui, and Yaodong Yang. 2024. Heterogeneous value alignment evaluation for large language models. Preprint, arXiv:2305.17147.

Wenlong Zhao, Debanjan Mondal, Niket Tandon, Danica Dillion, Kurt Gray, and Yuling Gu. 2024. WorldValuesBench: A large-scale benchmark dataset for multi-cultural value awareness of language models. Preprint, arXiv:2404.16308.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.

A Capability of LLMs for Recognizing Persona Attributes

As a preliminary stage of this study, we select Llama3.1-8B-Instruct as a representative model to verify its ability to extract user attributes from dialogue. This model is one of the smallest evaluated in our study. For evaluation, we utilize the FaithfulPersonaChat (Jandaghi et al., 2023) synthetic dataset, which comprises 5,648 unique synthetic personas and
11,001 conversations. Each conversation takes place between two users (User1 and User2), and the persona profiles of both participants are provided alongside the dialogue. Every synthetic user is assigned five persona attributes, with at least one of these attributes explicitly mentioned in their dialogues.

Given the dataset structure, we prompt the tested model with a conversation and ask it to identify one mentioned persona attribute for each user (evaluated one at a time). The model selects from a list of ten candidate attributes, which includes the user's five correct attributes and five randomly sampled attributes from other users. The selected attribute is then evaluated against two reference settings: (1) the ground truth, i.e., the user's complete set of five correct attributes, and (2) the output from gpt-4o-mini (OpenAI et al., 2024), which is prompted to extract all identifiable persona attributes from the conversation. If the selected attribute appears in either reference, the model receives a score of 1; otherwise, the score is 0. The final score is calculated by aggregating the hit rate across all 11,001 conversations for each user. We test the model under different prompting strategies, ranging from zero-shot to ten-shot settings.

The results, shown in Table 4, confirm that the model can effectively identify at least one persona attribute from dialogues. This validation supports our main research objective: to investigate whether models can adapt their responses based on the recognized attributes.

B Prompts Design for Dataset Generation Agent

User Simulator (user_simulator): We design two types of prompts—initial and subsequent—as illustrated in Figure 6 to guide the LLM in simulating the user for question generation. The initial prompt starts the conversation, while subsequent prompts incorporate chat history to maintain context.
This iterative approach enables the generation of relevant follow-up questions and ensures dialogue progression. To prevent the user_simulator from misinterpreting its role, the generated dialogue is embedded within a single message block in subsequent prompts.

Question Answering LLM (qa_llm): No additional prompt design is applied when querying the LLM in the role of a QA agent. It is treated as a standard chatbot, receiving the question generated by the user_simulator directly.

Out-of-Context Detector (ooc_detector): We specify two criteria for evaluating the generated question in the system prompt for ooc_detector: (1) whether the question accurately reflects the user profile information and (2) whether it is framed in the first person. The user profile and generated question are provided in the user prompt. This design is illustrated in Figure 7.

C Pseudocode for Dataset Generation

The complete procedure for generating the dialogue dataset is outlined in the pseudocode presented in Algorithm 1. During dialogue generation, we set the threshold dialogue_max_runs to 5.

D Prompts Design for LLM Judge

To enhance the reliability of the LLM judge when evaluating questions generated by user_simulator, we provide detailed scoring instructions in the prompt, including explicit criteria for each possible score. Additionally, we require the model to articulate its reasoning behind
each score, encouraging a more deliberate evaluation process. Prompts of all four dimensions are listed in Figures 8, 9, 10, and 11.

E Alignment between LLM Judge and Human Ratings

We measure the alignment between the scores assigned by the LLM judge and the average human ratings across 50 sampled dialogues. The results, presented in Table 5, show strong alignment for Dimensions 1, 3, and 4. The relatively lower alignment for Attribute Correctness is likely due to high overall scores and low variance in this dimension, as the correctness of attributes in the generated dialogues is largely ensured by the robust user_simulator and additional validation performed by the out-of-context detector in the dialogue generation pipeline.

Prompt Type   Models                  vs Ground Truth      vs gpt-4o-mini-2024-07-18
                                      User1    User2       User1    User2
Zero-shot     Llama3-8B-Instruct      0.826    0.864       0.732    0.784
              Llama3.1-8B-Instruct    0.821    0.864       0.731    0.796
One-shot      Llama3-8B-Instruct      0.849    0.895       0.775    0.839
              Llama3.1-8B-Instruct    0.837    0.888       0.756    0.827
Five-shot     Llama3-8B-Instruct      0.851    0.906       0.784    0.854
              Llama3.1-8B-Instruct    0.848    0.905       0.774    0.853
Ten-shot      Llama3-8B-Instruct      0.856    0.910       0.783    0.855
              Llama3.1-8B-Instruct    0.852    0.905       0.775    0.853
Random        –                       0.5      0.5         0.18     0.19

Table 4: Recognition accuracy (hit rate) of persona attributes from dialogue by LLMs. We evaluate both "Llama3.1-8B-Instruct" and "Llama3-8B-Instruct", which belong to the same Llama model family. Model scores are compared against the expected accuracy of randomly selecting one correct attribute from the list of 10 candidates.
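The hit-rate scoring described in Appendix A can be sketched as follows. This is a minimal illustration, not the released evaluation code; the attribute strings and function names are hypothetical.

```python
def score_selection(selected, ground_truth, gpt_reference):
    """Score 1 if the chosen attribute appears in either reference set, else 0."""
    return 1 if selected in ground_truth or selected in gpt_reference else 0

def hit_rate(scores):
    """Aggregate per-conversation scores into a final hit rate."""
    return sum(scores) / len(scores)

# Hypothetical attributes for one user; the real data has five per persona.
truth = {"has a dog", "lives in Texas", "likes hiking", "is a nurse", "plays guitar"}
gpt_ref = {"has a dog", "is a nurse"}  # attributes extracted by the gpt-4o-mini reference
print(score_selection("has a dog", truth, gpt_ref))  # prints 1: a hit against both references
```

The hit rate is then the mean of these per-conversation scores over all 11,001 conversations for each user.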
Algorithm 1: Generation of Dialogues
Require: seed_dataset, user_simulator, qa_llm, ooc_detector, dialogue_max_runs
 1: initialize dialogues
 2: for each user_profile in seed_dataset do
 3:   initialize conversation_history
 4:   user_question ← user_simulator(initial_prompt ∪ user_profile)
 5:   if ooc_detector(user_question, user_profile) then
 6:     continue                ▷ Skip this profile if out-of-context (OOC)
 7:   end if
 8:   conversation_history.append(user_question)
 9:   llm_output ← qa_llm(user_question)
10:   conversation_history.append(llm_output)
11:   while True do
12:     user_question, end_conversation ← user_simulator(conversation_history ∪ following_prompt ∪ user_profile)
13:     if end_conversation or len(conversation_history) ≥ dialogue_max_runs then
14:       dialogues.append(conversation_history)
15:       break                 ▷ End conversation or exceed max runs
16:     end if
17:     if ooc_detector(user_question, user_profile) then
18:       break                 ▷ Skip this profile if OOC detected
19:     end if
20:     conversation_history.append(user_question)
21:     llm_output ← qa_llm(user_question)
22:     conversation_history.append(llm_output)
23:   end while
24: end for

Dimensions              ICC(3, k)   Pearson Correlation
Attribute Coverage      0.85        0.75
Attribute Correctness   0.16        0.19
Question Diversity      0.63        0.55
Relevance               0.80        0.67

Table 5: Alignment scores, measured using ICC(3, k) and the Pearson correlation coefficient, computed between the LLM judge and human annotators across four evaluation dimensions, based on a sample of 50 dialogues.

Initial Prompt:
System: You are role-playing as a user seeking career advice from a chatbot. Always respond using the JSON format.
User: Here are the details of the user you are simulating: {Age, Education Level, Job Title, Country}
You aim to engage with a chatbot to explore career guidance in the following areas:
1. Career direction in the next 5 years
2. Career direction in the next 10 years
3. Essential skills for career growth
4. Relevant certifications to obtain
Start the conversation by asking
for short-term career suggestions, either explicitly mentioning your age and job title or subtly hinting at them. As the discussion evolves, progressively share more personal details to obtain tailored advice and deeper insights. Always respond using the following JSON format: {"proposed_question": ...}

Subsequent Prompt:
System: You are role-playing as a user seeking career advice from a chatbot. Review the conversation history between you (the user) and the chatbot, then continue the role-play. Always respond in JSON format.
User: Here are the details of the user you are simulating: {Age, Education Level, Job Title, Country}
Through the conversation, your goal is to seek career guidance in the following areas:
1. Career direction for the next 5 years
2. Career direction for the next 10 years
3. Essential skills for career growth
4. Relevant certifications to pursue
If all four areas are addressed, conclude the conversation. Otherwise, generate a follow-up question to gather more insights. Feel free to share additional details about yourself in your follow-up questions, either based on the chatbot's responses or as needed for more tailored advice. Always respond using the following JSON format: {"proposed_question": ..., "end_conversation": true/false}
User: Conversation history: User: ... Chatbot: ...

Figure 6: The prompts guiding the user_simulator are structured as follows: the green text is the system prompt defining the LLM's role; the blue text specifies the user profile, conversation objectives, termination conditions, and response format; the yellow text depicts the chat history between the simulated user and the QA LLM.

System: You are an out-of-context detector. Compare the details in a user-provided question with the user profile. Ensure the question meets the following criteria:
1. Reflects only information present in the profile.
2. Is written in the first-person perspective and seeks career suggestions.
If the question fails to meet these criteria, rewrite it accordingly. Respond in JSON format: {"has_out_of_context": true/false, "updated_question": "..."}. Leave "updated_question" empty if no discrepancies are found.
User: User profile: {Age, Education Level, Job Title, Country}
Question: As someone working in a leadership role in technology, what are some short-term career suggestions for continuing growth and advancement?

Figure 7: The prompt guiding the ooc_detector is structured as follows: the green text represents the system prompt, which defines the criteria for the LLM to check, while the blue text contains the user profile and the generated question.

System: You will be given a set of questions and a set of persona attributes. Your task is to provide a 'total rating' scoring how many attributes are mentioned in the questions. The score should be given based on the exact number of mentioned attributes. Give your answer as a float on a scale of 0 to 5; you don't need to care about the correctness of the attribute values.
Scoring details:
* no attributes mentioned --> 0
* mentioning 1 attribute --> 1
* mentioning 2 attributes --> 2
* mentioning 3 attributes --> 3
* mentioning 4 attributes --> 4
* mentioning 5 attributes --> 5
Always respond using the following JSON format: {{
"rating": int, "reason": str}}
User: User profile: {Age, Education Level, Job Title, Country}
User: Here are the questions: Question 1: ... Question 2: ... ...

Figure 8: The prompt designed for guiding the LLM judge to score the generated dialogues from the perspective of "Attribute Coverage".

System: You will be given a set of questions and a set of persona attributes. Your task is to provide a 'total rating' scoring whether attribute values are correctly mentioned in the questions. The initial score is 5; for every wrong attribute value detected in the questions, deduct one point. You don't need to deduct points for missing attributes. Give your answer as a float on a scale of 0 to 5.
Scoring details:
* all mentioned attributes have the correct values --> 5
* one attribute value is wrong --> 4
* two attribute values are wrong --> 3
* three attribute values are wrong --> 2
* four attribute values are wrong --> 1
* five attribute values are wrong --> 0
Always respond using the following JSON format: {{"rating": int, "reason": str}}
User: User profile: {Age, Education Level, Job Title, Country}
User: Here are the questions: Question 1: ... Question 2: ... ...

Figure 9: The prompt designed for guiding the LLM judge to score the generated dialogues from the perspective of "Attribute Correctness".

System: You will be given a set of questions and a set of persona attributes. Your task is to provide a 'total rating' scoring whether the questions repeat similar content; the more repetitions, the lower the score. Please give a score based on the number of unique questions. Give your answer as a float on a scale of 1 to 5.
Scoring details:
* all five questions have unique contents --> 5
* 4 unique contents and 1 repeated question --> 4
* 3 unique contents and 2 repeated questions --> 3
* only 2 unique contents among 5 questions (3 questions have repeated contents) --> 2
* all five questions have the same content --> 1
Always respond using the following JSON format: {{"rating": int, "reason": str}} Write your reason concisely.
User: User profile: {Age, Education Level, Job Title, Country}
User: Here are the questions: Question 1: ... Question 2: ... ...

Figure 10: The prompt designed for guiding the LLM judge to score the generated dialogues from the perspective of "Question Diversity".

System: You will be given a set of questions and a set of persona attributes. Your task is to provide a 'total rating' for the readability of the questions. All questions are about seeking career advice from the chatbot. Please give a score based on the readability of each question. Give your answer as a float on a scale of 1 to 5.
Scoring details:
* Start with the score as 3.
* If there are any reactions to the chatbot's response in the questions --> plus 1 to the score.
* If the language is natural and the questions are organized in a progressive manner --> plus 1 to the
score.
* If the language is not natural and the following questions keep repeating the user's self-introduction --> minus 1 from the score.
* If the language is too polite and sounds like seeking advice from seniors --> minus 1 from the score.
Always respond using the following JSON format: {{"rating": int, "reason": str}} Write your reason concisely.
User: User profile: {Age, Education Level, Job Title, Country}
User: Here are the questions: Question 1: ... Question 2: ... ...

Figure 11: The prompt designed for guiding the LLM judge to score the generated dialogues from the perspective of "Relevance".

F Prompts Design for Querying Models

To ensure prompt naturalness, we devised two scenario-specific templates that vary in system prompt framing, information presentation, and question phrasing. The BA_user template directs the model: "You are a chatbot designed to provide precise, personalized answers based on the given user profile," whereas the BA_dialogue template frames the model as the user: "Assume you are me; help me find the most suitable answer to the following question." Full examples of both templates are provided in Figure 12. As described, reasoning models are queried twice per question to ensure a structured reasoning process. All prompts are crafted to naturally guide models in understanding embedded user attributes and making selections accordingly. To maintain coherence, questions containing explicit user attributes are presented in the third-person perspective, while those based on dialogue history adopt the first-person perspective, aligning with the conversational format.

G Probability Normalization for option_ids

The probability distribution P over option_ids for each response r is derived from the model's log probabilities by applying the exponentiation function.
Given that only the top 5 log probabilities from the full vocabulary are output by the model, any option_ids not included in this set are assigned a probability of 0.0. The resulting values are then normalized by dividing each probability by the sum of all five option_ids probabilities, ensuring that the final distribution satisfies the condition ∑_{i=1}^{5} p(i) = 1.

H Extra Evaluation Outcomes for BA_user and BA_dialogue

Nationality: Unlike attributes such as "Age" and "Education Level," "Nationality" encompasses a large number of unique values. To address this, we categorize R_U and R_D based on each country's "Development Level," which we argue effectively captures cultural and value differences when incorporating country information into the model. The grouping is derived from a mapping list generated by "gpt-4o-2024-08-06". All tested models exhibit a strong awareness of country-based differences when classified into "Developed," "Developing," and "Third World" categories. Figure 13 visualizes this evaluation alongside analyses of alternative country-grouping methods.

Job Category: The original dataset contains hundreds of distinct job titles. To facilitate analysis, we apply zero-shot classification using bart-large-mnli (Lewis et al., 2019), trained on MultiNLI (Williams et al., 2018), to map each title to a predefined set of job categories. We then group responses by these categories, as illustrated in Figure 14. While this approach introduces an

System: You are a chatbot designed to provide precise and personalized answers
to questions based on the given user profile. Analyze the question carefully and tailor your response to match the user's context.
User: Here are the details of the user profile: {Age, Education Level, Job Title, Country}
User: Below is the question: {Question} {Option List}
User: Answer the question in JSON format: {"option_id": int, "reason": str}
(a) Prompt Design for BA_user

System: You are a chatbot designed to provide precise and personalized answers to questions. Analyze the question carefully and tailor your response to match the user's context.
[Generated Dialogue]
User: As someone working in a leadership role ...
Assistant: 1. **Stay Updated with Trends**: ...
......
User: Assume you are me, help me find the most suitable answer to the following question.
User: Below is the question: {Question} {Option List}
User: Answer the question in JSON format: {"option_id": int, "reason": str}
(b) Prompt Design for BA_dialogue

Panels (c) and (d) repeat the BA_user and BA_dialogue templates for reasoning models, where the assistant's free-form reasoning (<think>Model's reasoning</think>) is obtained in a first query before the structured answer is requested.

Figure 12: Prompt designs for querying models to answer value-based questions given contextual information. Reasoning models are queried twice: the first query allows free-form reasoning responses, while the second query, with the reasoning content appended, enforces a structured output format.

additional layer of abstraction (potentially reducing the sensitivity to inter-group differences compared to attributes like age or education level), a consistent pattern emerges: responses to individuals in Science and Technology roles exhibit the greatest response similarity across both BA_user and BA_dialogue scenarios for most models. Conversely, when models identify users from Business-related sectors, the human values reflected in their responses tend to diverge more significantly from those associated with other sectors.

Position Level: Following a similar approach to grouping responses by job categories, we use bart-large-mnli to map job titles to predefined position levels, organizing the results accordingly. The outcomes are presented in Figure 15. Compared to the "Job Category" grouping, the position level analysis shows a more consistent pattern across BA_user and BA_dialogue for all models. Notably, responses for
"Entry Level" and "Senior Management" users exhibit the most pronounced differences, while the distances between "Entry Level" and "C-Suite" users are smaller than expected. This is likely because the classifier tends to assign only titles like "Chief of Staff" to the "Senior Management" category, while the "C-Suite" group contains a broader range of titles, resulting in a more heterogeneous sample.

I Reasoning Samples

In our evaluation, we observe that reasoning-capable models exhibit greater consistency in their responses across different input formats. Our analysis suggests that the model reviews all retrieved demographic attributes while "reasoning", regardless of whether they are explicitly stated or implicitly embedded in dialogue history. This reflective step helps align its responses more closely with the given context. Figure 16 illustrates two examples: one where the user profile is explicitly provided, and another where it is inferred from prior dialogue.

Figure 13: The visualized measurement results for BA_user and BA_dialogue grouped by the "development level" of the user's nationality. Panels: (a) BA_user and (b) BA_dialogue, comparing Developed vs Developing, Developed vs Third_World, and Developing vs Third_World for Llama3.1-8B-Instruct, Llama3.1-70B-Instruct, DeepSeek-V3, Qwen2.5-7B-Instruct, Qwen2.5-72B-Instruct, and QwQ-32B.
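The top-5 normalization from Appendix G underlies the option distributions compared in these figures. A minimal sketch, with hypothetical option IDs and log-probability values:

```python
import math

def normalize_option_probs(option_ids, top_logprobs):
    """Turn the model's top-5 log-probabilities into a distribution over option_ids.

    Options that fall outside the returned top-5 tokens get probability 0.0;
    the remaining values are exponentiated and renormalized to sum to 1.
    """
    probs = {oid: math.exp(top_logprobs[oid]) if oid in top_logprobs else 0.0
             for oid in option_ids}
    total = sum(probs.values())
    return {oid: p / total for oid, p in probs.items()}

# Hypothetical log-probabilities: options "4" and "5" were not among the top-5 tokens.
dist = normalize_option_probs(["1", "2", "3", "4", "5"],
                              {"1": -0.2, "2": -1.6, "3": -2.3})
assert abs(sum(dist.values()) - 1.0) < 1e-9  # distribution sums to one
```

This assumes at least one option appears in the top-5 output; otherwise the sum would be zero and normalization undefined.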
Figure 14: Visualized measurement results for BA_user and BA_dialogue grouped by "job category," derived from job titles classified using the pre-trained bart-large-mnli model (Lewis et al., 2019). Panels: (a) BA_user and (b) BA_dialogue, showing pairwise distances between the Business, Creative, Education, Health, Hospitality, Science, and Tech categories for each tested model.
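The job-title grouping behind Figure 14 uses zero-shot classification with bart-large-mnli. A minimal sketch using the Hugging Face pipeline API; the category list mirrors the figure, but the helper names are illustrative:

```python
JOB_CATEGORIES = ["Business", "Creative", "Education", "Health",
                  "Hospitality", "Science", "Tech"]

def build_classifier():
    """Load the pre-trained NLI model (downloads the weights on first use)."""
    from transformers import pipeline
    return pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def top_category(job_title, classifier):
    """Return the highest-scoring candidate label for a job title."""
    result = classifier(job_title, candidate_labels=JOB_CATEGORIES)
    return result["labels"][0]  # the pipeline returns labels sorted by score
```

For example, `top_category("Chief Technology Officer", build_classifier())` would be expected to map leadership technology titles into the "Tech" group, which is then used to aggregate responses.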
Figure 15: Visualized measurement results for BA_user and BA_dialogue grouped by "position level," derived from job titles classified using the pre-trained bart-large-mnli model (Lewis et al., 2019). Panels: (a) BA_user and (b) BA_dialogue, showing pairwise distances between the C-Suite, Entry_Level, Management, Professional, and Senior_Management levels for each tested model.

[First reasoning sample]
<think>Okay, so I need to figure out the best answer for this question about how important personal or home life time is for an ideal job, assuming I'm the CTO from Micronesia with a high school education. Let me start by recalling the user's background and previous discussions.

The user is a CTO, which is a leadership role in tech. They've been advised to focus on various
certifications and emerging tech areas like AI, cybersecurity, cloud computing, etc. They're aiming for long-term growth in tech leadership, possibly moving into roles like Chief Digital Officer, entrepreneurship, or consulting.

The question is about prioritizing personal/home life time in an ideal job. The options range from 1 (utmost) to 5 (no importance). Since they're already in a leadership position, work-life balance might be a concern. Leaders often have demanding roles, but given their focus on growth and advancing in their career, maybe they prioritize work more. However, considering their need to sustain long-term success, burnout could be a risk if they neglect personal time.

Looking back at the previous advice, there was an emphasis on self-reflection and continuous learning. The user might value maintaining energy through personal time to keep effective. Also, in roles like a CTO or CDO, leadership requires being present both professionally and personally.

The options: 1 is utmost, which might be too extreme for someone in a high-responsibility role. 2 (very important) seems balanced. 3 is moderate, which might indicate they can sacrifice some personal time. Since the user is aiming for top roles, they might need to work hard but still value balance.

Alternatively, maybe the answer should align with the need to maintain health and avoid burnout. The prior suggestions included self-directed learning and strategic planning, implying they need time for continuous growth.

Therefore, the best answer is probably 2: very important. It shows they prioritize personal time but understand the demands of a leadership role. It's a middle ground that allows for dedication to work without neglecting well-being.
</think>

[Second reasoning sample]
<think>Okay, so I need to figure out how important having sufficient time for personal or home life would be for this user when considering an ideal job, other than their current one.
Let me start by looking at their profile.

The user is a 33-year-old male from Micronesia. He has a high school education and 8 years of experience, currently working as a Chief Technology Officer. Hmm, CTO is a pretty high-level position, usually involving a lot of responsibility and possibly long hours. But the question is about an ideal job, not his current one. So maybe he's looking for something different?

First, considering his job title as CTO, even though it's his current role, it gives insight into his work style and possibly his current work-life balance. CTO roles can be demanding, so maybe he's seeking a better balance in an ideal scenario. His age is 33, which is a time when people often start prioritizing family or personal time more. Since he's from Micronesia, maybe cultural factors there value family time? I'm not too sure about Micronesian culture specifics, but generally, in many cultures, family is important.

Education level is high school, but he's a CTO with 8 years of experience. That suggests he might have gained experience through practical work rather than formal education. Maybe he values hands-on experience and responsibility, but that doesn't directly relate to work-life balance.

The question is about the importance of personal/home time in an ideal job. Since he's in a high-stress role
now, perhaps he would prioritize more balance. The options are from 1 (utmost) to 5 (no importance).

People in leadership roles often have longer hours, so an ideal job might mean reducing that. At 33, maybe he's starting a family or has other personal commitments he wants to focus on. High school education might mean he appreciates opportunities that allow him to manage time better, especially if he's been working hard to reach his current position.

I think the most likely answers are either 1 or 2. Maybe 1 if he's really stressed now, but perhaps 2 as a safer bet. Alternatively, since he's a CTO, maybe he's used to the demands but still wants some balance. Let's see... The options are "of utmost importance" (1) versus "very important" (2).

Considering that he's already in a high position, maybe he values work-life balance but understands the demands. However, since it's an ideal job, he might prioritize it highly. I think the answer is either 1 or 2. Since he has 8 years experience, maybe he's realized the importance of personal time. Let me go with 1, but maybe the user might think 2. Hmm. Alternatively, maybe moderate? No, that seems less likely for someone in a high-stress job.

Wait, the options are 1 to 5. Let me think again. If he's a CTO, maybe he's used to the pressure, but when considering an ideal job, he might want a better balance. So "of utmost importance" (1) would be the top priority. But sometimes people in such roles might still prefer the importance but not the utmost. Maybe "very important" (2).

Alternatively, maybe the user's location in Micronesia affects this. Maybe in Micronesia, family is central, so he would choose 1. Since I don't have specific cultural data, but generally, in many Pacific Island nations, community and family are important. So perhaps he would prioritize it highly.

I think the best answer here is 1. But I'm not 100% sure. Alternatively, maybe 2. Let me check again.
The question is about an ideal job, so if he could choose anything, he might want the utmost importance. So I'll go with 1.
</think>

Figure 16: Samples of the model's reasoning given the same context information in different formats. Left: with user profile; right: with dialogue history.
arXiv:2505.21363v1 [cs.LG] 27 May 2025

Subgroups Matter for Robust Bias Mitigation

Anissa Alloula (1), Charles Jones (2), Ben Glocker (2), Bartłomiej W. Papież (1)

Abstract

Despite the constant development of new bias mitigation methods for machine learning, no method consistently succeeds, and a fundamental question remains unanswered: when and why do bias mitigation techniques fail? In this paper, we hypothesise that a key factor may be the often-overlooked but crucial step shared by many bias mitigation methods: the definition of subgroups. To investigate this, we conduct a comprehensive evaluation of state-of-the-art bias mitigation methods across multiple vision and language classification tasks, systematically varying subgroup definitions, including coarse, fine-grained, intersectional, and noisy subgroups. Our results reveal that subgroup choice significantly impacts performance, with certain groupings paradoxically leading to worse outcomes than no mitigation at all. Our findings suggest that observing a disparity between a set of subgroups is not a sufficient reason to use those subgroups for mitigation. Through theoretical analysis, we explain these phenomena and uncover a counter-intuitive insight that, in some cases, improving fairness with respect to a particular set of subgroups is best achieved by using a different set of subgroups for mitigation. Our work highlights the importance of careful subgroup definition in bias mitigation and suggests it as an alternative lever for improving the robustness and fairness of machine learning models.

1. Introduction

A significant barrier to the wider deployment of machine learning (ML) models is their tendency to fail when tested on distributions that differ from their training data. One particularly concerning manifestation of this issue is performance degradation for population subgroups, often caused

(1) University of Oxford, UK. (2) Imperial College London, UK.
Correspondence to: Anissa Alloula <anissa.alloula@dtc.ox.ac.uk>. Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

by bias in training data such as spurious correlations (SC), under-representation of certain subgroups, or shifts in the presentation of the target Y (Jones et al., 2024). Bias mitigation methods aim to address these issues by training more robust models which are less susceptible to these biases, thereby improving generalisation. These methods generally adapt model training to improve the performance of some disadvantaged subgroups within the training data.

Despite the number of bias mitigation methods which have been proposed, benchmarks are increasingly reporting that the performance of these methods is inconsistent when tested in new settings, and that they often fail to surpass the empirical risk minimisation (ERM) baseline (Zong et al., 2023; Zietlow et al., 2022; Chen et al., 2023; Shrestha et al., 2022; Alloula et al., 2024). Some efforts have been made to begin to elucidate the conditions under which certain bias mitigation methods might be valid, such as the work of Jones et al. (2025) in fair representation learning and Schrouff et al. (2024) in data balancing. However, the choice of an appropriate mitigation method is only one aspect of the problem. To successfully mitigate bias, we must also select which subgroups we wish to apply the methods on, a question which very little
work has explicitly addressed.

Indeed, most bias mitigation methods rely on some form of grouping to first identify disadvantaged subgroups within the training data and then to implement group-based strategies aimed at improving generalisation or fairness. This can be as simple as observing a disparity in model performance between men and women, and trying to fix this by rebalancing the training data such that both subgroups are more uniformly distributed (Weng et al., 2023), or noticing that a model performs poorly on data coming from a specific type of scanner, and applying a robust learning strategy such as adversarial training to prevent learning of scanner-specific but task-irrelevant information (Ganin & Lempitsky, 2015).

To date, the literature has predominantly focused on simple and coarse subgroups, for example blonde hair and non-blonde hair in the CelebA dataset (Liu et al., 2015b) or waterbirds and landbirds in the Waterbirds dataset (Sagawa et al., 2020). In medical applications, the subgroups are often "white" or "non-white", "men" or "women", or some coarse binning of age (Ricci Lara et al., 2022). Subgroup choice is often motivated by two factors: a) practical constraints, for instance only having annotations for common attributes, and b) ethical or societal goals to achieve fairness with respect to specific subgroups. However, these subgroups may poorly capture the underlying cause of model underperformance, thus obscuring critical information for bias mitigation methods.

In this work, we aim to better understand whether we can optimise this crucial step of subgroup definition in the same way that new bias mitigation methods are optimised. We investigate the role of subgroup definition on the performance of these methods, and whether poor subgroup definition might explain why these methods often fail.
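The simplest group-based strategy mentioned above, rebalancing so that subgroups contribute more uniformly, can be sketched with inverse-frequency sample weights. This is a generic illustration, not one of the specific mitigation methods benchmarked in this paper:

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Per-sample weights so each subgroup contributes equally in expectation.

    A sample in group g gets weight N / (K * n_g), where N is the dataset size,
    K the number of groups, and n_g the group's size; minority groups are upweighted.
    """
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]

# Hypothetical imbalance: 6 samples labelled "men", 2 labelled "women".
weights = inverse_frequency_weights(["men"] * 6 + ["women"] * 2)
# Each group now carries the same total weight (4.0 each).
```

Crucially, the resulting weights, and hence the training distribution the model sees, depend entirely on which subgroup labels are supplied, which is exactly the design choice this paper interrogates.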
We construct a setting of bias inspired by a real-world chest radiograph example (Olesen et al., 2024), where there is a spurious correlation during training which is absent during testing, and which is present in different proportions across subgroups, resulting in disparities in model performance. We consider realistic ways in which subgroups might be generated based on relevant attributes, and explore how they impact the performance of four state-of-the-art bias mitigation methods in four semi-synthetic vision and language datasets. We identify key patterns in the performance of these methods across groupings. Certain groupings lead to a large improvement over the ERM baseline, while others substantially lower performance relative to the baseline. We propose that the effectiveness of a given subgrouping strategy is linked to its ability to recover the unbiased test distribution. We summarise the key contributions of this work as follows:

• We show that the groupings used for bias mitigation strongly affect how well each method works, and provide insights on optimal grouping strategies.

• We argue that observing a disparity in model performance across a set of subgroups does not justify using those subgroups for mitigation, and may in fact make matters worse.

• We provide a possible explanation for the differences in subgroup effectiveness based on the minimum
KL divergence between the subgroup-weighted biased distribution and the unbiased test distribution.

• We challenge the conventional assumption that the best way to obtain "fairness" with respect to a specific set of subgroups is always achieved by using those same subgroups for bias mitigation.

2. Related work

2.1. Bias identification

Research on bias detection has increasingly focused on refining subgroup definitions to capture complex patterns of unfairness. While individual fairness, as introduced by Dwork et al. (2012), offers a theoretically elegant approach by evaluating fairness at the individual level, its practical challenges have limited its adoption. As a result, group-based analyses remain the dominant paradigm. More recently, efforts have been made to move beyond traditional binary categories (e.g., men/women or white/non-white) to identify disparities that such coarse classifications may obscure. For instance, Kearns et al. (2018) and Buolamwini & Gebru (2018) illustrate how failing to account for intersections of attributes can entirely hide performance disparities. Xu et al. (2024) discuss the difficulty of identifying intersectional bias when there are a number of attributes at play, and propose a generative approach to discover high-bias intersections amongst many possibilities. Similarly, Movva et al. (2023) demonstrate that in clinical risk prediction models, variation in performance within four commonly used ethnic groups often exceeds the variation between these coarse groups, advocating for the use of much more precise categories to describe ethnicity. These works highlight how heterogeneous the subgroups on which unfairness is observed can be, yet most of them do not consider how this translates to conducting bias mitigation.

2.2. Subgroup definition for bias mitigation

A smaller body of work has considered how this problem of subgroup definition extends to bias mitigation. For example, Awasthi et al. (2020), Wang et al.
(2020a), and Stromberg et al. (2024) explore how noise in subgroup annotations may impact post-processing, distributionally robust optimisation (DRO), and last-layer retraining respectively. They find that fairness is not guaranteed under some perturbations of the true attributes. Ghosal & Li (2023) also extend group DRO (gDRO) with probabilistic subgroup labels for cases where there is uncertainty in the subgroup annotations. Li et al. (2023) and Kim et al. (2024) show the difficulty of mitigating bias when there are multiple spurious correlations and annotated subgroups. Perhaps the work most closely related to ours is Zhou et al. (2021), which considers a setting where a spurious correlation is causing bias, and shows that gDRO fails in an example where the subgroups do not directly account for the spurious correlation. They attribute this failure to the inability to upweight bias-conflicting samples effectively, as their constructed "imperfect" groups also include spuriously correlated samples, though their subgroup construction appears somewhat unrealistic. All of these works point to a sensitivity of bias mitigation methods to the subgroups defined; however, they each explore only one possible flaw in subgroup annotations and restrict their scope to a single bias mitigation method.

2.3. Bias mitigation without subgroups

While the above works explore some possible failures in subgroup definition for common bias mitigation methods,
others have developed new methods altogether which do not require subgroups to be defined in the traditional way. For instance, Kearns et al. (2018) and Hebert-Johnson et al. (2018) propose algorithms which aim to achieve fairness across all identifiable or richly structured subgroup classes. Bias discovery methods bypass the question of pre-defining subgroups altogether, and are useful in settings where subgroup annotations are missing. Some methods decouple bias discovery from bias mitigation by first "discovering" biases through inferring subgroup annotations with an external model (Han & Zou, 2024; Marani et al., 2024) or by clustering points based on their feature representations (Krishnakumar et al., 2021), and subsequently performing bias mitigation. Other methods forgo explicit subgroup identification and instead rely on some form of regularisation or upweighting of misclassified samples (Ahn et al., 2022; Park et al., 2024; Liu et al., 2021).

We restrict our main analysis to established mitigation methods which use subgroup annotations because (a) they often serve as the upper bound for mitigation methods (Zhou et al., 2021; Pezeshki et al., 2021; Bayasi et al., 2024), (b) many nominally label-free methods still require subgroup annotations for validation or hyperparameter tuning (Pezeshki et al., 2024), and (c) label-free methods frequently infer subgroups, making it still essential to understand their role.

3. Background

3.1. Overview of bias mitigation methods

3.1.1. Empirical risk minimisation

Traditional deep learning models conduct empirical risk minimisation (ERM), where, for given inputs x ∈ X and labels y ∈ Y from a distribution P, the objective is to find a model θ ∈ Θ that minimises the expected loss $\mathbb{E}_{P}[\ell(\theta; (x, y))]$.
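To make this objective concrete, the sketch below trains a logistic model by gradient descent on the average (empirical) loss over a synthetic dataset. It is a hypothetical, self-contained illustration of ERM, not the paper's actual training setup; the data, model, and hyperparameters are our own choices.

```python
import numpy as np

# Hypothetical ERM example: logistic regression trained by gradient descent
# on the average loss over a small synthetic training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([1.5, -2.0])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

def empirical_risk(w, X, y):
    """Average logistic loss over the training samples."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y) / len(y))  # gradient step on the empirical risk
```

The bias mitigation methods discussed next replace the uniform average in `empirical_risk` with a group-dependent weighting of the per-sample losses.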
However, in practice, only a subset of P, denoted P_train, is available, so the loss over all samples in P_train is minimised:

$\hat{\theta}_{ERM} := \arg\min_{\theta \in \Theta} \mathbb{E}_{(x,y) \sim P_{train}}[\ell(\theta; (x, y))]. \quad (1)$

3.1.2. Common bias mitigation methods

Minimising the average loss over P_train often results in poor generalisation and poor performance on minority subgroups in the data. To address these shortcomings, bias mitigation methods have been proposed as alternatives to ERM. In this work, we focus on four commonly used bias mitigation methods which have demonstrated state-of-the-art results in certain tasks and represent the broad spectrum of existing methods. We define k disjoint subgroups P_g indexed by G = {1, ..., k}, which partition P_train. We assume each training sample is annotated with its subgroup label g, thus giving the tuples (x, y, g); however, subgroup information may be unavailable at inference time.

Group distributionally robust optimisation (gDRO) (Sagawa* et al., 2020) reweights samples during loss calculation. The objective is to minimise the worst-case expected loss across the subgroups P_g (Equation (2)). In practice, an online optimisation algorithm is used to assign higher weight to high-loss subgroups in each batch.

$\hat{\theta}_{gDRO} := \arg\min_{\theta \in \Theta} \left\{ \max_{g \in G} \mathbb{E}_{(x,y) \sim P^{train}_{g}}[\ell(\theta; (x, y))] \right\}. \quad (2)$

Resampling (Idrissi et al., 2022) also relies on reweighting, but achieves it by adjusting the sampling probability of each subgroup P_g at the batch level so that each batch is balanced across subgroups. The
loss for resampling is equivalent to:

$\hat{\theta}_{resampling} := \arg\min_{\theta \in \Theta} \sum_{g=1}^{k} \frac{1}{k} \, \mathbb{E}_{(x,y) \sim P^{train}_{g}}[\ell(\theta; (x, y))]. \quad (3)$

Domain Independent (DomainInd) learning adjusts the model architecture by replacing the single classifier head with k separate classifier heads, each corresponding to a subgroup, such that although each sample is passed through the same encoder, the decoder can be fine-tuned to each subgroup (Wang et al., 2020b). At inference, the classifier head with the largest activation makes the final prediction.

Conditional learning of fair representations (CFair) aims to learn fair and robust representations of the target label independent of any subgroup information (Zhao et al., 2020). This is achieved by aligning conditional representations of samples from different subgroups.

3.1.3. Subgroup definition across these methods

In this work, we distinguish between two categories of bias mitigation methods: reweighting (gDRO and resampling) and model-based methods (DomainInd and CFair). This is because of differences in how subgroups are generally defined. While in model-based methods subgroups are defined solely based on attribute(s) A, reweighting methods can additionally define subgroups over A × Y. This is because the latter methods perform best if each subgroup contains both positive and negative samples. We expand on this distinction and its implications in Appendix A.2.

3.2. Problem setting

We frame the task according to the fairness paradigm described in Jones et al. (2025), whereby the objective is to generalise from a biased training distribution to an unbiased

Figure 1. Causal graphs representing interactions between the Y, A, and S variables in the training data and in the unbiased test data.
Conditioning on selection in the training data results in spurious correlations between Y, A, and S. Coloured bars also illustrate the proportions of Y, A, and S combinations in both settings.

testing distribution. Dataset bias can take many forms (as discussed in Jones et al. (2024)), but here we focus on bias arising from spurious correlations between certain attributes of the data and the class label Y which disappear in the unbiased deployment setting. We define two binary attributes A and S whose information is encoded in X in the form of latent features X_A and X_S respectively. Features relating to the true class Y are represented as X_Y. In the unbiased test setting, attribute-related information is independent of Y, such that P(Y) = P(Y|A) = P(Y|S). However, in P_train, mechanisms like data selection may lead to the violation of this independence. Causal graphs illustrating this scenario are shown in Figure 1.

In particular, in this study, we consider an example of a spurious correlation between A and Y. Furthermore, we consider that the spuriously correlated samples are unevenly distributed across the second attribute S. This is inspired by a real-world medical imaging example from Olesen et al. (2024). Specifically, they describe a chest X-ray diagnosis model which shows better performance in one sex subgroup (denoted here by S). Previous attempts to reduce this disparity, such as resampling of the data across S and manipulation of sex-specific regions of
the X-rays, were ineffective, leading to the hypothesis that sex-specific differences were not causing the disparity (Weng et al., 2023). Subsequent work revealed that men and women presented different proportions of chest drains and ECG wires, which the model used as spurious correlations to predict disease, and that balancing the test data with respect to these artefacts (corresponding to A in this work) resulted in equal performance across sexes. To mimic this example, we use a semi-synthetic setting where the subgroup S = 0 contains 95% spuriously correlated samples, while the subgroup S = 1 contains a smaller, though still substantial, proportion of spuriously correlated samples (80%). With this setup, there would be a substantial difference in a model's performance across subgroups if it only correctly classified samples satisfying this correlation.

We represent each distribution as a probability vector in R^8, with each element corresponding to the probability of sampling one (Y, S, A) subgroup. The unbiased distribution is defined to be uniform, i.e., P_unbiased = [1/8, 1/8, ..., 1/8], while P_train = [0.95/4, 0.05/4, 0.8/4, 0.2/4, 0.05/4, 0.95/4, 0.2/4, 0.8/4]. Further details are presented in Table 1 and Figure 1.

Table 1. Probability distributions across Y, A, and S in the biased train and validation dataset and the unbiased test set.

  Probability distribution                               P_train   P_unbiased
  P(Y = 1)                                               0.5       0.5
  P(S = 1)                                               0.5       0.5
  P(A = 1)                                               0.5       0.5
  P(Y = 0 | S = 0) = P(Y = 0 | S = 1)                    0.5       0.5
  P(Y = 0 | A = 0) = P(Y = 1 | A = 1)                    0.875     0.5
  P(A = 0 | Y = 0, S = 0) = P(A = 1 | Y = 1, S = 0)      0.95      0.5
  P(A = 0 | Y = 0, S = 1) = P(A = 1 | Y = 1, S = 1)      0.8       0.5

4. Experiments

4.1. Subgroup generation

To understand how different subgroup definitions affect the generalisation performance of bias mitigation methods, we construct multiple sets of subgroups based on A, S, and Y and train each model with them.
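The two distributions above can be encoded directly as vectors over the eight (Y, S, A) cells and sanity-checked against the marginals in Table 1. The sketch below assumes a cell ordering of (Y, S, A) with A varying fastest; that ordering is our assumption, not stated in the text.

```python
import numpy as np

# Cells ordered as (Y, S, A) with A varying fastest:
# (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1)
p_train = np.array([0.95, 0.05, 0.8, 0.2, 0.05, 0.95, 0.2, 0.8]) / 4
p_unbiased = np.full(8, 1 / 8)

cells = np.arange(8)
y, s, a = cells >> 2, (cells >> 1) & 1, cells & 1

assert np.isclose(p_train.sum(), 1.0)
assert np.isclose(p_train[y == 1].sum(), 0.5)  # P(Y = 1) = 0.5
assert np.isclose(p_train[s == 1].sum(), 0.5)  # P(S = 1) = 0.5
# Spurious correlation: P(Y = 0 | A = 0) = 0.875 in P_train, 0.5 when unbiased
p_y0_given_a0 = p_train[(y == 0) & (a == 0)].sum() / p_train[a == 0].sum()
assert np.isclose(p_y0_given_a0, 0.875)
```

Under this encoding, the marginals of Y, S, and A are all balanced, and the bias lives entirely in the joint cell probabilities.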
Our goal is to simulate realistic scenarios, for instance where one may only have access to certain variables or to noisy subgroup annotations, or when deciding between the use of coarse or fine-grained subgroups (e.g. a level of ethnicity categorisation, or discretising a continuous variable like age). We denote subgroups constructed as the intersection of multiple variables a × b as (a, b). For data-based methods, these include:

1) subgroups based on a single variable: A, Y, and S;
2) subgroups based on the intersection of two or three variables: (A, Y), (S, Y), and (Y, S, A);
3) SC/no-SC subgroups, where we group the two bias-aligned (A, Y) subgroups together and the two bias-conflicting (A, Y) subgroups together;
4) subgroups based on the random splitting of existing subgroups: (A, Y)8 and (S, Y)8, which split each (A, Y) and (S, Y) subgroup in two such that there are 8 subgroups in total;
5) 4 completely random subgroups.

For model-based methods, we do not include subgroupings based on Y, and also add A4 and S4 (the A and S subgroups randomly split in two, respectively) and (A, S) subgroups. Finally, we explore 6) the impact of noise in subgroup annotations by injecting noise (in the form of mislabelling) into 1–50% of the
(A, Y) and A subgroup annotations for data- and model-based methods respectively. This noise does not affect the class labels Y. For Civil comments, in addition to the synthetic granular subgroups, we directly explore the impact of granularity on real subgroups, as the dataset contains subgroup information at multiple hierarchy levels.

In total, we consider 15 subgroup combinations for reweighting-based methods and 12 for model-based methods. We illustrate these subgroups in Figure A6. This comprehensive set of subgroup definitions allows us to systematically investigate the impact of subgroup choices on the effectiveness of bias mitigation methods.

4.2. Datasets and tasks

We evaluate performance on classification tasks in four datasets which we construct to satisfy the distributions specified in Figure 1. We summarise all details in Table A5.

We adapt the MNIST dataset (Lecun et al., 1998) by binarising the classification task into predicting whether a digit is even or odd (Y). We add additional attributes by modifying the image background colour (A) to black or white, and colouring the foreground (S) red or green. This controlled setting allows for clear evaluation of subgroup influences in a simple task.

To explore a more challenging and realistic setting, we repeat the experiments with chest X-ray images from the CheXpert dataset (CXP) (Irvin et al., 2019). The task is classification of the presence of pleural effusion (Y). S is the sex of the patient and A the presence of a pacemaker, using annotations provided in (Anthony & Kamnitsas, 2023).

We explore another real vision dataset commonly used in fair ML research, CelebA (Liu et al., 2015a). The task is binary classification of whether the individual has blonde hair (Y), with additional attributes A, perceived gender, and S, whether the individual is smiling.
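A minimal sketch of how attribute assignments for such a semi-synthetic variant (e.g. the coloured MNIST above) might be generated is shown below. The function name and the cell ordering ((Y, S, A) with A varying fastest) are our own assumptions, and the actual image colouring is omitted.

```python
import numpy as np

# Biased training distribution over the eight (Y, S, A) cells, A fastest.
P_TRAIN = np.array([0.95, 0.05, 0.8, 0.2, 0.05, 0.95, 0.2, 0.8]) / 4

def sample_attributes(n, rng):
    """Draw n (y, s, a) triplets from the biased training distribution."""
    cells = rng.choice(8, size=n, p=P_TRAIN)
    y = cells >> 2        # e.g. even/odd digit label
    s = (cells >> 1) & 1  # e.g. foreground colour: red or green
    a = cells & 1         # e.g. background colour: black or white
    return y, s, a

rng = np.random.default_rng(0)
y, s, a = sample_attributes(200_000, rng)
frac_spurious = np.mean(a == y)  # fraction of bias-aligned samples, ~0.875
```

Each image would then be rendered with the background colour given by `a` and the foreground colour given by `s`, so that A is spuriously predictive of Y at training time but not at test time.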
Finally, we explore whether our findings extend to the text modality through the use of the Civil comments dataset, also commonly used in fair ML (Borkan et al., 2019). The target Y is toxicity prediction, with A being any mention of gender and S any mention of religion. Additionally, as this dataset contains multiple levels of subgroup annotations, we run a real experiment on the impact of subgroup granularity (instead of randomly splitting subgroups in two). We compare mitigation on the A groups to mitigation on granular A groups (e.g. any mention of males, any mention of another gender, and no mention of any gender), and likewise for S (e.g. any mention of the Christian religion specifically).

4.3. Implementation details

We implement and train models with each of the gDRO, resampling, DomainInd, and CFair bias mitigation methods. We apply each method to each of our generated subgroups and average the results over three random seeds. We repeat this process for the four datasets, comparing the performance of the bias mitigation methods with the baseline ERM method. In total, we train 306 models, with ∼40 NVIDIA A100 hours of compute. The training strategy, hyperparameters, architectures, etc. are the same across all models, as detailed in Table A7, except for necessary adjustments to apply each bias mitigation method. The code is available
here. We report the mean and standard deviation of the aggregate area under the receiver operating characteristic curve (AUC) on the unbiased test set, alongside worst-group accuracy and the accuracy gap across subgroups. We select these measures for their directness and simplicity compared to other fairness criteria. We do not vary subgroup definition for evaluation and only evaluate accuracy with respect to the S and A subgroups, as we are simply interested in how subgroup definition affects the mitigation process.

5. Results and discussion

5.1. ERM performance drops on the unbiased test set

The baseline model, trained with no bias mitigation, shows a sharp drop in performance when tested on the unbiased test set, with a decrease of 0.10 to 0.25 in AUC for all datasets (Table 2). This drop occurs because the model has learned to rely on the spurious attribute A as a proxy for Y, and this correlation is absent in the unbiased test set. Moreover, the ERM model exhibits disparities in performance. This is particularly pronounced for the S subgroups, both on the biased validation set and on the unbiased test set. A standard approach would therefore have been to apply mitigation to the S or (S, Y) subgroups. In the following sections, we explore whether various bias mitigation methods, used with different groupings, can improve test set performance and reduce these disparities.

Table 2. The baseline model performance shows a sharp decrease on the unbiased test set for all datasets. Subgroup-wise accuracy also reveals large disparities with respect to S. Mean and standard deviation across three random seeds are shown.
              MNIST                       CXP                         Civil comments              CelebA
  Accuracy    Val          Test           Val          Test           Val          Test           Val          Test
  Overall     0.943±0.012  0.698±0.054    0.898±0.007  0.659±0.007    0.886±0.003  0.726±0.024    0.954±0.003  0.865±0.007
  Min (A)     0.936±0.025  0.694±0.003    0.869±0.013  0.554±0.020    0.878±0.006  0.724±0.037    0.952±0.002  0.838±0.019
  Gap (A)     0.014±0.026  0.008±0.112    0.058±0.014  0.220±0.028    0.016±0.013  0.004±0.047    0.005±0.007  0.055±0.025
  Min (S)     0.917±0.024  0.599±0.061    0.841±0.010  0.633±0.020    0.837±0.002  0.726±0.029    0.940±0.004  0.861±0.002
  Gap (S)     0.052±0.025  0.194±0.080    0.116±0.015  0.052±0.026    0.095±0.009  0.006±0.047    0.029±0.008  0.009±0.014

Figure 2. Performance on P_unbiased in gDRO and resampling is highly dependent on the subgroups used across all four datasets. Bars represent the overall change in AUC relative to the ERM baseline, with error bars indicating the standard deviation across 3 random seeds. (Legend: subgroups which improve upon ERM vs. subgroups which yield worse results than ERM.)

5.2. Groupings used for mitigation strongly impact bias mitigation performance

We find that test set performance is highly dependent on the subgroups used for mitigation, with some subgroups boosting overall performance by more than 0.10, while others reduce it by up to 0.04 (Figures 2 and A7). We first note that the four datasets reveal similar patterns across groupings for reweighting-based bias mitigation methods and model-based methods, suggesting that there are universal trends relating to the bias setting which generalise outside of the specific dataset and task. For reweighting-based methods, subgroups based on (A, Y), i.e., (A, Y), (A, Y)8, (Y, S, A), (A, Y) with small levels of noise, and
SC/no-SC subgroups, enable the model to focus on the (A, Y) pairs without the spurious correlation during training. Therefore, the model learns to predict Y independently of A, leading to better generalisation performance. Conversely, subgroups which do not take (A, Y) information into account tend to result in worse performance than the baseline model (for instance S, (S, Y), (S, Y)8, Y, A, and Random subgroups), as they fail to guide the model away from relying on the spurious attribute A. For model-based methods, a similar pattern is evident (Figure B7); A subgroups generally present the best performance (analogous to (A, Y) in data-based methods). For instance, with A subgroups in DomainInd, performance is increased by 0.07 in the CXP dataset. On the other hand, S subgroups consistently harm performance, decreasing it by up to 0.14 for CFair in MNIST.

Applying bias mitigation with certain groupings can lead to worse outcomes than ERM. For instance, S subgroups are detrimental to generalisation performance in 10 out of 16 experiments, and have no effect in the remaining 4 (Figures 2, A7). They are detrimental despite the fact that there is a substantial disparity in performance between them (Table 2). This observation suggests that, in the absence of information about the underlying mechanism causing bias, it may be better to refrain from using mitigation methods altogether, even for simple approaches like data balancing. We believe that this finding may partly explain the failures of bias mitigation methods reported in recent studies (Zong et al., 2023; Zietlow et al., 2022; Chen et al., 2023; Shrestha et al., 2022), where inadequate subgroup definitions may have caused counter-productive bias mitigation.

Increasing subgroup granularity has no significant effect on performance. For example, there is little difference between results for (A, Y) vs (A, Y)8 and (S, Y) vs (S, Y)8 (Figure 2).
This is most likely because the bias mitigation methods tested here are designed to remain stable when there are many subgroups. In practice, this suggests that dividing subgroups into finer subgroups is unlikely to harm performance, and may even be beneficial if one is unsure which specific attributes may be responsible for underperformance. This insight could be relevant in settings where there are multiple levels of granularity to choose from (such as continuous data, e.g. age, or multi-level categories, e.g. ethnicity). We validate this with the real coarse and granular subgroups in Civil comments, and indeed find no significant difference in unbiased generalisation for resampling and gDRO, as shown in Figure 3.

Figure 3. Performance on P_unbiased is similar for the coarse and granular subgroups in Civil comments (mention of gender vs. a specific male/female mention (A), and mention of any religion vs. a specific mention of Christianity (S)). Error bars show the standard deviation across 3 random seeds.

Mitigation methods are relatively robust to noise. As shown in Figure 4, the performance of gDRO and resampling degrades when noise in the (A, Y) annotations exceeds 10%. However, even under these conditions, these methods still outperform
the baseline model, indicating that they are relatively robust to annotation noise affecting a minority of subgroup annotations. This aligns with findings from Awasthi et al. (2020) and Stromberg et al. (2024), who explore the impact of noise in post-processing and last-layer retraining respectively. A similar trend is observed for DomainInd in the CXP dataset, although the effect is less clear for other model-based experiments, where performance frequently falls below the baseline across all noise levels (Figure A8).

Random subgroups are generally detrimental. Across the four datasets and mitigation methods, we also find that using completely random subgroups generally worsens performance relative to ERM. This is most likely because the methods themselves are sub-optimal relative to ERM on the same distribution. This suggests that when one has no idea whether the attributes for which they have annotations are related to possible biases, mitigation should not be conducted.

Figure 4. Noise in (A, Y) subgroup labels leads to a degradation in AUC for gDRO and resampling. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds.

5.3. The ability to recover the unbiased distribution is key to mitigation success

We next aim to explain the observed variation in performance across different groupings. Inspired by Zhou et al. (2021), we explore whether it is possible to recover the unbiased distribution by weighting the chosen subgroups. Our hypothesis is that the closer the weighted P_train, which we denote P^w_train, is to the test distribution P_unbiased, the better the model will perform on P_unbiased.
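For the resampling case this hypothesis can be made concrete with a short computation: the sketch below builds the weighted distribution P^w_train induced by balancing a chosen grouping and measures its KL divergence to P_unbiased. The cell ordering ((Y, S, A), A fastest) is our assumption, and note that the values reported in Table 3 are reproduced when P_unbiased is used as the first argument of the KL divergence.

```python
import numpy as np

p_train = np.array([0.95, 0.05, 0.8, 0.2, 0.05, 0.95, 0.2, 0.8]) / 4
p_unbiased = np.full(8, 1 / 8)

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def resample(p, groups):
    """Rescale each group's cells so every group carries equal total mass."""
    groups = np.asarray(groups)
    w = np.zeros_like(p)
    ids = np.unique(groups)
    for g in ids:
        m = groups == g
        w[m] = p[m] / p[m].sum() / len(ids)
    return w

cells = np.arange(8)
y, s, a = cells >> 2, (cells >> 1) & 1, cells & 1
ay_groups = 2 * a + y   # the four (A, Y) subgroups
s_groups = s            # the two S subgroups

print(round(kl(p_unbiased, p_train), 3))                       # 0.527
print(round(kl(p_unbiased, resample(p_train, ay_groups)), 3))  # 0.113
print(round(kl(p_unbiased, resample(p_train, s_groups)), 3))   # 0.527

# Balancing (A, Y) removes the spurious correlation: P(Y=0 | A=0) falls
# from 0.875 back to 0.5, whereas balancing S leaves p_train unchanged.
pw = resample(p_train, ay_groups)
assert np.isclose(pw[(y == 0) & (a == 0)].sum() / pw[a == 0].sum(), 0.5)
```

Groupings that cannot re-balance the (A, Y) cells, such as S alone, leave the divergence at its initial value, matching the 0.527 entries in Table 3.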
This reflects the broader consensus in the generalisation literature that aligning train and test distributions (e.g. through methods like data balancing) can reduce generalisation error in non-i.i.d. settings (Mansour et al., 2009; Dong et al., 2024; Wang et al., 2023). For example, Ben-David et al. (2010) show that for any hypothesis h, assuming the same labelling function in both source and target domains, the test error is bounded as follows:

$\mathrm{err}_{test}(h) \leq \mathrm{err}_{train}(h) + D_{TV}(P_{train}, P_{test}),$

where D_TV(·, ·) denotes the total variation divergence. We therefore choose to measure the divergence between P^w_train and P_unbiased for each subgrouping to see whether this is a predictor of performance on the unbiased distribution. We model the divergence as a Kullback-Leibler divergence, following prior work which gives upper bounds (Aminian et al., 2024; Masiha et al., 2021; Wu et al., 2024; Nguyen et al., 2022) and lower bounds (Masiha et al., 2021) for the expected generalisation error. In our case, Pinsker's inequality implies that the test error can be bounded as:

$\mathrm{err}_{unbiased}(h) \leq \mathrm{err}_{train}(h) + \sqrt{\tfrac{1}{2} \mathrm{KL}(P^{w}_{train} \,\|\, P_{unbiased})}.$

Since all our models reach a similarly low train error, we posit that the
differences in the upper bound are largely driven by the divergence between the two distributions¹. We explore whether the divergences achieved for each subgrouping correlate with generalisation error.

We assume that the difference between the two distributions is attributable to differences in the probabilities of sampling each (Y, S, A) subgroup. Thus we represent each P^w_train as a probability vector in R^8, as defined in Section 3.2. This yields an initial KL divergence KL(P_train ∥ P_unbiased) ≈ 0.527. We next explore whether, for each possible subgrouping, by either resampling or gDRO, this divergence can be reduced. For resampling, each subgroup in the chosen subgrouping is resampled such that the subgroups are uniformly distributed. For instance, resampling across (Y, S, A) would give P^w_train = [1/8, 1/8, ..., 1/8]. For gDRO, the distribution of the subgroups is learned during training by determining what weights to apply to each subgroup. We determine the theoretically optimal weights w by solving a convex optimisation problem:

$\min_{w \in \Delta_k} \mathrm{KL}(P^{w}_{train} \,\|\, P_{unbiased}),$

where Δ_k denotes the probability simplex over the selected k subgroups. We present the divergences obtained for all groupings in Table 3, with full explanations and calculations provided in Appendix E.

Table 3. Minimum KL(P^w_train ∥ P_unbiased) achievable by reweighting subgroups with gDRO and resampling.
  Grouping         gDRO     Resampling
  A                0.527    0.527
  Y                0.527    0.527
  S                0.527    0.527
  AY               0.113    0.113
  SY               0.527    0.527
  YSA              0.000    0.000
  SC/no-SC         0.113    0.113
  AY8              0.113    0.113
  SY8              0.527    0.527
  Random           0.527    0.527
  Noisy AY 0.01    0.113    0.113
  Noisy AY 0.05    0.113    0.114
  Noisy AY 0.10    0.113    0.116
  Noisy AY 0.25    0.114    0.131
  Noisy AY 0.50    0.118    0.189

¹ We would ideally like to directly estimate generalisation under distribution shift (instead of just having an upper bound), but this would require very strong assumptions (Chuang et al., 2020).

We observe a high correlation (between 0.73 and 0.97) between the minimum achievable divergence and the performance of each model across the four datasets, for resampling and gDRO respectively (Figures 5, F10). This aligns with our hypothesis that the extent to which the unbiased distribution can be restored during training significantly influences generalisation performance. Of course, other factors may still impact performance, such as inherent differences in task difficulty across subgroups (Petersen et al., 2023). We also note that we are only able to do this analysis by assuming that any divergence between distributions is fully attributable to differences in P(Y, S, A), and because we have full knowledge of how these distributions change at test time. These assumptions would rarely hold in practical settings. Despite this, our results suggest that assessing whether an unbiased distribution can be recovered provides a useful starting point for defining subgroups in bias mitigation. Thus, defining subgroups based on the cause of generalisation error may be more effective than simply defining subgroups based on observed disparities. These divergences also explain previous observations, such as the similarity of results when subgroup granularity increases (the KL divergence is also unchanged), robustness to noise (the KL divergences show little change), and the similarity between results
for gDRO and resampling (also similar divergences). Moreover, it is interesting to note that incorporating S into the (A, Y) groups (to obtain (Y, S, A)) is the optimal grouping, as we observe empirically for MNIST (Figure 2). Although S is not involved in the (A, Y) spurious correlation and is not a cause of poor generalisation performance, simply reweighting the four (A, Y) groups induces an unwanted correlation between S and Y, since the (A, Y) groups are imbalanced with respect to S (Table 1).

5.4. Subgroup choice also impacts disparities

In addition to overall generalisation performance, we explore the impact of subgroup choice on disparities, specifically between S subgroups (Tables 4 and D9). Fairness results largely align with overall generalisation performance. We make the surprising observation that the best results in terms of disparities for a given grouping are not necessarily achieved by using that grouping in the bias mitigation process. For example, (A, Y) subgroups show far better worst-group accuracy and smaller accuracy gaps across S subgroups than when the S attribute is used for grouping (e.g. S, SY, SY8), for both datasets and methods. The minimum accuracy is up to 0.21 higher and the accuracy gap up to 10 times smaller. This is because the (A, Y) spurious correlation is causing the disparity in performance across S, and it can only be unlearned with certain groupings. This finding contradicts the general assumption that, in order to improve fairness with respect to a certain subgroup, that subgroup should be used in the bias mitigation process (Mehrabi et al., 2021). To clarify, we do not advocate using different subgroups for evaluation, but rather that alternative subgroup definitions in mitigation may better improve fairness.
Subgroups Matter for Robust Bias Mitigation

Figure 5. The ability to recover the unbiased test distribution (Punbiased) is a significant predictor of overall generalisation performance on Punbiased for gDRO and resampling. Each dot represents mean AUC on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds. Pearson's correlation coefficients and associated p-values are also shown. [Panels: CelebA gDRO (R = -0.825, p < 0.001) and CelebA Resampling (R = -0.930, p < 0.001); x-axis: KL divergence to unbiased; y-axis: AUC on unbiased.]

Table 4. Worst group accuracy and accuracy gap across S groups, averaged for gDRO and resampling. While some groupings lead to improved fairness relative to the baseline (green), others are detrimental (red). Most groupings involving S have the opposite of their intended effect and decrease fairness with respect to S. We report mean and standard deviation across three random seeds.

Grouping        min Acc.         Acc. gap
baseline        0.705 ± 0.025    0.064 ± 0.045
A               0.696 ± 0.019    0.074 ± 0.034
Y               0.696 ± 0.022    0.077 ± 0.037
S               0.694 ± 0.024    0.072 ± 0.039
AY              0.779 ± 0.02     0.032 ± 0.03
SY              0.699 ± 0.016    0.074 ± 0.026
YSA             0.792 ± 0.013    0.02 ± 0.022
SC/no-SC        0.772 ± 0.02     0.04 ± 0.03
AY8             0.778 ± 0.022    0.041 ± 0.032
SY8             0.7 ± 0.021      0.072 ± 0.032
Random          0.693 ± 0.02     0.075 ± 0.035
Noisy AY 0.50   0.741 ± 0.016    0.056 ± 0.029

5.5. Subgroup choice is similarly impactful in other settings

Finally, we conduct various additional experiments to verify that our results hold in less restrictive settings. We implement Just-train-twice
(JTT), a method which does not require subgroup labels at training but still requires some for model selection, and find that, again, its success is dependent on subgroup choice (Appendix C: Table C8 and Figure C9). We also repeat the MNIST experiments in a setting where the spurious correlation is weaker, such that overall there are 77.5% spuriously correlated samples (instead of 87.5%). We find that while all results are higher overall, the same trends still appear, as shown in Figure G11. We also explore whether the relatively small size of the datasets we use (as we are constrained by the availability of each (Y, S, A) combination) might impact our findings by comparing our results on the full 60K MNIST dataset to a downsampled 7K version. We find largely similar trends, with an example shown in Figure G12.

5.6. Limitations and future work

This work is limited to a specific setting of bias, i.e. spurious correlations, but many other types exist, including bias caused by differences in the manifestation of Y across subgroups, differences in the annotation of Y across subgroups, and under-representation of certain subgroups. We hope that our research provides a solid foundation for further exploration into how subgroups should be defined for bias mitigation across a wide range of real bias settings. Another limitation is that, in practice, outside of (semi-)synthetic scenarios like ours, one often lacks extra annotations on other attributes and is not perfectly aware of what might be causing underperformance. While we advocate for data collectors to gather as much metadata as possible to enable precise analyses, we recognise that this is not always feasible. Therefore, we encourage practitioners to carefully analyse their models' errors and thoughtfully investigate potential causes of underperformance before implementing bias mitigation strategies for specific subgroups.

6.
Conclusions

To our knowledge, this is the first work to specifically and comprehensively consider how subgroup definition can impact existing bias mitigation methods. We demonstrate the extent to which certain subgroup definitions can "make or break" bias mitigation methods, and provide an explanation as to why this occurs. We urge practitioners to carefully consider possible causes of bias rather than indiscriminately applying bias mitigation techniques to any underperforming group. Our work enables more consistent and effective bias mitigation in real-world applications.

Acknowledgements

A.A. is supported by the EPSRC grant number EP/S024093/1, the Centre for Doctoral Training SABS:R3, University of Oxford, and by GE Healthcare. C.J. is supported by Microsoft Research, EPSRC, and The Alan Turing Institute through a Microsoft PhD scholarship and a Turing PhD enrichment award. B.G. received support from the Royal Academy of Engineering as part of his Kheiron/RAEng Research Chair. The computational aspects of this research were supported by the Wellcome Trust Core Award Grant Number 203141/Z/16/Z and the NIHR Oxford BRC. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. The authors would also like to thank the reviewers of this paper whose comments contributed
to substantially improving the work.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

Ahn, S., Kim, S., and Yun, S.-Y. Mitigating dataset bias by using per-sample gradient. In NeurIPS 2022 Workshop on Distribution Shifts: Connecting Methods and Applications, 2022. URL https://openreview.net/forum?id=ihlU5X4SEE.

Alloula, A., Mustafa, R., McGowan, D. R., and Papież, B. W. On biases in a UK Biobank-based retinal image classification model. In Puyol-Antón, E., Zamzmi, G., Feragen, A., King, A. P., Cheplygina, V., Ganz-Benjaminsen, M., Ferrante, E., Glocker, B., Petersen, E., Baxter, J. S. H., Rekik, I., and Eagleson, R. (eds.), Ethics and Fairness in Medical Imaging, pp. 140–150, Cham, 2024. Springer Nature Switzerland. ISBN 978-3-031-72787-0.

Aminian, G., Masiha, S., Toni, L., and Rodrigues, M. R. D. Learning algorithm generalization error bounds via auxiliary distributions, 2024. URL https://arxiv.org/abs/2210.00483.

Anthony, H. and Kamnitsas, K. On the use of Mahalanobis distance for out-of-distribution detection with neural networks for medical imaging. In Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, pp. 136–146. Springer Nature Switzerland, 2023. doi: 10.1007/978-3-031-44336-7_14. URL https://doi.org/10.1007%2F978-3-031-44336-7_14.

Awasthi, P., Kleindessner, M., and Morgenstern, J. Equalized odds postprocessing under imperfect group information. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, pp. 1770–1780. PMLR, June 2020.

Bayasi, N., Fayyad, J., Bissoto, A., Hamarneh, G., and Garbi, R. BiasPruner: Debiased Continual Learning for Medical Image Classification. In MICCAI, 2024.

Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J.
A theory of learning from different domains. Machine Learning, 79:151–175, 2010. URL http://www.springerlink.com/content/q6qk230685577n52/.

Borkan, D., Dixon, L., Sorensen, J., Thain, N., and Vasserman, L. Nuanced metrics for measuring unintended bias with real data for text classification. CoRR, abs/1903.04561, 2019. URL http://arxiv.org/abs/1903.04561.

Buolamwini, J. and Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR, January 2018.

Chen, Z., Zhang, J. M., Sarro, F., and Harman, M. A comprehensive empirical study of bias mitigation methods for machine learning classifiers. ACM Trans. Softw. Eng. Methodol., 32(4), May 2023. ISSN 1049-331X. doi: 10.1145/3583561. URL https://doi.org/10.1145/3583561.

Chuang, C.-Y., Torralba, A., and Jegelka, S. Estimating generalization under distribution shifts via domain-invariant representations. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.

Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Dong, Y., Gong, T., Chen, H., Song, S., Zhang, W., and Li,
C. How does distribution matching help domain generalization: An information-theoretic analysis, 2024. URL https://arxiv.org/abs/2406.09745.

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, pp. 214–226, New York, NY, USA, January 2012. Association for Computing Machinery. ISBN 978-1-4503-1115-1. doi: 10.1145/2090236.2090255.

Ganin, Y. and Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1180–1189, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/ganin15.html.

Ghosal, S. S. and Li, Y. Distributionally Robust Optimization with Probabilistic Group. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10):11809–11817, June 2023. ISSN 2374-3468. doi: 10.1609/aaai.v37i10.26394.

Han, Y. and Zou, D. Improving group robustness on spurious correlation requires preciser group inference. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=KycvgOCBBR.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Hebert-Johnson, U., Kim, M., Reingold, O., and Rothblum, G. Multicalibration: Calibration for the (computationally-identifiable) masses. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1939–1948. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/hebert-johnson18a.html.

Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q.
Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269, 2017. doi: 10.1109/CVPR.2017.243.

Idrissi, B. Y., Arjovsky, M., Pezeshki, M., and Lopez-Paz, D. Simple data balancing achieves competitive worst-group-accuracy. In Proceedings of the First Conference on Causal Learning and Reasoning, pp. 336–351. PMLR, June 2022.

Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., Seekins, J., Mong, D. A., Halabi, S. S., Sandberg, J. K., Jones, R., Larson, D. B., Langlotz, C. P., Patel, B. N., Lungren, M. P., and Ng, A. Y. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai.v33i01.3301590. URL https://doi.org/10.1609/aaai.v33i01.3301590.

Izmailov, P., Kirichenko, P., Gruver, N., and Wilson, A. G. On feature learning in the presence of spurious correlations. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.

Jain, S., Hamidieh, K., Georgiev, K., Ilyas, A., Ghassemi, M., and Madry, A. Improving subgroup robustness
via data selection. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., and Zhang, C. (eds.), Advances in Neural Information Processing Systems, volume 37, pp. 94490–94511. Curran Associates, Inc., 2024.

Jones, C., Castro, D. C., De Sousa Ribeiro, F., Oktay, O., McCradden, M., and Glocker, B. A causal perspective on dataset bias in machine learning for medical imaging. Nature Machine Intelligence, 6(2):138–146, February 2024. ISSN 2522-5839. doi: 10.1038/s42256-024-00797-8.

Jones, C., Ribeiro, F. d. S., Roschewitz, M., Castro, D. C., and Glocker, B. Rethinking fair representation learning for performance-sensitive tasks. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=pBZntPrdrI.

Kearns, M., Neel, S., Roth, A., and Wu, Z. S. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. In Proceedings of the 35th International Conference on Machine Learning, pp. 2564–2572. PMLR, July 2018.

Kim, N., Kang, J., Ahn, S., Ok, J., and Kwak, S. Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization. In Proceedings of the 41st International Conference on Machine Learning, pp. 24040–24058. PMLR, July 2024.

Kirichenko, P., Izmailov, P., and Wilson, A. G. Last layer re-training is sufficient for robustness to spurious correlations. In ICLR, 2023. URL https://arxiv.org/abs/2204.02937.

Krishnakumar, A., Prabhu, V., Sudhakar, S., and Hoffman, J. UDIS: Unsupervised discovery of bias in deep visual recognition models. In BMVC, pp. 143, 2021. URL https://www.bmvc2021-virtualconference.com/assets/papers/0362.pdf.

Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. ISSN 1558-2256. doi: 10.1109/5.726791.
Li, Z., Evtimov, I., Gordo, A., Hazirbas, C., Hassner, T., Ferrer, C. C., Xu, C., and Ibrahim, M. A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 20071–20082, June 2023. doi: 10.1109/CVPR52729.2023.01922.

Liu, E. Z., Haghgoo, B., Chen, A. S., Raghunathan, A., Koh, P. W., Sagawa, S., Liang, P., and Finn, C. Just train twice: Improving group robustness without training group information. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 6781–6792. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/liu21f.html.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015a.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pp. 3730–3738, USA, 2015b. IEEE Computer Society. ISBN 9781467383912. doi: 10.1109/ICCV.2015.425. URL https://doi.org/10.1109/ICCV.2015.425.

Mansour, Y., Mohri, M., and Rostamizadeh, A. Domain adaptation: Learning bounds and algorithms. CoRR, abs/0902.3430, 2009. URL http://arxiv.org/abs/0902.3430.

Marani, B.-E., Hanini, M., Malayarukil, N., Christodoulidis, S., Vakalopoulou, M., and Ferrante, E.
ViG-Bias: Visually Grounded Bias Discovery and Mitigation, pp. 414–429. Springer Nature Switzerland, November 2024. ISBN 9783031732027. doi: 10.1007/978-3-031-73202-7_24. URL http://dx.doi.org/10.1007/978-3-031-73202-7_24.

Masiha, M. S., Gohari, A., Yassaee, M. H., and Aref, M. R. Learning under distribution mismatch and model misspecification. CoRR, abs/2102.05695, 2021. URL https://arxiv.org/abs/2102.05695.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv., 54(6), July 2021. ISSN 0360-0300. doi: 10.1145/3457607. URL https://doi.org/10.1145/3457607.

Movva, R., Shanmugam, D., Hou, K., Pathak, P., Guttag, J., Garg, N., and Pierson, E. Coarse race data conceals disparities in clinical risk score performance. In Machine Learning for Healthcare Conference, pp. 443–472. PMLR, 2023.

Nguyen, A. T., Tran, T., Gal, Y., Torr, P. H. S., and Baydin, A. G. KL guided domain adaptation, 2022. URL https://arxiv.org/abs/2106.07780.

Olesen, V., Weng, N., Feragen, A., and Petersen, E. Slicing Through Bias: Explaining Performance Gaps in Medical Image Analysis Using Slice Discovery Methods, pp. 3–13. Springer Nature Switzerland, October 2024. ISBN 9783031727870. doi: 10.1007/978-3-031-72787-0_1. URL http://dx.doi.org/10.1007/978-3-031-72787-0_1.

Park, G. Y., Jung, C., Lee, S., Ye, J. C., and Lee, S. W. Self-supervised debiasing using low rank regularization. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. doi: 10.48550/ARXIV.2210.05248.

Petersen, E., Holm, S., Ganz, M., and Feragen, A. The path toward equal performance in medical machine learning. Patterns, 4(7):100790, 2023. ISSN 2666-3899. doi: https://doi.org/10.1016/j.patter.2023.100790. URL https://www.sciencedirect.com/science/article/pii/S2666389923001459.

Pezeshki, M., Kaba, S.-O., Bengio, Y., Courville, A., Precup, D., and Lajoie, G.
Gradient Starvation: A Learning Proclivity in Neural Networks, November 2021.

Pezeshki, M., Bouchacourt, D., Ibrahim, M., Ballas, N., Vincent, P., and Lopez-Paz, D. Discovering environments with XRM. In ICML, 2024. URL https://openreview.net/forum?id=IhWtRwIbos.

Ricci Lara, M. A., Echeveste, R., and Ferrante, E. Addressing fairness in artificial intelligence for medical imaging. Nature Communications, 13(1):4581, August 2022. ISSN 2041-1723. doi: 10.1038/s41467-022-32186-3.

Sagawa*, S., Koh*, P. W., Hashimoto, T. B., and Liang, P. Distributionally robust neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=ryxGuJrFvS.

Schrouff, J., Bellot, A., Rannen-Triki, A., Malek, A., Albuquerque, I., Gretton, A., D'Amour, A. N., and Chiappa, S. Mind the graph when balancing data for fairness or robustness. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=LQR22jM5l3.

Shrestha, R., Kafle, K., and Kanan, C. An Investigation of Critical Issues in Bias Mitigation Techniques. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2512–2523, Waikoloa, HI, USA, January 2022. IEEE. ISBN 978-1-66540-915-5. doi: 10.1109/WACV51458.2022.00257.

Stromberg, N., Ayyagari, R., Welfert, M., Koyejo, S., Nock, R., and Sankar, L. For robust worst-group accuracy, ignore group annotations. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=l8E68fD6yp.

Wang, S., Guo, W., Narasimhan, H., Cotter, A., Gupta, M., and Jordan, M. Robust Optimization for
Fairness with Noisy Protected Groups. In Advances in Neural Information Processing Systems, volume 33, pp. 5190–5203. Curran Associates, Inc., 2020a.

Wang, X., Saxon, M., Li, J., Zhang, H., Zhang, K., and Wang, W. Y. Causal balancing for domain generalization. In ICLR, 2023. URL https://arxiv.org/abs/2206.05263.

Wang, Z., Qinami, K., Karakozis, I. C., Genova, K., Nair, P., Hata, K., and Russakovsky, O. Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8916–8925, Seattle, WA, USA, June 2020b. IEEE. ISBN 978-1-72817-168-5. doi: 10.1109/CVPR42600.2020.00894.

Weng, N., Bigdeli, S., Petersen, E., and Feragen, A. Are sex-based physiological differences the cause of gender bias for chest x-ray diagnosis? In Clinical Image-Based Procedures, Fairness of AI in Medical Imaging, and Ethical and Philosophical Issues in Medical Imaging: 12th International Workshop, CLIP 2023, 1st International Workshop, FAIMI 2023, and 2nd International Workshop, EPIMI 2023, Vancouver, BC, Canada, October 8 and October 12, 2023, Proceedings, pp. 142–152, Berlin, Heidelberg, 2023. Springer-Verlag. ISBN 978-3-031-45248-2. doi: 10.1007/978-3-031-45249-9_14. URL https://doi.org/10.1007/978-3-031-45249-9_14.

Wu, X., Manton, J. H., Aickelin, U., and Zhu, J. On the generalization for transfer learning: An information-theoretic analysis, 2024. URL https://arxiv.org/abs/2207.05377.

Xu, G., Chen, Q., Ling, C., Wang, B., and Shui, C. Intersectional Unfairness Discovery. In Proceedings of the 41st International Conference on Machine Learning, pp. 54888–54917. PMLR, July 2024.

Zhao, H., Coston, A., Adel, T., and Gordon, G. J. Conditional learning of fair representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Hkekl0NFPr.

Zhou, C., Ma, X., Michel, P., and Neubig, G.
Examining and Combating Spurious Features under Distribution Shift. In Proceedings of the 38th International Conference on Machine Learning, pp. 12857–12867. PMLR, July 2021.

Zietlow, D., Lohaus, M., Balakrishnan, G., Kleindessner, M., Locatello, F., Scholkopf, B., and Russell, C. Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10400–10411, 2022. URL https://api.semanticscholar.org/CorpusID:247319023.

Zong, Y., Yang, Y., and Hospedales, T. MEDFAIR: Benchmarking fairness for medical imaging. In International Conference on Learning Representations (ICLR), 2023.

This appendix provides additional details and experiments that support the main text. It is structured as follows:

• A Supplementary experimental details on the datasets used, subgroups constructed, and model implementations.
• B Mitigation results for more bias mitigation methods: DomainInd and CFair.
• C Mitigation without subgroup labels (Just Train Twice).
• D Supplementary results for gDRO and resampling.
• E Explaining results through the divergence between P^w_train and P_unbiased.
• F Correlation between KL divergence to the unbiased distribution and unbiased generalisation across all four datasets.
• G Ablations on strength of SC and size of dataset.

A. Supplementary experimental details

A.1. Dataset details

Table 5. Details on the datasets used for mitigation experiments (MNIST | CheXPert | CelebA | Civil Comments).
Y: Even/odd digit | Pleural effusion | Blonde hair | Toxicity
A: Background colour | Presence of a pacemaker | Perceived gender | Gender
S: Foreground colour | Sex | Smiling | Religion
Dataset size: 60000 | 3225 | 12500 | 8900

We downsample some of the datasets from their original size because we are constrained by the availability of each (Y, S, A) combination. For example, for CheXPert, pacemaker annotations are only available for 4862 images, and we have to further downsample the dataset to make it balanced with respect to disease (Y) and sex (S).

A.2. Subgroup construction

Figure 6. Subgroup construction for our experiments. [The original figure visualises how Ptrain is partitioned by S = 0/1, Y = 0/1, and A = 0/1, and where noise is added to some groupings.]

We show the subgroups used for each method and a visualisation of some example subgroups in Figure 6. For model-based methods (DomainInd and CFair), we do not use Y to construct subgroups, because for these methods to work best, each subgroup should contain both positive and negative classes. This is because methods like DomainInd and CFair learn representations for each subgroup separately. DomainInd trains a separate classifier for each subgroup, so it would not make sense to train a separate classification head for positive and negative classes. Similarly, CFair seeks to align subgroup representations, so it would not make sense to align representations of one subgroup containing only positive images to another subgroup containing only negative images, as this would defeat the point of training a discriminative classifier. On the other hand, for reweighting-based methods, including Y in the subgroups helps to balance the final reweighted dataset with respect to class, and therefore improves results, especially in our case where the spurious correlation involves the class Y. This explains why we find that the subgroups which work well for DomainInd and CFair (e.g. A) are just a merged version of the ones which work well for gDRO and resampling (e.g. (A, Y)).
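As a concrete sketch, per-sample (y, s, a) annotations can be mapped to the subgroup IDs used by the different methods. The helper name and ID conventions below are ours; the SC/no-SC rule assumes the atom ordering used in Appendix E (index = 4Y + 2S + A), under which the spuriously correlated atoms are exactly those with Y = A.

```python
def subgroup_id(y, s, a, grouping):
    """Map one sample's binary (y, s, a) annotations to a subgroup ID."""
    if grouping == "A":          # model-based methods (DomainInd, CFair)
        return a
    if grouping == "S":
        return s
    if grouping == "AY":         # reweighting methods (gDRO, resampling)
        return 2 * y + a
    if grouping == "SY":
        return 2 * y + s
    if grouping == "YSA":        # fully intersectional: 8 groups
        return 4 * y + 2 * s + a
    if grouping == "SC/no-SC":   # 1 iff A takes the value correlated with Y
        return int(y == a)
    raise ValueError(f"unknown grouping: {grouping}")
```

For instance, under the AY grouping a sample with y = 1, a = 1 falls in group 3 regardless of s, whereas under YSA every (y, s, a) combination gets its own group.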
To the best of our knowledge, no papers have explicitly discussed this distinction despite its practical importance.

A.3. Implementation details

Table 7. Implementation details for all models (MNIST | CXP | CelebA | Civil Comments).
Backbone: 2-layer CNN | DenseNet121 (Huang et al., 2017) | ResNet50 (He et al., 2016) | BERTClassifier (uncased) (Devlin et al., 2018)
Pre-training: None | ImageNet (Deng et al., 2009) | ImageNet (Deng et al., 2009) | BookCorpus, Wikipedia (English)
Batch size: 128 | 256 | 256 | 32
Image size: 3x28x28 | 3x299x299 | 3x256x256 | NA
Augmentation: Flip, rotation, Gaussian blur | Flip, rotation, color jitter, affine transformation, crop | Flip, rotation, color jitter, affine transformation, crop | None
Optimiser: Adam | Adam | Adam | AdamW
Loss: Binary cross-entropy (all)
Learning rate: 0.001 | 0.0005 | 0.001 | 0.00005
Learning scheduler: StepLR (γ = 0.1 and µ = 10) for all
Weight decay: 0.0001 (all)
Max epochs: 50 | 100 (early stopping after 10) | 10 (early stopping after 5) | 10 (early stopping after 5)

We conducted hyperparameter tuning on the baseline ERM model within the ranges below, and selected the model with the highest validation AUC. The choice of backbones was based
on their strong performance in previous similar work (Irvin et al., 2019; Jain et al., 2024; Izmailov et al., 2022; Kirichenko et al., 2023; Idrissi et al., 2022).

• Backbones for vision models: ResNet18, ResNet50, DenseNet121 (not for MNIST images)
• Batch size: 32, 64, 128, 256, 512
• Learning rate: [1e-5:1e-3]
• Weight decay: [1e-5:1e-4]

We also specify additional hyperparameters for the mitigation methods: a step size of 0.01 and a size adjustment factor of 1 was used for gDRO, and a µ coefficient of 0.1 was used for the adversarial loss of CFair, following the MEDFAIR implementation (Zong et al., 2023).

B. Results for more mitigation methods: DomainInd and CFair

Overall, CFair and DomainInd show less improvement on the unbiased test set than reweighting-based methods (Figure 7). Despite this, we still observe similar trends as for the reweighting-based methods, such as S-related groups being clearly detrimental to performance. Learning independent models for A groups also boosts performance for DomainInd in CXP, CelebA, and Civil Comments, and while it does not significantly change performance relative to the baseline in MNIST, it is still higher than for any other grouping. Moreover, as shown in Figure 8, DomainInd appears sensitive to even low levels of noise, while CFair's performance is less degraded by noise in the A subgroup labels (except in MNIST, where both methods are ineffective, as performance stays close to the baseline for all subgroupings).

[Figure 7 compares the groupings A_4, AS, A, Noisy_A_0.5, S_4, S, Random, and SC/no-SC; the y-axis shows AUC relative to baseline for DomainInd and CFair on MNIST, CXP, CelebA, and CivilComments.]

Figure 7. Relative performance on Punbiased for different groupings in DomainInd and CFair across all four datasets.
Similar trends to gDRO and resampling can be seen, where subgroupings constructed around A generally improve performance as they prevent the SC from being learnt, while other subgroups are generally detrimental. Error bars indicate the standard deviation across 3 random seeds.

Figure 8. Effect of noise in A subgroup labels on AUC for DomainInd and CFair. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds.

C. Subgroup discovery methods

We implement Just Train Twice (JTT) as proposed by Liu et al. (2021). JTT consists of a two-stage process, where first a standard ERM model is trained for several epochs, and then a second model is trained that upweights the training examples the first model misclassified. Although it does not require subgroup labels for training, subgroup labels do need to be used to select a final model (e.g. based on the JTT-specific hyperparameters). As shown in Table 8 and Figure 9, we find that with validation subgroup labels to guide model and hyperparameter selection, JTT performs mostly on par with our other methods; however, performance is again highly dependent on the choice of subgroups. When no subgroup annotations are used (i.e. model selection is done by
overall validation accuracy), the method does not improve over ERM (except on MNIST, where JTT works remarkably effectively, most likely due to the simplicity of the task).

Table 8. Just Train Twice generalisation performance on the unbiased test set across the four datasets is highly variable depending on the validation set subgroups used for model/hyperparameter selection. We colour the experiments which improve over the baseline (no mitigation) in green and those that do not in red. We report mean AUC and standard deviation across three random seeds.

Subgroup           MNIST            CXP              CelebA           Civil Comments
Baseline           0.792 ± 0.057    0.740 ± 0.002    0.943 ± 0.002    0.805 ± 0.017
SY                 0.89 ± 0.002     0.791 ± 0.009    0.947 ± 0.005    0.786 ± 0.028
AY                 0.925 ± 0.019    0.791 ± 0.009    0.943 ± 0.012    0.831 ± 0.004
A                  0.89 ± 0.002     0.695 ± 0.018    0.948 ± 0.003    0.786 ± 0.011
AY8                0.919 ± 0.012    0.791 ± 0.009    0.943 ± 0.012    0.812 ± 0.021
S                  0.89 ± 0.002     0.791 ± 0.009    0.948 ± 0.003    0.786 ± 0.028
SY8                0.89 ± 0.002     0.734 ± 0.035    0.943 ± 0.012    0.786 ± 0.028
Y                  0.89 ± 0.002     0.734 ± 0.035    0.947 ± 0.005    0.776 ± 0.019
Noisy AY 0.5       0.919 ± 0.012    0.705 ± 0.042    0.944 ± 0.004    0.831 ± 0.004
Random             0.925 ± 0.019    0.734 ± 0.035    0.948 ± 0.003    0.786 ± 0.028
SC/no-SC           0.936 ± 0.009    0.791 ± 0.009    0.943 ± 0.012    0.831 ± 0.004
YSA                0.922 ± 0.006    0.682 ± 0.034    0.943 ± 0.012    0.831 ± 0.004
No val subgroups   0.925 ± 0.019    0.734 ± 0.035    0.948 ± 0.003    0.786 ± 0.028

[Figure 9 compares the groupings YSA, AY, AY_8, SC/no-SC, Noisy_AY_0.5, SY_8, SY, S, A, Y, and Random; the y-axis shows AUC relative to baseline for gDRO, resampling, and JTT on MNIST, CXP, CelebA, and CivilComments.]

Figure 9. Performance on the unbiased test set in gDRO, resampling, and JTT is highly dependent on the subgroups used. Bars represent overall change in AUC relative to the ERM baseline, with error bars indicating the standard deviation across 3 random seeds.

D.
Supplementary results for resampling and gDRO 18 Subgroups Matter for Robust Bias Mitigation E. KL divergence between Pw trainandPunbiased E.1. Method overview Our objective is to measure the minimum KL divergence which can be achieved to Punbiased by partitioning Ptraininto subgroups and re-weighting these subgroups. We define each distribution as probability vectors in R8with each element corresponding to the probability of sampling one(Y, S, A )subgroup. The unbiased distribution is defined to be uniform, i.e., Punbiased =1 8,1 8, . . . ,1 8 , while Ptrain= [0.95 4,0.05 4,0.8 4,0.2 4,0.05 4,0.95 4,0.2 4,0.8 4]. Initially, KL(Ptrain∥ P unbiased )≈0.527. Our aim is to see whether different subgroupings can reduce this divergence. LetG={G1, . . . , G k}be a partition of the 8 atomic subgroups into kdisjoint groups. For a set of weights w= [w1, . . . , w k]∈∆kover these groups, we define a new weighted distribution Pw train∈R8as follows: Pw train[j]
|
https://arxiv.org/abs/2505.21363v1
|
=wi·Ptrain[j]P l∈GiPtrain[l]forj∈Gi. Let the atomic subgroup indices correspond to (Y, S, A )combinations in order [0,1,2,3,4,5,6,7]. For the subgroups we constructed, we therefore have: •Y:{{0,1,2,3},{4,5,6,7}} •A:{{0,2,4,6},{1,3,5,7}} •S:{{0,1,4,5},{2,3,6,7}} •(A, Y):{{0,2},{1,3},{4,6},{5,7}} •(S, Y):{{0,1},{2,3},{4,5},{6,7}} •(Y, S, A ):{0},{1},{2},{3},{4},{5},{6},{7} • (SC,no-SC): {{0,2,5,7},{1,3,4,6}} • Random: {{0,1,2,3,4,5,6,7}} •AY8:{{0,2},{0,2},{1,3},{1,3},{4,6},{4,6},{5,7},{5,7}} •SY8:{{0,1},{0,1},{2,3},{2,3},{4,5},{4,5},{6,7},{6,7}} For the NoisyAYsubgroups, we have the same construction as the (A, Y)subgroups, except that bfraction of the (A, Y) subgroup annotations are misannotated following the original Ptrain distribution, while the remaining (1−b)are consistent with the original (A, Y)subgroups. Figure 6 provides an illustration of some of these subgroups. E.2. Resampling For resampling, these weights are uniformly distributed such that w= [1/k, ..., 1/k]∈∆k. We can therefore calculate Pw train)for each grouping keeping the relative proportions of the (Y, S, A )combinations constant within a group. For example, for Y, the two groups have probabilities which sum to PG1=PG2=0.95 4+0.05 4+0.8 4+0.2 4=1 2, so the relative proportions within the two subgroups are [0.95 4 PG1,0.05 4 PG1,0.8 4 PG1,0.2 4 PG1,0.05 4 PG2,0.95 4 PG2,0.2 4 PG2,0.8 4 PG2,]. By multiplying these probabilities by w= [1 2,1 2], we get Pw train=Ptrain, soKL(Pw train∥ P unbiased )≈0.527. We proceed in this way for all subgroups, and obtain the divergences detailed in Table 3. 19 Subgroups Matter for Robust Bias Mitigation E.3. gDRO For gDRO, the groups are the same but the weights are learned during training. Therefore we determine the weights which could in theory be achieved to give the lowest KL divergence2. To do this, we reframe the problem as a convex optimisation problem where we minimize the following objective: min w∈∆kKL Pw train∥ P unbiased subject to wi>0,kX i=1wi= 1. 
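The re-weighting and divergence calculations above can be reproduced in a few lines of NumPy/SciPy. This is a self-contained sketch, not the paper's released code; note that the reported values (≈ 0.527 for the baseline, ≈ 0.113 for the optimal (A, Y) weighting) appear to correspond to evaluating `rel_entr` with P_unbiased as the first argument.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import rel_entr

P_train = np.array([0.95, 0.05, 0.80, 0.20, 0.05, 0.95, 0.20, 0.80]) / 4
P_unbiased = np.full(8, 1 / 8)

def reweight(groups, w):
    """Build P^w_train: within group G_i, scale the relative proportions by w_i."""
    p = np.zeros(8)
    for G_i, w_i in zip(groups, w):
        idx = list(G_i)
        p[idx] = w_i * P_train[idx] / P_train[idx].sum()
    return p

def kl(p, q):
    return rel_entr(p, q).sum()

# Baseline divergence between the train and unbiased distributions.
print(round(kl(P_unbiased, P_train), 3))  # 0.527

# Resampling with (A, Y) subgroups: uniform weights w = [1/4, ..., 1/4].
AY = [(0, 2), (1, 3), (4, 6), (5, 7)]
p_resample = reweight(AY, [0.25] * 4)
print(np.round(p_resample, 3))  # [0.136 0.05  0.114 0.2   0.05  0.136 0.2   0.114]
print(kl(P_unbiased, p_resample))  # reported in the paper as ≈ 0.113

# gDRO: search for the weight vector minimising the divergence (Appendix E.3).
res = minimize(
    lambda w: kl(P_unbiased, reweight(AY, w)),
    x0=np.full(4, 0.25),
    method="SLSQP",
    bounds=[(1e-9, 1.0)] * 4,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
)
print(np.round(res.x, 3))  # for (A, Y), uniform weights are already optimal
```

For the (A, Y) grouping the optimiser recovers the resampling weights, consistent with the observation below that the optimal gDRO weights often coincide with those used for resampling.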
For each subgrouping, we calculate the relative probabilities within a subgroup (as for resampling) and then use scipy.optimize.minimize and scipy.special.rel_entr to determine the optimal weight vector subject to the constraints above. For example, for the (A, Y) subgroups, P_G1 = P_G4 = 0.95/4 + 0.8/4 = 1.75/4 and P_G2 = P_G3 = 0.05/4 + 0.2/4 = 0.25/4, so the relative proportions within the subgroups are [0.95/(4·P_G1), 0.05/(4·P_G2), 0.8/(4·P_G1), 0.2/(4·P_G2), 0.05/(4·P_G3), 0.95/(4·P_G4), 0.2/(4·P_G3), 0.8/(4·P_G4)]. Minimisation gives w = [1/4, 1/4, 1/4, 1/4], yielding P^w_train = [0.136, 0.050, 0.114, 0.200, 0.050, 0.136, 0.200, 0.114] and KL(P^w_train ∥ P_unbiased) ≈ 0.113. Complete results for all groupings are presented in Table 3. We often find that these weights correspond to those used for resampling.

²We note that these weights may not necessarily be attained in practice by all gDRO models, because they are not specifically trained with this objective (although in our setting, minimising the KL divergence to the unbiased test set should be a reasonable proxy for minimising worst-group loss). Stochasticity in training, optimisation challenges, and inherent differences in task difficulty across subgroups may also affect the attainment of this optimum. Despite this, we believe this calculation still provides an important indication of the potential effectiveness of a chosen subgrouping.

F. Correlation between KL divergence and unbiased generalisation

Figure 10. Test AUC is highly correlated with the minimum achievable KL divergence between P^w_train and P_unbiased across all four datasets for gDRO and resampling in
CXP. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds.

G. Various ablations

G.1. MNIST results with a weaker spurious correlation

To verify that our results still hold in settings where the spurious correlation is weaker, we re-generate the MNIST dataset in exactly the same way, except that P(Y=0, A=0 | S=0) = P(Y=1, A=1 | S=0) = 0.85 and P(Y=0, A=0 | S=1) = P(Y=1, A=1 | S=1) = 0.70, such that overall there are 77.5% spuriously correlated samples instead of 87.5%. We repeat the same experiments and find that, while all results are higher overall, the same trends still appear. Notably, we identify a significant correlation between the minimum achievable KL divergence to P_unbiased and the overall performance on P_unbiased, as shown in Figure 11 (R = −0.967 for gDRO and R = −0.987 for resampling, both p < 0.001). This suggests that subgroup choice is an important factor in less extreme settings of bias as well.

Figure 11. Relationship between AUC and the minimum achievable KL divergence to the unbiased test distribution (P_unbiased) for gDRO and resampling in MNIST with a weaker spurious correlation. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds.

G.2. MNIST experiments with a smaller dataset

Figure 12. Relationship between AUC and the minimum achievable KL divergence to the unbiased test distribution (P_unbiased) for gDRO and resampling in MNIST with a downsampled dataset.
Trends appear similar across both dataset sizes, suggesting that the results on the other three datasets would hold had we been able to use a larger subgroup-annotated dataset. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds.
arXiv:2505.21364v1 [cs.LG] 27 May 2025

Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders

James Oldfield^{m,q,*}, Shawn Im^m, Yixuan Li^m, Mihalis A. Nicolaou^c, Ioannis Patras^q, Grigorios G Chrysos^m
^m University of Wisconsin–Madison   ^q Queen Mary University of London   ^c The Cyprus Institute

Abstract

Multilayer perceptrons (MLPs) are an integral part of large language models, yet their dense representations render them difficult to understand, edit, and steer. Recent methods learn interpretable approximations via neuron-level sparsity, yet fail to faithfully reconstruct the original mapping, significantly increasing the model's next-token cross-entropy loss. In this paper, we advocate for moving to layer-level sparsity to overcome the accuracy trade-off in sparse layer approximation. Under this paradigm, we introduce Mixture of Decoders (MxDs). MxDs generalize MLPs and Gated Linear Units, expanding pre-trained dense layers into tens of thousands of specialized sublayers. Through a flexible form of tensor factorization, each sparsely activating MxD sublayer implements a linear transformation with full-rank weights, preserving the original decoders' expressive capacity even under heavy sparsity. Experimentally, we show that MxDs significantly outperform state-of-the-art methods (e.g., Transcoders) on the sparsity-accuracy frontier in language models with up to 3B parameters. Further evaluations on sparse probing and feature steering demonstrate that MxDs learn similarly specialized features of natural language, opening up a promising new avenue for designing interpretable yet faithful decompositions. Our code is included at: https://github.com/james-oldfield/MxD/.

*Corresponding author: j.a.oldfield@qmul.ac.uk. Work done whilst at UW-Madison.

Preprint. Under review.
1 Introduction

One strategy for addressing concerns about large language models' (LLMs) [1, 2, 3] behavior is a bottom-up approach to understanding and controlling the network internals: developing models of how and where human-interpretable features are represented in LLMs and how they affect the output [4, 5, 6]. Such a mechanistic understanding has proved helpful for a number of issues relating to safety and transparency, from controlling refusal of harmful requests [7] to detecting generation of unsafe code [6] and latent model knowledge [8].

However, developing models of LLMs' internals faces challenges due to the dense nature of their representations [9, 10]. Indeed, many studies have found that individual neurons in MLP layers encode multiple distinct concepts. Rather than human-interpretable features being neatly aligned with individual neurons, they are often distributed across many [11, 12]. As a result, it is not straightforward to cleanly isolate specific concepts of interest in the models' latent token representations.

Traditionally, imposing constraints on model form has offered a way to instill more predictable properties or structure. Indeed, there is a rich history of success with constraints in machine learning: from parts-based representations through non-negativity [13, 14], to structure through low-rankness or assumptions on geometry [15, 16]. With the particular issues posed by dense representations in LLMs, specialization through sparsity has re-emerged as a dominant strategy for learning more interpretable representations.

Figure 1: Units of specialization for sparse layer variants: neuron-level sparsity of existing sparse MLPs [27, 26] (center) vs layer-level sparsity (right), which the proposed Mixture of Decoders (MxD) layer enables at scale. For GPT2-124M, the dimensions are: O = 768, H* = O·4, N ≈ O·32.
With prior work showing that sparser models both aid human explanation [17] and achieve higher
scores on LLM-based auto-interpretability metrics [18, 19], sparsity is often used as a proxy for interpretability [20, 21]. To this end, many recent works, such as sparse autoencoders [22, 23, 6], take inspiration from traditional sparse dictionary learning methodologies [24, 25], re-writing pre-trained LLMs' activations as sparse, non-negative linear combinations of atoms in a learned overcomplete basis. However, as argued in [26], such approaches do not learn the functional mechanisms of LLMs' layers, and their inherently post-hoc nature demands additional parameters and computation on top of the base models.

One alternative approach is to directly replace layers with more interpretable equivalents [28], such as wide MLPs with sparsity constraints. Transcoders [27, 29, 30, 26] (TCs) are a recent example of this, training new MLPs to mimic the functional behavior of the original MLPs with sparse hidden units, which have recently been shown to also learn more interpretable features [26]. Thus, instead of relying on external post-hoc analysis, sparse MLP layers offer a way to distill specialized features directly into the model's forward pass itself.

Both of the above methods for learning specialized features fall into the same category of what one may call 'neuron-level sparsity'. Dictionary learning methods restrict the number of non-zero elements used from a learned dictionary, whilst sparse MLPs [27] limit the number of active rows used from a learned 'decoder' matrix. At its core, whilst this constraint is useful for interpretability, it is too restrictive: it often heavily trades off accuracy for sparsity, poorly reconstructing the original model components [31, 28].
We argue that preserving the base models' performance is a crucial component of sparse MLP layer approximations for the following two key reasons:

1. Model faithfulness: sparse layers that poorly approximate the original layers risk missing critical intricacies of the base models' behavior or latent features [32]. Conversely, an accurate reconstruction (yielding similar downstream next-token loss) is some evidence that the combination of newly learned subcomputations faithfully emulates the base model.

2. Practical adoption: sparse layers that closely preserve base models' performance are capable of replacing the existing MLPs, directly integrating specialized computation into the native forward pass. Otherwise, downstream use of the sparse layers' features must run on top of the base models' computation. This introduces additional inference-time cost to every forward pass, and restricts any analysis to post-hoc settings.

In this paper, we advocate for moving from neuron-level to layer-level sparsity (as illustrated in Figure 1) to address this. We propose the Mixture of Decoders (MxD) layer to overcome the sparsity-accuracy trade-off through scalable, resource-efficient conditional computation. Rather than individual vectors, MxDs learn interpretable sublayers as atomic units of specialization. This faithfully mirrors the functional form of the dense layer we wish to approximate, and allows MxDs to readily generalize to modern MLP variants (i.e., the Gated Linear Unit [33]). At a technical level, MxDs are constructed via a flexible tensor factorization [34] with the Hadamard product [35]. Through their parameter efficiency, MxDs scale the number of specialized layers far beyond what is feasible with classic sparse mixture of experts (MoEs) [36], and
recover prior adapter-based MoEs [37, 38] as a special case. Crucially, we prove that the proposed tensor factorization in MxDs leads to each 'expert' sublayer implementing a linear transformation with full-rank weights, allowing faithful reconstruction even under heavy sparsity. Empirically, we demonstrate that MxDs significantly outperform alternative sparse MLP layers such as Transcoders [27] and Skip Transcoders [26] on the sparsity-accuracy frontier. In addition to their faithfulness, MxDs remain competitive with the SOTA on interpretability metrics. Our contributions can be summarized as follows:

• We propose Mixture of Decoders, an instance of a flexible class of parameter-efficient MoE constructed through Hadamard product-factorized weight tensors.
• We prove that each specialized MxD expert's weights inherit up to the same rank as the original MLP's decoder, providing faithful approximation even in very sparse models.
• Across 108 sparse layers in 4 LLMs (with up to 3B parameters), MxDs (i) Pareto-dominate existing techniques on the sparsity-accuracy frontier yet (ii) remain competitive on 34 sparse probing and steering tasks, validating the interpretability of the learned experts.

2 Methodology

We first recall the technical details of language models' MLP layers and existing approaches to sparse approximations in Section 2.1. We then introduce the proposed MxD in Section 2.2, outlining the attractive rank properties it inherits in Section 2.3 and its factorized implementation in Section 2.4. We conclude with extensions to modern MLP layers in Section 2.5.

2.1 Preliminaries

Let x ∈ R^I be the pre-MLP latent representation of a specific token at a given layer. Omitting bias terms throughout for brevity, the GPT2-style MLP layer produces the output vector y ∈ R^O as:

    MLP(x) = D*^T z* ∈ R^O,  with z* := φ(E*^T x) ∈ R^{H*},    (1)

where E* ∈ R^{I×H*} and D* ∈ R^{H*×O} are the learnable 'encoder' and 'decoder' parameters respectively, and φ(·) is an activation function, often a GELU [39].
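To make the two model forms concrete, here is a minimal NumPy sketch (hypothetical, untrained weights) of the dense GPT2-style MLP of Equation (1) alongside the wider sparse (top-K) MLP approximation discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)
I, H_star, H, O, K = 16, 64, 512, 16, 8  # H >> H* for the sparse layer

def gelu(u):
    # tanh approximation of the GELU activation
    return 0.5 * u * (1 + np.tanh(np.sqrt(2 / np.pi) * (u + 0.044715 * u**3)))

def topk(u, k):
    # S(.): keep the k largest entries, zero out the rest
    out = np.zeros_like(u)
    idx = np.argsort(u)[-k:]
    out[idx] = u[idx]
    return out

# Eq. (1): dense GPT2-style MLP, y = D*^T z* with z* = gelu(E*^T x)
E_star = rng.normal(size=(I, H_star))
D_star = rng.normal(size=(H_star, O))
x = rng.normal(size=I)
y = D_star.T @ gelu(E_star.T @ x)

# Sparse MLP approximation: y ≈ D^T z with z = TopK(E^T x)
E = rng.normal(size=(I, H))
D = rng.normal(size=(H, O))
z = topk(E.T @ x, K)
y_sparse = D.T @ z

assert (z != 0).sum() <= K  # only K of the H hidden units are active
```

The output of the sparse layer is thus a K-sparse combination of decoder rows, which is exactly the restriction the paper argues trades off reconstruction accuracy.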
We use * to denote the weights/dimensions of the pre-trained base LLM.

Sparse approximations. One approach to learning interpretable features in MLPs is to train new, wider MLPs with sparse hidden units to reconstruct the original layer's outputs [27, 26, 30, 29], reminiscent of dictionary learning techniques [25]. In general, sparse MLPs share the model form:

    SMLP(x) = D^T z = Σ_{h=1}^H z_h d_h ∈ R^O,  with z := S(E^T x) ∈ R^H,    (2)

where S(·) is a sparsity-inducing function (such as the top-K [23] activation used in this paper). Here, the dimensionalities of sparse MLPs' learnable weights E ∈ R^{I×H}, D ∈ R^{H×O} are set such that H ≫ H*, i.e., the hidden layer is significantly larger than that of the original MLP. The original post-MLP output vectors are approximated as a K-sparse, non-negative linear combination of the rows d_h of a newly learned decoder matrix. Whilst this model form has been shown to learn interpretable, specialized features z_h in language models [27, 26], their poor reconstruction is of questionable faithfulness and limits their use as a layer replacement in practice.

2.2 Mixture of Decoders

We now detail the proposed Mixture of Decoders (MxD) layer, which overcomes the sparsity-accuracy trade-off by treating sparsely activating linear layers as the atomic unit of specialization. We approximate the original MLP with a conditional combination of N linear transformations:

    MxD(x) = Σ_{n=1}^N a_n (W_n^T z) ∈ R^O,    (3)

where a := S(G^T x) ∈ R^N are sparse 'expert coefficients' from a learnable gating matrix G ∈ R^{I×N}, and z := φ(E^T x) ∈ R^H is the dense output from an encoder. Here, W ∈ R^{N×H×O} is a third-order
tensor of parameters collating all N experts' decoder weights W(n,:,:) = W_n ∈ R^{H×O}. In MxDs, we use a large N to scale the feature specialization, and set H := H* to match the original MLP's smaller hidden dimension. With the gate routing each token to just its top-K experts, each W_n ∈ R^{H×O} receives a gradient signal from only a specific set of semantically similar tokens. This implicit clustering naturally leads experts to specialize in feature-specific subcomputations, while collectively covering the layer's full functionality. MxDs in Equation (3) also directly inherit the MLP layers' original functional form, avoiding the need to impose sparsity and non-negativity constraints on the hidden units z ∈ R^H.

However, MxD decoders naively require a prohibitive NHO parameters, preventing N from scaling to tens of thousands of specialized components. To achieve parameter efficiency whilst retaining layer capacity for faithful layer approximation, we parameterize MxDs' third-order weight tensor W ∈ R^{N×H×O} specifically to yield full-rank expert weights, defined elementwise as:

    W(n, h, :) = c_n * d_h ∈ R^O,  for all n ∈ {1, ..., N}, h ∈ {1, ..., H},    (4)

where * is the Hadamard product [34, 35], and c_n, d_h ∈ R^O are the rows of the learnable weights C ∈ R^{N×O}, D ∈ R^{H×O}. Intuitively, D implements a base transformation modulated by the N specialized units in C. Additional technical motivation for this parameterization with tensor methods can be found in Appendix A.3. This brings MxDs' parameter count down significantly, from NHO in Equation (3) with N full decoders to O·(N + H). One can then vary N to parameter-match sparse MLP layers. We next detail how this design (i) retains expressivity in each unit for faithful layer approximation under sparsity in Section 2.3 and (ii) yields a simple forward pass in Section 2.4.

2.3 MxDs are rank-preserving

In the original LLM, the linear transformation from the hidden units to the output is constrained by the rank of the original MLP's decoder matrix D* ∈ R^{H*×O}.
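The rank-preservation property established next is easy to verify numerically: the Hadamard parameterization of Equation (4) materializes expert n's weights as W_n = D diag(c_n), whose rank matches that of D whenever c_n has no zero entries. A short NumPy check, with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
H, O, N = 64, 512, 8  # hypothetical dimensions: H < O, so rank(D) <= H

D = rng.normal(size=(H, O))  # shared 'base' decoder
C = rng.normal(size=(N, O))  # per-expert modulation vectors (rows c_n)

for n in range(N):
    # Eq. (4) row-wise: W(n, h, :) = c_n * d_h  <=>  W_n = D @ diag(c_n)
    W_n = D * C[n]  # row-wise broadcasting equals D @ np.diag(C[n])
    assert np.allclose(W_n, D @ np.diag(C[n]))
    # Rank is preserved when diag(c_n) is invertible (no zeros in c_n)
    assert np.linalg.matrix_rank(W_n) == np.linalg.matrix_rank(D)
```

With random Gaussian weights, D (and hence every W_n) is full rank H, in contrast to the rank-1 expert weights of some alternative parameter-efficient MoEs discussed later.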
Under only mild technical conditions, every expert's weight matrix in MxDs inherits the rank of D ∈ R^{H×O}, thus allowing it to match that of the original MLP's decoder, despite its parameter efficiency:

Lemma 1 (Decoder rank preservation). We can materialize linear expert n's weight matrix as W(n,:,:) = W_n = D diag(c_n) ∈ R^{H×O}. Assuming diag(c_n) ∈ R^{O×O} is a diagonal matrix with no zeros along its diagonal (and thus invertible), we then have rank(W_n) = rank(D diag(c_n)) = rank(D).

The proof is found in Appendix A.1; it first derives the matrix-valued expression for each expert from Equation (4) and then applies a standard rank equality. At a sparsity level of K, each MxD output vector is a weighted sum of K-many linear transformations (each with potentially full-rank weights) of the dense hidden units z. As a result, MxDs retain layer capacity even under high sparsity. Sparse MLPs' hidden units have only K non-zero elements in contrast: each output in Equation (2) is therefore confined to a K-dimensional subspace of R^O, potentially limiting the capacity of sparse MLPs to faithfully approximate the original mapping in the small-K regime desirable for interpretability (mirroring speculations by [26]). Further, whilst alternative soft linear MoEs achieve scalability through low-rankness [40], Lemma 1 states that no such rank constraints are present in MxDs. For approximating existing MLP layers where low-rank assumptions may
not hold, MxDs are consequently a more suitable class of conditional layer.

2.4 Factorized forward pass

MxDs compute a linear combination of N linear transformations of the dense vector. With the proposed Hadamard-factorized weights, this yields a simple implementation.

Figure 2: Mixture of Decoders extends the base MLP/GLU layers with a conditional 'expert' branch, modulating the MLP's outputs.

Lemma 2 (Hadamard-factorized MoE forward pass). Let z ∈ R^H and a ∈ R^N denote the MLP hidden units and expert coefficients respectively. Further, denote the decoder matrices as C ∈ R^{N×O}, D ∈ R^{H×O}, parameterizing W ∈ R^{N×H×O}. MxD's forward pass can be re-written as:

    MxD(x) = Σ_{n=1}^N a_n (W_n^T z) = (C^T a) * (D^T z).    (5)

The proof is found in Appendix A.2. We include a notebook at https://github.com/james-oldfield/MxD/blob/main/form-equivalence.ipynb showing the equivalence in PyTorch. Further, please see Appendix A.5 for a discussion of how the Hadamard factorization relates to prior parameter-efficient MoEs with element-wise scaling [37].

Table 1: Model formulations of related work: x ∈ R^I, y ∈ R^O are the pre- and post-MLP representations respectively, z the hidden units, and a the vector of 'expert coefficients' for MxD. Model-specific encoders/decoders E, D map between the hidden units and output.

                   MLPs [3]         SAEs [22]            Transcoders [27]     Skip Transcoders [26]   MxDs (Ours)
Model form         y = D*^T z*      y ≈ D^T z            y ≈ D^T z            y ≈ D^T z + S^T x       y ≈ Σ_n a_n (W_n^T z)
Sparse component   None             z = S(E^T y) ∈ R^H   z = S(E^T x) ∈ R^H   z = S(E^T x) ∈ R^H      a = S(G^T x) ∈ R^N

2.5 Extending MxDs to GLUs

In contrast to methods imposing neuron-level sparsity [22, 27, 26], MxDs do not make assumptions about the base layer's encoder architecture or activation function. As a result, MxDs readily generalize to alternative architectures such as the Gated Linear Units (GLUs) [33] used in recent LLMs [1, 2]. Recall that GLUs' hidden units are computed as z_GLU = ψ(E_GLU^T x) * (E^T x) ∈ R^H, with additional GLU parameters E_GLU ∈ R^{I×H} and GLU activation function ψ (e.g., Swish [1]).
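Both the GLU hidden units and the factorized identity of Equation (5) can be checked numerically. Below is a minimal NumPy sketch (hypothetical sizes and random, untrained weights) confirming that the factorized forward pass (C^T a) * (D^T z) equals the explicit mixture over experts with W_n = D diag(c_n):

```python
import numpy as np

rng = np.random.default_rng(0)
I, H, O, N = 24, 32, 48, 10  # hypothetical sizes

def swish(u):
    # ψ: Swish/SiLU activation
    return u / (1 + np.exp(-u))

x = rng.normal(size=I)
E, E_glu = rng.normal(size=(I, H)), rng.normal(size=(I, H))
C, D = rng.normal(size=(N, O)), rng.normal(size=(H, O))
a = rng.normal(size=N)  # expert coefficients S(G^T x); dense here for simplicity

# GLU hidden units: z = ψ(E_GLU^T x) * (E^T x)
z = swish(E_glu.T @ x) * (E.T @ x)

# Eq. (5): the explicit mixture of experts ...
explicit = sum(a[n] * (D @ np.diag(C[n])).T @ z for n in range(N))
# ... equals the factorized forward pass
factorized = (C.T @ a) * (D.T @ z)
assert np.allclose(explicit, factorized)
```

The identity follows because W_n^T z = diag(c_n) D^T z = c_n * (D^T z), so the sum over experts factors out the shared term D^T z.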
By substituting in the GLU hidden representations, MxDs straightforwardly extend the GLU model form too:

    MxD_GLU(x) = Σ_{n=1}^N a_n W_n^T (ψ(E_GLU^T x) * (E^T x))   [GLU hidden units]
               = (C^T a) * D^T (ψ(E_GLU^T x) * (E^T x)),

where a := S(G^T x) ∈ R^N are the expert units, and W_n = D diag(c_n) ∈ R^{H×O} as before. For a technical discussion of GLUs and their relationship to MxDs, we refer readers to Appendix A.4; through the theoretical results developed in this paper, we show that GLU encoders themselves can be viewed as a mixture of rank-1 linear experts (in contrast to the rank-preserving MxDs).

3 Experiments

The experimental section in the main paper is split into two parts. Section 3.1 first demonstrates how MxDs perform significantly better on the accuracy-sparsity frontier as sparse MLP layer approximations on 4 LLMs. We then demonstrate in Section 3.2 that MxDs' features retain the same levels of specialization through sparse probing and steering evaluations. Thorough ablation studies, experiments with matrix rank, and comparisons to low-rank MoEs are presented in Appendix B.

3.1 Sparse approximations of MLPs in LLMs

In this section, we perform experiments approximating LLMs' existing feed-forward layers with sparse MLPs, establishing that MxDs better navigate the sparsity-accuracy frontier, more faithfully approximating the base models' MLPs than the SOTA baseline methods.

Implementation details. We train on 4 base models: GPT2-124M [3], Pythia-410m, Pythia-1.4b [41], and Llama-3.2-3B [1], with up to
80k experts/features. We train all sparse layers on a total of 480M tokens of OpenWebText [42], with learning rate 1e−4 and a context length of 128, initializing the output bias as the empirical mean of the training tokens, and D in MxDs as the zero matrix (following [26]). We vary N in MxD layers to parameter-match Transcoders in all experiments, with parameter counts and dimensions shown in Table 2. For Llama3.2-3B, we use the Swish-GLU variant of MxD, and GELU-MLP MxDs for the other three models, matching the architectures of their base encoders. Through ablation studies in Appendix B.6, we show that MxDs using the GELU/GLU variants are much more accurate layer approximators than the ReLU variants. Full experimental details are included in Appendix D. Whilst we do not have the computational resources to run similarly thorough experiments on even larger LLMs, we expect MxDs to scale just as well to models with tens of billions of parameters or more.

Objective function. Given the frozen weights of the MLP, we train sparse layers to minimize the normalized reconstruction loss between the sparse layer's output and that of the original MLP layer, with objectives of the form

    L = E_x[ ||MLP(x) − f(x)||²₂ / ||MLP(x)||²₂ ],

where f(·) denotes the various learnable sparse MLP layers.

Table 2: Sparse layer parameters/dimensions. H denotes the size of the layers' hidden units and N is the expert count. MxDs perform almost as many linear transformations as the baselines have features.
                        GPT2-124M               Pythia-410M             Pythia-1.4B             Llama-3.2-3B
Model                   Params   H       N      Params   H       N      Params   H       N      Params  H       N
Transcoders [27]        37.7M   24,576   —      67.1M   32,768   —      268.5M  65,536   —      604M   98,304   —
Skip Transcoders [26]   38.4M   24,576   —      68.2M   32,768   —      272.7M  65,536   —      614M   98,304   —
MxDs                    37.7M    3,072  21,490  67.1M    4,096  28,658  268.4M   8,192  57,330  604M    8,202  86,015

Figure 3: Model cross-entropy loss preserved when replacing MLPs with Transcoders [27], Skip Transcoders [26], and MxDs, as a function of the number of active units K (hidden neurons/experts), for GPT2-124M (Layer 8), Pythia-410M (Layer 15), Pythia-1.4B (Layer 12), and Llama3.2-3B (Layer 12). We highlight that MxDs have consistently lower loss at all levels of sparsity.

To compare with recent work [26], we adopt the TopK activation function [23] for the sparsity-inducing function S(·), removing the need for an additional sparsity penalty.

3.1.1 Results: sparsity vs faithfulness

We train an exhaustive set of 60 sparse MLP approximations across 4 diverse LLMs with up to 3B parameters. We show in Figure 3 the resulting downstream base-model cross-entropy loss when using the trained sparse layers in place of the original MLPs. As can be seen, not
only do the proposed MxD layers notably outperform Transcoders [27], but model performance is similarly preserved at all sparsity levels in MxD layers. With prior work finding sparse solutions to be more interpretable [17, 19], the performance gap of MxDs at small K is a significant advantage. Please also see Figure 10 for results with normalized MSE, where MxDs' reconstruction errors are up to an order of magnitude smaller. Full results on additional layers are included in Appendix B.3 for 48 more trained sparse layers. Please also see Appendix B.1 for qualitative and quantitative results on how faithfully the sparse layers propagate to the LLMs' output space of natural language.

Figure 4: Highest F1 score probing for 'news category' [48] on individual features/experts, for GPT2 and Pythia-410m (news types: World, Sports, Business, Tech; models: MxD, TC, Skip-TC, TopK-SAE). As expected, the MxDs remain competitive with the Transcoder baselines, outperforming TopK-SAEs.

The recent 'Skip Transcoders' (STCs) [26] introduce an additional IO parameters via a skip connection S ∈ R^{I×O} mapping the input directly to the output with y ≈ D^T z + S^T x. STC layers thus have considerably more parameters (e.g., STCs on Llama3.2-3B have 10M more parameters than MxDs). Despite the smaller parameter counts, we find MxDs consistently outperform STCs on the sparsity-accuracy frontier, attesting to the benefits of MxDs' model form.

3.2 Feature evaluations

The accurate reconstruction of MxD models in Section 3.1 provides some evidence that MxDs faithfully emulate the original MLP layers' functional mapping. However, for interpretability, we care equally about the extent to which the learned features correspond to specialized, human-interpretable concepts.
We confirm quantitatively that MxDs' features compete with the baselines in two ways: by probing for known concepts in Section 3.2.1 and by steering the model using the learned features in Section 3.2.2. For all experiments in this section, we use the K = 32 models.

Shared experts and specialization. Interestingly, we find MxDs naturally learn a 'shared' expert performing a common base transformation; the remaining K − 1 active experts are thus free to dedicate their capacity to modelling features unique to individual tokens. This emergent shared/private processing complements recent trends to use shared experts by design in MoEs [43, 44, 45, 46, 47], with [43] arguing that this facilitates greater specialization. Furthermore, one may view the skip connection in STCs [26] as performing an analogous role to the shared expert. With MxDs, however, all units have the same high capacity to accurately learn separate subcomputations, regardless of the frequency or rarity of features. We also observe that our trained MxDs exhibit very few 'dead' experts, as shown in Appendix C.1, with many experts contributing actively. Furthermore, initial ablations in Appendix C.2 show that one can train MxDs without shared experts if desired, at a small performance cost. Please see qualitative results of activated tokens for particular experts in Appendix E.

3.2.1 Sparse probing with individual features/experts

One challenge is that the sparse layers learn features in an unsupervised manner. As pointed out in [23], we therefore do not know which high-level
features we ought to expect the model to learn (or even whether they exist in the OpenWebText training data). Nonetheless, we can reasonably expect a useful unsupervised model to learn at least a handful of commonly occurring concepts and linguistic themes. We accordingly focus our evaluation on the relative abilities of the sparse models to learn features that well-predict a variety of binary concepts used in the literature.

Concretely, to quantify the extent to which sparse-layer features reliably fire in response to common high-level, interpretable concepts of natural language, we adopt the experimental settings of [49, 23, 19], training binary probes on the individual units of specialization (sparse hidden units z_n for TCs/SAEs and expert units a_n for MxDs, all pre-activation). For probing of sample-level concepts, we mean-pool activations across all non-padding tokens [19]. We train separate probes on the 100 features with the largest mean difference between positive and negative activations, as per [49].

Figure 5: Mean score along dimensions of 'textual coherence' and 'steerability' of text generated by steering with the first 100 features of the sparse layers. Each sample is scored by 2 LLM judges.

We perform experiments on all 24 binary probing tasks in the SAEBench suite [19], four of which are shown in Figure 4, plotting the best F1 score (on a held-out set) for news-topic classification in a 1-vs-all setting [48]. As can be seen, there exist individual MxD expert units that are predictive of various categories of news articles, competitive with the baselines. We refer readers to Appendix B.5 for additional experiments on 20 more sample-level probing tasks, 10 token-level probing tasks, and experimental details.

3.2.2 Feature steering

Specific features might reliably fire in response to interpretable patterns of the input, yet not contribute to the generation process. Here, we aim to test this functional role of features by steering the LLMs.
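The two steering mechanisms compared in this section can be written in a few lines. This is a hedged sketch with hypothetical tensors; in practice the edit is applied via a forward hook inside the LLM at the relevant layer:

```python
import numpy as np

rng = np.random.default_rng(0)
H, O, N = 32, 48, 10  # hypothetical sizes
lam = 100.0           # steering strength λ

y = rng.normal(size=O)       # original post-MLP token representation
z = rng.normal(size=H)       # dense MxD hidden units for this token
D = rng.normal(size=(H, O))  # TC decoder (rows d_n) / MxD base decoder
C = rng.normal(size=(N, O))  # MxD expert modulation vectors
n = 3                        # index of the feature/expert to amplify

# Transcoder steering: add the target decoder row, y_hat = y + λ d_n
y_tc = y + lam * D[n]

# MxD steering: add the expert's contribution, y_hat = y + λ W_n^T z,
# computed cheaply via W_n^T z = c_n * (D^T z)
y_mxd = y + lam * (C[n] * (D.T @ z))
```

In both cases the edit simply pushes the token representation in the direction of the learned specialization, scaled by λ.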
We note that these experiments do not aim to establish TCs/MxDs as competitive with the SOTA for controllable LLM generation. Rather, we aim to validate that the learned features contribute mechanistically to the LLM's forward pass in a predictable way.

Mechanisms for steering Let λ ∈ R be a hyperparameter controlling the desired ‘strength’ of the model edit. For TCs, we hook the forward pass at the relevant layer to increase the presence of target feature n with ŷ = y + λd_n. In contrast, MxDs can be steered with ŷ = y + λ·(W_n^⊤ z). Intuitively, increasing the weight of an expert's contribution in the forward pass modulates the token representation in the direction of the learned specialization.

Results We perform steering with the first 100 neurons/experts individually, using λ := 100 for all experiments. We generate a collection of 10 synthetic outputs for each neuron, each string consisting of 32 generated tokens to the prompt "Let's talk about ". We then ask two LLMs to rate the collection of text along two dimensions separately: (1) the extent to which a shared concept, theme, or linguistic pattern is present throughout the generated collection of text, and (2) the grammatical fluency of the text (please see Appendix D.1 for the full prompt). As can be seen from the mean scores over the 100 neurons
shown in Figure 5, MxDs are competitive with the baselines, exhibiting a similar trade-off between textual coherence and presence of concept as we expect. (We use gemini-2.0-flash and llama-4-scout-17b-16e-instruct as the two independent LLM judges.)

4 Related work

Sparse decompositions Learning sparse [50, 25], non-negative [51] features of a data signal has found many applications in computer vision [15, 52, 53, 54] and natural language processing [55, 56, 57], motivated by the pursuit of interpretable, parts-based representations [13, 14]. In transformer-based language models [3], similar variants have been proposed for post-hoc analysis; sparse autoencoders (SAEs) are a popular method that rewrites latent features as non-negative combinations of atoms in a learned overcomplete dictionary, imposing either soft sparsity penalties [6, 22, 31] or thresholding activations directly [23, 58, 59]. Recent work aims to sparsify the existing layers of pretrained LLMs, learning new MLPs with sparse hidden units [29] for circuit analysis [27] or more interpretable yet faithful computation [26, 60]. Despite the surge of interest in SAEs, many works are emerging drawing attention to their limitations: underperforming baselines for probing [61], unlearning [62], and steering [63], in addition to other pathologies [64, 32, 65, 66].

Conditional computation One natural alternative to static fully connected layers is conditional computation [67, 68]. Tracing back to the early work of [69, 70], single dense layers are replaced with specialized subunits, conditional on the input, as a form of layer-level sparsity. The Mixture of Experts (MoE) architecture [36, 71, 72] is a prominent example of conditional computation, breaking the link between parameter count and FLOPs. Consequently, MoEs have seen rapid adoption in SOTA models in recent years, scaling to very large parameter counts [73, 74, 75, 76, 77].
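Returning briefly to the steering mechanisms of Section 3.2.2, they can be sketched numerically as follows. This is a minimal illustration with hypothetical dimensions and random weights, not the authors' implementation; the factorization W_n = D diag(c_n) used for the MxD hook is the one proved in Lemma 1 (Appendix A.1).

```python
# Minimal numeric sketch of the two steering hooks (hypothetical dimensions,
# random weights): TC steering adds lambda * d_n; MxD steering adds
# lambda * W_n^T z, amplifying expert n's contribution to the output.
import numpy as np

rng = np.random.default_rng(0)
H, O, N = 64, 32, 8   # hidden dim, output dim, number of experts
lam = 100.0           # steering strength lambda

y = rng.normal(size=O)  # original layer output
z = rng.normal(size=H)  # layer input

# TC steering: push the output along the n-th decoder direction d_n.
d_n = rng.normal(size=O)
y_tc = y + lam * d_n

# MxD steering: amplify expert n's contribution W_n^T z in the forward pass,
# with W_n = D @ diag(c_n) as in Lemma 1.
C = rng.normal(size=(N, O))
D = rng.normal(size=(H, O))
n = 0
W_n = D @ np.diag(C[n])
y_mxd = y + lam * (W_n.T @ z)

# The edit is exactly lambda times expert n's output contribution.
assert np.allclose(y_mxd - y, lam * (np.diag(C[n]) @ D.T @ z))
```

In practice the hook would be registered on the relevant layer of the LLM; here the vectors are random placeholders.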
For parameter-efficient instruction tuning, [37] introduces conditional (IA)^3 adapters [38], modulating the MLP hidden dimension with the Hadamard product. Our proposed formulation with factorized weight tensors yields ‘MoVs’ [37] as a less scalable special case (see Appendix A.5). In contrast, MxDs model the decoder output space directly for reconstruction, and also provide significantly more specialized units than [37], making MxDs more suitable for our goal of interpretability. Whilst the primary focus of MoEs has been on their impressive capabilities, the literature has observed that individual experts often specialize in particular semantic patterns of the input data, despite not being trained to do so [78, 79, 43, 80, 81]. For example, many works find that data that are in some sense similar are routed to the same experts, specializing to object shapes [82], texture [83], image category [84], or semantic patterns in natural language [36]. In the context of large language models, this emergent property of specialization in MoEs has been a primary focus of recent work: from encouraging monosemantic experts [85] or sparsity amongst experts' weights [86] to efficiently scaling the expert count for fine-grained specialization [40]. In contrast to these works exploring pre-training, we explore an efficient design of MoE to replace existing LLMs' dense layers.

5 Conclusion

In this paper, we showed the benefits of decomposing dense layers' computations as a mixture of interpretable sublayers. We proposed the Mixture of Decoders (MxD) layer to
achieve this at scale, proving that MxDs' linear experts preserve the matrix rank properties of the original decoders. Experimentally, we showed MxDs significantly outperform on the sparsity-accuracy frontier when trained to replace dense MLP layers. Quantitative results on sparse probing and feature steering demonstrated that MxDs nonetheless learn specialized latent features, similarly to existing interpretability techniques. Crucially, MxDs reexamine the dominant neuron-level sparsity paradigm of popular techniques, providing evidence that specialization doesn't have to come at such a high cost to model performance. We believe MxDs (and specialization at the layer level more generally) are an important step towards sparsity without sacrifice. We hope future work continues to build interpretable mechanisms that better preserve model capabilities.

Limitations

Our experiments show MxDs outperform on the sparsity-accuracy frontier on 4 diverse LLMs. Whilst we fully anticipate this trend to continue in even larger models, our experiments only provide direct evidence for LLMs with up to 3B parameters, given our limited resources. Furthermore, whilst the TopK activation can greatly reduce the decoders' FLOPs, the large encoders in sparse MLPs and the gating function in MxDs remain an additional inference-time cost. Future work could explore hierarchical structures [85, 36] and/or efficient retrieval [87] for further reductions in FLOPs. Secondly, MoEs are prone to issues of expert imbalance [71] or collapse [88]. Just as a low learning rate helps prevent dead SAE features [89], we too find that a low learning rate avoids dead experts (see Appendix C.1 exploring expert balance and Section 3.2.2 for functional diversity). Thus, similar care needs to be taken with MxDs' learning rate to ensure accurate yet non-degenerate reconstructions.

Acknowledgments

JO is grateful to Demian Till for reviewing the draft and providing valuable feedback and suggestions.
JO would also like to thank Markos Georgopoulos, Benjamin Hayum, and Wisconsin AI Safety Initiative's Safety Scholars for insightful discussions throughout the project. We are also grateful to the open-source Zulip platform for facilitating research discussion.

References

[1] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[2] Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
[3] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
[4] Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, and Noah Goodman. Interpretability at scale: Identifying causal mechanisms in alpaca. In Adv. Neural Inform. Process. Syst. (NeurIPS), volume 36, pages 78205–78226. Curran Associates, Inc., 2023.
[5] Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah Goodman. Finding alignments between interpretable causal variables and distributed neural representations. In Proceedings of the Third Conference on Causal Learning and Reasoning, volume 236 of Proceedings of Machine Learning Research,
pages 160–187. PMLR, 01–03 Apr 2024.
[6] Adly Templeton. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Anthropic, 2024.
[7] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. Refusal in language models is mediated by a single direction. arXiv preprint arXiv:2406.11717, 2024.
[8] Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. In Int. Conf. Learn. Represent. (ICLR), 2023.
[9] David E. Rumelhart and James L. McClelland. A General Framework for Parallel Distributed Processing, pages 45–76. 1987.
[10] Geoffrey E Hinton. Distributed representations. 1984.
[11] Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.
[12] Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition, 2022.
[13] Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
[14] Bruno A Olshausen and David J Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[15] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):1–37, 2011.
[16] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), pages 4690–4699, 2019.
[17] Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, and Olga Russakovsky.
Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability. IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), pages 10932–10941, 2022.
[18] Caden Juang, Gonçalo Paulo, Jacob Drori, and Nora Belrose. Open source automated interpretability for sparse autoencoder features. https://blog.eleuther.ai/autointerp/, July 2024. EleutherAI Blog.
[19] Adam Karvonen, Can Rager, Johnny Lin, Curt Tigges, Joseph Bloom, David Chanin, Callum McDougall, Yeu-Tong Lau, Eoin Farrell, Arthur Conmy, Kola Ayonrinde, Demian Till, Matthew Wearden, Samuel Marks, and Neel Nanda. SAEBench: A comprehensive benchmark for sparse autoencoders in language model interpretability. In Int. Conf. Mach. Learn. (ICML), 2025.
[20] Zachary Chase Lipton. The mythos of model interpretability. Communications of the ACM, 61:36–43, 2016.
[21] Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna M. Wallach. Manipulating and measuring model interpretability. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2018.
[22] Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In Int. Conf. Learn. Represent. (ICLR), 2023.
[23] Leo Gao, Tom Dupre la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. Scaling and evaluating sparse autoencoders. In Int. Conf. Learn. Represent. (ICLR), 2025.
[24] Bruno A Olshausen and David J Field. Sparse coding
with an overcomplete basis set: A strategy employed by v1? Vision Research, 37(23):3311–3325, 1997.
[25] M. Aharon, M. Elad, and A. Bruckstein. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006. doi: 10.1109/TSP.2006.881199.
[26] Gonçalo Paulo, Stepan Shabalin, and Nora Belrose. Transcoders beat sparse autoencoders for interpretability. arXiv preprint arXiv:2501.18823, 2025.
[27] Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. Transcoders find interpretable LLM feature circuits. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2024.
[28] Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Bloom, et al. Open problems in mechanistic interpretability. arXiv preprint arXiv:2501.16496, 2025.
[29] Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html.
[30] Samuel Marks, Adam Karvonen, and Aaron Mueller. dictionary_learning. https://github.com/saprmarks/dictionary_learning, 2024.
[31] Senthooran Rajamanoharan, Tom Lieberum, Nicolas Sonnerat, Arthur Conmy, Vikrant Varma, János Kramár, and Neel Nanda. Jumping ahead: Improving reconstruction fidelity with jumprelu sparse autoencoders. arXiv preprint arXiv:2407.14435, 2024.
[32] Joshua Engels, Logan Riggs, and Max Tegmark. Decomposing the dark matter of sparse autoencoders. arXiv preprint arXiv:2410.14670, 2024.
[33] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.
[34] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[35] Grigorios G Chrysos, Yongtao Wu, Razvan Pascanu, Philip Torr, and Volkan Cevher. Hadamard product in deep learning: Introduction, advances and challenges. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–20, 2025. doi: 10.1109/TPAMI.2025.3560423.
[36] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In Int. Conf. Learn. Represent. (ICLR), 2017.
[37] Ted Zadouri, Ahmet Üstün, Arash Ahmadian, Beyza Ermis, Acyr Locatelli, and Sara Hooker. Pushing mixture of experts to the limit: Extremely parameter efficient moe for instruction tuning. In Int. Conf. Learn. Represent. (ICLR), 2024.
[38] Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2022.
[39] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
[40] James Oldfield, Markos Georgopoulos, Grigorios Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, Jiankang Deng, and Ioannis Patras. Multilinear mixture of experts: Scalable expert specialization through factorization. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2024.
[41] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al.
Pythia: A suite for analyzing large language models across training and scaling. In Int. Conf. Mach. Learn. (ICML), pages 2397–2430. PMLR, 2023.
[42] Aaron Gokaslan, Vanya Cohen, Ellie Pavlick, and Stefanie Tellex. Openwebtext corpus. http://Skylion007.github.io/OpenWebTextCorpus, 2019.
[43] Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models, 2024.
[44] Meta AI. The llama 4 herd: The beginning of a new era of natively multimodal ai innovation, 2025. URL https://ai.meta.com/blog/llama-4-multimodal-intelligence/. Accessed: 2025-04-06.
[45] Qwen Team. Qwen1.5-moe: Matching 7b model performance with 1/3 activated parameters, February 2024. URL https://qwenlm.github.io/blog/qwen-moe/.
[46] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. Qwen2 technical report, 2024.
[47] An Yang, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoyan Huang, Jiandong Jiang, Jianhong Tu, Jianwei Zhang, Jingren Zhou, Junyang Lin, Kai Dang, Kexin Yang, Le Yu, Mei Li, Minmin Sun, Qin Zhu, Rui Men, Tao He, Weijia Xu, Wenbiao Yin, Wenyuan Yu, Xiafei Qiu, Xingzhang Ren, Xinlong Yang, Yong Li, Zhiying Xu, and Zipeng Zhang. Qwen2.5-1m technical report, 2025.
[48] Antonio Gulli. Ag corpus of news articles. http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html, 2005.
[49] Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
[50] Rodolphe Jenatton, Guillaume Obozinski, and Francis Bach. Structured sparse principal component analysis. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 366–373, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
[51] Patrik O Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5(Nov):1457–1469, 2004.
[52] Edo Collins, Radhakrishna Achanta, and Sabine Süsstrunk. Deep Feature Factorization for Concept Discovery, pages 352–368. Springer International Publishing, 2018. ISBN 9783030012649. doi: 10.1007/978-3-030-01264-9_21.
[53] James Oldfield, Christos Tzelepis, Yannis Panagakis, Mihalis Nicolaou, and Ioannis Patras. Panda: Unsupervised learning of parts and appearances in the feature maps of GANs. In Int. Conf. Learn. Represent. (ICLR), 2023.
[54] Yue Song, Thomas Anderson Keller, Yisong Yue, Pietro Perona, and Max Welling. Unsupervised
representation learning from sparse transformation analysis, 2024.
[55] Wei Xu, Xin Liu, and Yihong Gong. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '03, pages 267–273, New York, NY, USA, 2003. Association for Computing Machinery. ISBN 1581136463. doi: 10.1145/860435.860485.
[56] Da Kuang, Jaegul Choo, and Haesun Park. Nonnegative matrix factorization for interactive topic modeling and document clustering. Partitional Clustering Algorithms, pages 215–243, 2015.
[57] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483–495, 2018. doi: 10.1162/tacl_a_00034.
[58] Alireza Makhzani and Brendan Frey. K-sparse autoencoders. arXiv preprint arXiv:1312.5663, 2013.
[59] Bart Bussmann, Patrick Leask, and Neel Nanda. Batchtopk sparse autoencoders. In NeurIPS 2024 Workshop on Scientific Methods for Understanding Deep Learning, 2024.
[60] Lucy Farnik, Tim Lawson, Conor Houghton, and Laurence Aitchison. Jacobian sparse autoencoders: Sparsify computations, not just activations, 2025.
[61] Subhash Kantamneni, Joshua Engels, Senthooran Rajamanoharan, Max Tegmark, and Neel Nanda. Are sparse autoencoders useful? a case study in sparse probing, 2025.
[62] Eoin Farrell, Yeu-Tong Lau, and Arthur Conmy. Applying sparse autoencoders to unlearn knowledge in language models, 2024.
[63] Zhengxuan Wu, Aryaman Arora, Atticus Geiger, Zheng Wang, Jing Huang, Dan Jurafsky, Christopher D. Manning, and Christopher Potts. AxBench: Steering LLMs? even simple baselines outperform sparse autoencoders. In Int. Conf. Mach. Learn. (ICML), 2025.
[64] David Chanin, James Wilken-Smith, Tomáš Dulka, Hardik Bhatnagar, and Joseph Bloom.
A is for absorption: Studying feature splitting and absorption in sparse autoencoders, 2024.
[65] Patrick Leask, Bart Bussmann, Michael Pearce, Joseph Bloom, Curt Tigges, Noura Al Moubayed, Lee Sharkey, and Neel Nanda. Sparse autoencoders do not find canonical units of analysis, 2025.
[66] Gonçalo Paulo and Nora Belrose. Sparse autoencoders trained on the same data learn different features, 2025.
[67] Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), 44(11):7436–7456, 2021.
[68] Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. In Int. Conf. Mach. Learn. Worksh. (ICMLW), 2015.
[69] Robert A Jacobs, Michael I Jordan, and Andrew G Barto. Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science, 15(2):219–250, 1991.
[70] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.
[71] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39, 2022.
[72] Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. St-moe: Designing stable and transferable sparse expert models. arXiv preprint arXiv:2202.08906, 2022.
[73] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun,
Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In Int. Conf. Learn. Represent. (ICLR), 2021.
[74] Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. Glam: Efficient scaling of language models with mixture-of-experts. In Int. Conf. Mach. Learn. (ICML), pages 5547–5569. PMLR, 2022.
[75] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.
[76] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S.
Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, Wangding Zeng, Wanjia Zhao, Wei An, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, X. Q. Li, Xiangyue Jin, Xianzu Wang, Xiao Bi, Xiaodong Liu, Xiaohan Wang, Xiaojin Shen, Xiaokang Chen, Xiaokang Zhang, Xiaosha Chen, Xiaotao Nie, Xiaowen Sun, Xiaoxiang Wang, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xingkai Yu, Xinnan Song, Xinxia Shan, Xinyi Zhou, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, Y. K. Li, Y. Q. Wang, Y. X. Wei, Y. X. Zhu, Yang Zhang, Yanhong Xu, Yanhong Xu, Yanping Huang, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Li, Yaohui Wang, Yi Yu, Yi Zheng, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Ying Tang, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yu Wu, Yuan Ou, Yuchen Zhu, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yukun Zha, Yunfan Xiong, Yunxian Ma, Yuting Yan, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Z. F. Wu, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhen Huang, Zhen Zhang,
Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025.
[77] Joan Puigcerver, Carlos Riquelme, Basil Mustafa, and Neil Houlsby. From sparse to soft mixtures of experts. In Int. Conf. Learn. Represent. (ICLR), 2024.
[78] Aya Abdelsalam Ismail, Sercan O Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, and Tomas Pfister. Interpretable mixture of experts. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
[79] Marmik Chaudhari, Idhant Gulati, Nishkal Hundia, Pranav Karra, and Shivam Raval. Moe lens - an expert is all you need. In Sparsity in LLMs (SLLM): Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference, 2025.
[80] Huy Nguyen, Xing Han, Carl Harris, Suchi Saria, and Nhat Ho. On expert estimation in hierarchical mixture of experts: Beyond softmax gating functions, 2025.
[81] Stefan Nielsen, Rachel Teo, Laziz Abdullaev, and Tan Minh Nguyen. Tight clusters make specialized experts. In Int. Conf. Learn. Represent. (ICLR), 2025.
[82] Brandon Yang, Gabriel Bender, Quoc V Le, and Jiquan Ngiam. Condconv: Conditionally parameterized convolutions for efficient inference. Adv. Neural Inform. Process. Syst. (NeurIPS), 32, 2019.
[83] Basil Mustafa, Carlos Riquelme Ruiz, Joan Puigcerver, Rodolphe Jenatton, and Neil Houlsby. Multimodal contrastive learning with LIMoe: the language-image mixture of experts. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2022.
[84] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. Adv. Neural Inform. Process. Syst. (NeurIPS), 34:8583–8595, 2021.
[85] Jungwoo Park, Ahn Young Jin, Kee-Eung Kim, and Jaewoo Kang. Monet: Mixture of monosemantic experts for transformers. In Int. Conf. Learn. Represent. (ICLR), 2025.
[86] Xingyi Yang, Constantin Venhoff, Ashkan Khakzar, Christian Schroeder de Witt, Puneet K. Dokania, Adel Bibi, and Philip Torr. Mixture of experts made intrinsically interpretable, 2025.
[87] Xu Owen He. Mixture of a million experts, 2024.
[88] Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. On the representation collapse of sparse mixture of experts. In Adv. Neural Inform. Process. Syst. (NeurIPS), 2022.
[89] Arthur Conmy. My best guess at the important tricks for training 1l saes. https://www.lesswrong.com/posts/fifPCos6ddsmJYahD/my-best-guess-at-the-important-tricks-for-training-1l-saes, December 2023. LessWrong.
[90] James E. Gentle. Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer, New York, 2nd edition, 2007.
[91] Nicholas D Sidiropoulos and Rasmus Bro. On the uniqueness of multilinear decomposition of n-way arrays. Journal of Chemometrics: A Journal of the Chemometrics Society, 14(3):229–239, 2000.
[92] Donghyun Lee, Jaeyong Lee, Genghan Zhang, Mo Tiwari, and Azalia Mirhoseini. CATS: Context-aware thresholding for sparsity in large language models. In First Conference on Language Modeling, 2024.
[93] Frank Lauren Hitchcock. The expression of
a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6:164–189, 1927.
[94] J. Douglas Carroll and Jih Jie Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of "eckart-young" decomposition. Psychometrika, 35:283–319, 1970.
[95] CodeParrot. Github code dataset. https://huggingface.co/datasets/codeparrot/github-code, 2022.
[96] Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, and Julian McAuley. Bridging language and items for retrieval and recommendation. arXiv preprint arXiv:2403.03952, 2024.
[97] Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X: Papers, pages 79–86, Phuket, Thailand, September 13-15 2005.
[98] Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, pages 120–128, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450361255. doi: 10.1145/3287560.3287572. URL https://doi.org/10.1145/3287560.3287572.
[99] Joseph Bloom, Curt Tigges, Anthony Duong, and David Chanin. Saelens. https://github.com/jbloomAus/SAELens, 2024.

Appendix

Table of Contents
A Proofs and additional technical results
  A.1 Proof of rank equality
  A.2 Proof of MxD forward pass equivalence
  A.3 Intuition for weight parameterization through the lens of tensor methods
  A.4 GLU encoders are a mixture of rank-1 linear experts
  A.5 Hadamard-factorized tensors generalize MoVs
B Additional quantitative results and ablations
  B.1 Faithfulness in output space
  B.2 Additional reconstruction metrics
  B.3 Results on additional layers
  B.4 Expert rank
  B.5 Sparse probing
https://arxiv.org/abs/2505.21364v1
  B.6 Ablations
C Feature balance and shared experts
  C.1 Expert/feature balance
  C.2 Shared experts
D Detailed experimental setup
  D.1 Feature steering details
E Additional qualitative results

A Proofs and additional technical results

A.1 Proof of rank equality

Proof of Lemma 1. We first derive the expression for expert $n$'s weight matrix $W_n = D\,\mathrm{diag}(c_n) \in \mathbb{R}^{H \times O}$, and then show the rank equality that follows. First, recall that we have the third-order weight tensor defined as $W(n, h, :) = c_n \ast d_h \in \mathbb{R}^O$, for matrices $C \in \mathbb{R}^{N \times O}$ and $D \in \mathbb{R}^{H \times O}$. We can express each element of the tensor $W \in \mathbb{R}^{N \times H \times O}$ in terms of elements of the two matrices as

$$W(n, h, o) = c_{no} \cdot d_{ho} = (D)_{ho} \cdot c_{no}. \quad (6)$$

Equation (6) shows that, for a fixed expert $n$, the $n$th row $c_n \in \mathbb{R}^O$ essentially scales the columns of the matrix $D \in \mathbb{R}^{H \times O}$. This is equivalent to right-multiplying matrix $D$ by a diagonal matrix formed from $c_n \in \mathbb{R}^O$. Indeed, the $(h, o)$ entry of such a matrix product is

$$[D\,\mathrm{diag}(c_n)]_{ho} = \sum_{i=1}^{O} (D)_{hi}\,\mathrm{diag}(c_n)_{io} \quad (7)$$
$$= (D)_{ho}\,\mathrm{diag}(c_n)_{oo} \quad (8)$$
$$= d_{ho} \cdot c_{no}, \quad (9)$$

since all off-diagonal terms (i.e., $i \neq o$) in Equation (7) vanish, and $\mathrm{diag}(c_n)_{oo} = c_{no}$ by construction. Comparing Equation (6) and Equation (9) shows that, for every $h \in \{1, 2, \ldots, H\}$ and $o \in \{1, 2, \ldots, O\}$, we have $W(n, h, o) = [D\,\mathrm{diag}(c_n)]_{ho}$. Hence, indexing into the first mode of the tensor alone gives us the matrix-valued expression for expert $n$ as claimed: $W(n, :, :) = W_n = D\,\mathrm{diag}(c_n) \in \mathbb{R}^{H \times O}$. Finally, a standard result in linear algebra [90] has that $\mathrm{rank}(AB) = \mathrm{rank}(A)$ for any $A \in \mathbb{R}^{H \times O}$ and invertible matrix $B \in \mathbb{R}^{O \times O}$. Since the matrix $\mathrm{diag}(c_n) \in \mathbb{R}^{O \times O}$ is invertible by assumption in Lemma 1, setting $A = D$ and $B = \mathrm{diag}(c_n)$ yields the rank equality.
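Both constructions in the proof of Lemma 1 can be checked numerically. Below is a minimal numpy sketch; the dimensions `N`, `H`, `O` are small illustrative choices, not the paper's trained settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, O = 8, 16, 4  # illustrative sizes (in practice H = 4*O and N > O)

C = rng.normal(size=(N, O))
D = rng.normal(size=(H, O))

# Build the third-order weight tensor element-wise: W[n, h, :] = c_n * d_h
W = np.einsum('no,ho->nho', C, D)

# Indexing the first mode recovers W_n = D diag(c_n), as in the proof
n = 3
Wn = D @ np.diag(C[n])
assert np.allclose(W[n], Wn)

# Rank equality: rank(D diag(c_n)) = rank(D), since diag(c_n) is invertible
# (its entries are non-zero with probability 1 for Gaussian samples)
assert np.linalg.matrix_rank(Wn) == np.linalg.matrix_rank(D) == O
```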
A.2 Proof of MxD forward pass equivalence

Recall we have the input vector $z \in \mathbb{R}^H$, expert coefficients $a \in \mathbb{R}^N$, and layer weights $W \in \mathbb{R}^{N \times H \times O}$. The weights are defined in Equation (4) element-wise through the Hadamard product $\ast$ as

$$W(n, h, :) = c_n \ast d_h \in \mathbb{R}^O, \quad \forall n \in \{1, \ldots, N\},\ h \in \{1, \ldots, H\},$$

for learnable parameters $C \in \mathbb{R}^{N \times O}$ and $D \in \mathbb{R}^{H \times O}$. Lemma 2 states that MxD's forward pass can be equivalently expressed as

$$\sum_{n=1}^{N} a_n \left( W_n^\top z \right) = \left( C^\top a \right) \ast \left( D^\top z \right).$$

Proof of Lemma 2. The LHS can first be re-written as an explicit sum over the hidden dimension:

$$\hat{y} = \sum_{n=1}^{N} a_n \left( W_n^\top z \right) = \sum_{n=1}^{N} \sum_{h=1}^{H} a_n \left( w_{nh:}\, z_h \right) \in \mathbb{R}^O. \quad (10)$$

Plugging in the definition of $w_{nh:} \in \mathbb{R}^O$ from Equation (4) then
yields

$$\hat{y} = \sum_{n=1}^{N} \sum_{h=1}^{H} a_n \left( w_{nh:}\, z_h \right) \quad (11)$$
$$= \sum_{n=1}^{N} \sum_{h=1}^{H} a_n \left( (c_n \ast d_h)\, z_h \right) \quad (12)$$
$$= \left( \sum_{n=1}^{N} a_n c_n \right) \ast \left( \sum_{h=1}^{H} z_h d_h \right) \quad (13)$$
$$= \left( C^\top a \right) \ast \left( D^\top z \right), \quad (14)$$

which is exactly the RHS of Equation (5), showing the MxD forward pass is equivalent to the Hadamard product of $C^\top a$ and $D^\top z$.

A.3 Intuition for weight parameterization through the lens of tensor methods

A second, complementary way of viewing the MxD layer's parameterization (and its full-rank properties) is through the lens of tensor methods [34]. A tensor-based motivation for MxD's weight tensor parameterization and forward pass is presented in Appendix A.3.1 and Appendix A.3.2, respectively.

Notation and definitions. A brief primer is first included below, based on [34] (and can be safely skipped by those already familiar):

- The mode-$n$ fibers of an $N$th-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ are the $I_n$-dimensional column vectors obtained by fixing every index except that of the $n$th mode (e.g., $x_{:i_2 i_3} \in \mathbb{R}^{I_1}$ are the mode-1 fibers of a third-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$). Stacking all mode-$n$ fibers column-wise yields the so-called mode-$n$ unfolding $X_{(n)} \in \mathbb{R}^{I_n \times \bar{I}_n}$, with the number of columns given by the product of the remaining dimensions, $\bar{I}_n = \prod_{t=1, t \neq n}^{N} I_t$.

- The Khatri-Rao product (denoted by $\odot$) between two matrices $A \in \mathbb{R}^{I \times K}$ and $B \in \mathbb{R}^{J \times K}$ is the column-wise Kronecker product (denoted by $\otimes$): $A \odot B := [\, a_{:1} \otimes b_{:1} \ \cdots \ a_{:K} \otimes b_{:K} \,] \in \mathbb{R}^{(I \cdot J) \times K}$.

- The mode-$n$ (vector) product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a vector $u \in \mathbb{R}^{I_n}$ is denoted $\mathcal{X} \times_n u$ and has entries $(\mathcal{X} \times_n u)_{i_1 \ldots i_{n-1} i_{n+1} \ldots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \ldots i_N} u_{i_n}$.

A.3.1 MxD weight tensors through the Khatri-Rao product

MxDs construct the collective weight tensor through the Khatri-Rao product $\odot$ [34] of the two factor matrices $C \in \mathbb{R}^{N \times O}$ and $D \in \mathbb{R}^{H \times O}$. Concretely, the mode-3 unfolding³ of the third-order weight tensor $W \in \mathbb{R}^{N \times H \times O}$ in MxDs from Equation (4) is alternatively given by:

$$W_{(3)} := (C \odot D)^\top \in \mathbb{R}^{O \times (N \cdot H)}. \quad (15)$$

Given that the factor matrices are learned end-to-end without constraints, they are likely of full column rank, i.e., $\mathrm{rank}(D) = \mathrm{rank}(C) = O$ (as $N > O$ and $H = 4 \cdot O > O$ in practice, given the MLP layers' larger bottleneck). Consequently, their Khatri-Rao product parameterizing the collective $N$ experts' weights will be of maximum rank $O$ too, through Lemma 1 of [91]. As a result, parameterized this way, the $O$-dimensional fibers likely span the full output space.

A.3.2 Tensorized MxD forward pass

Furthermore, the layer's forward pass can then be viewed as performing two tensor contractions between the third-order weight tensor $W \in \mathbb{R}^{N \times H \times O}$ (collecting all $N$ experts' $H \times O$-dimensional matrices) and the expert coefficients $a \in \mathbb{R}^N$ and hidden activations $z \in \mathbb{R}^H$. This can be expressed in terms of the so-called mode-$n$ product (denoted by $\times_n$) [34] as follows:

$$\hat{y} = \sum_{n=1}^{N} a_n \cdot \left( W_n^\top z \right) = \sum_{n=1}^{N} a_n \sum_{h=1}^{H} w_{nh} z_h = \sum_{n=1}^{N} \sum_{h=1}^{H} a_n z_h w_{nh} = W \times_1 a \times_2 z \in \mathbb{R}^O. \quad (16)$$

A.4 GLU encoders are a mixture of rank-1 linear experts

Both the proposed MxDs and Gated Linear Units (GLUs) [33] share a similar functional form, using the element-wise product. However, there are crucially important differences between GLUs and MxDs that make both their interpretation and model capacity different. In short, the technical results in our paper show that the GLU encoder can be viewed as a linear mixture-of-experts layer with rank-1 experts. Furthermore, GLUs can be modified and extended to MxDs with two additions to their model form, as detailed at the end of this subsection.

First, recall that the GLU encoder [33] computes:

$$y_{\mathrm{GLU}} = \psi(E_{\mathrm{GLU}}^\top x) \ast \left( E^\top x \right) \in \mathbb{R}^H, \quad (17)$$

for input vector $x \in \mathbb{R}^I$,
learnable weights $E_{\mathrm{GLU}}, E \in \mathbb{R}^{I \times H}$, and activation function $\psi(\cdot)$. To transform Equation (17) into the same model form as MxDs, we first pre-multiply the LHS by the identity matrix to match the MxD model form of Equation (5), yielding:

$$y_{\mathrm{GLU}} = \left( I^\top a \right) \ast \left( E^\top x \right), \quad (18)$$

where $a = \psi(E_{\mathrm{GLU}}^\top x) \in \mathbb{R}^H$ and $I \in \mathbb{R}^{H \times H}$ is the $H$-dimensional identity matrix. Next, we can write this explicitly in terms of a linear MoE with expert weights $W_n \in \mathbb{R}^{I \times H}$ as follows:

$$y_{\mathrm{GLU}} = \left( I^\top a \right) \ast \left( E^\top x \right) \quad (19)$$
$$= \sum_{n=1}^{H} a_n \left( W_n^\top x \right) \quad (20)$$
$$= \sum_{n=1}^{H} a_n \left( \left( E\,\mathrm{diag}((I)_n) \right)^\top x \right), \quad (21)$$

where $(I)_n \in \mathbb{R}^H$ is the $n$th row of the $H$-dimensional identity matrix (i.e., a one-hot vector with its only non-zero element at index $n$). We draw particular attention to how the $n$th expert's matrix $W_n = E\,\mathrm{diag}((I)_n) \in \mathbb{R}^{I \times H}$ essentially picks out the $n$th column of $E$, leaving all remaining $H - 1$ columns as zero vectors. Therefore, GLU encoders compute an MoE with linear expert weights of (at most) rank 1. This relationship between GLUs and conditional computation is consistent with prior work interpreting individual GLU column vectors as experts [92]. Whilst the GLU encoder's model form does not put any inherent restrictions on the total number of rank-1 terms that can contribute to the output, the sparsity necessary for specialization does.

We conclude this section by summarizing the two technical changes needed to transform GLUs into full-rank linear MoEs based on the Hadamard product:

1. Replace $I$ in Equation (18) with learnable, non-diagonal weight matrices for full-rankness.
2. Choose $\psi(\cdot)$ to produce non-negative, sparse coefficients to encourage specialization through sparsity among the experts (for example, a softmax function, or a ReLU activation followed by TopK).

The first of the steps above provides full-rankness, whilst the second brings the sparsity and non-negativity needed for specialization.

³ A mode-$n$ unfolding is simply a reshaping of a higher-order tensor into a matrix, arranging all $N$ expert matrices' column vectors along the columns of a new matrix.
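The rank-1 expert view of Equation (21), and the effect of the first modification above (replacing $I$ with a learnable $C$, recovering the MxD form of Lemma 2), can both be checked numerically. A small numpy sketch with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
I_dim, H, N, O = 10, 16, 8, 4  # illustrative sizes only

# GLU view (Eq. 21): expert n's matrix E diag((I)_n) keeps only column n of E
E = rng.normal(size=(I_dim, H))
x = rng.normal(size=I_dim)
a_glu = rng.random(size=H)  # stand-in for psi(E_GLU^T x)

moe_out = sum(a_glu[n] * (E @ np.diag(np.eye(H)[n])).T @ x for n in range(H))
assert np.allclose(moe_out, a_glu * (E.T @ x))  # matches Eq. (17)/(18)
assert all(np.linalg.matrix_rank(E @ np.diag(np.eye(H)[n])) == 1 for n in range(3))

# MxD view (Lemma 2): replacing I with a learnable C gives full-rank experts
C = rng.normal(size=(N, O))
D = rng.normal(size=(H, O))
z = rng.normal(size=H)
a = rng.random(size=N)
lhs = sum(a[n] * (D @ np.diag(C[n])).T @ z for n in range(N))
assert np.allclose(lhs, (C.T @ a) * (D.T @ z))
```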
We include a notebook showing this connection in PyTorch at: https://github.com/james-oldfield/MxD/blob/main/glus-to-moes.ipynb.

A.5 Hadamard-factorized tensors generalize MoVs

Prior work [37] proposes to linearly combine $N$-many (IA)³ adapters [38] for parameter-efficient MoEs for instruction fine-tuning. The implementation results in a very similar functional form to the factorized forward pass in MxDs. Interestingly, the Hadamard product parameterization of the third-order weight tensor in Equation (4) provides a more general framework through which one can also derive MoVs' model form, shedding light on the relationship to the proposed MxDs and their benefits. Concretely, factorizing the weight tensor instead along the second mode as $W(n, :, o) = c_n \ast d_o \in \mathbb{R}^H$ in our framework immediately recovers MoV [37] as a special case. In particular, in contrast to the MxD in Appendix A.3, whose weight tensor can be parameterized equivalently through its mode-3 unfolding [34], MoV's implicit weight tensor can be given in terms of its mode-2 unfolding, through a similar Khatri-Rao product of two factor matrices. In analogy, MoVs would yield expert weights by pre-multiplying $D$ as: $W_n = \mathrm{diag}(c_n) D \in \mathbb{R}^{H \times O}$ (for a much larger $C \in \mathbb{R}^{N \times H}$). Due to $H \gg O$, our proposed MxD formulation yields around $4\times$ the number of specialized units as MoVs with the same parameter budget (yet MoVs' experts are of no higher rank than MxDs'),
making MxDs a much more suitable and efficient class of layer for our goal of scalable specialization. We therefore see that the proposed lens of tensor methods for unification provides valuable insights about how to design more interpretable layers with the minimum trade-off to capabilities.

B Additional quantitative results and ablations

B.1 Faithfulness in output space

Our main experiments measure model faithfulness in latent space: how well the sparse layer variants reconstruct the intermediate MLPs' mapping. Here, we provide additional experiments comparing the faithfulness of sparse layers as their computation propagates to the model output space. Concretely, we sample 32 consecutive tokens with the base model and then measure how similar the same generations are when the target MLP layer is replaced with the sparse layers. We sample 512 text snippets from OpenWebText, and use the first 4 words of each as the initial prompts, generating 32 future tokens after each prompt. We plot in Figures 6 and 7 the percentage of the samples' continuations that are identical in the original LLM and hooked LLMs up to $n$ future tokens ahead. We note that this is a rather punishing task: any small deviations quickly compound as $n$ grows. Despite this, we see that the MxDs match the future token generations far better than the baselines, exhibiting more faithfulness in model output space (as well as in latent space). We also show qualitative examples of the first 8 prompts and the subsequent 'diffs' (using Python 3's difflib) of the generated tokens in Figures 8 and 9, where MxDs' superior preservation can be viewed qualitatively.
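The metric behind Figures 6 and 7 can be sketched as follows; this is an illustrative re-implementation with toy token sequences (not the authors' code):

```python
def identical_prefix_fraction(reference, hooked, n):
    """Fraction of samples whose first n generated tokens match the base model's."""
    matches = sum(ref[:n] == hyp[:n] for ref, hyp in zip(reference, hooked))
    return matches / len(reference)

# Toy generations (token-id lists) from a base model and a hooked model
base   = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 9, 9, 9]]
hooked = [[1, 2, 3, 0], [5, 6, 7, 8], [9, 9, 0, 0]]

assert identical_prefix_fraction(base, hooked, 2) == 1.0       # all 2-gram prefixes match
assert abs(identical_prefix_fraction(base, hooked, 3) - 2/3) < 1e-9
```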
[Figure 6 heatmap data omitted; rows correspond to MxD, STC, TC, and SAE, columns to n-gram lengths 0-15.]
Figure 6: Pythia-410m: The percentage of 512 generated samples that contain n words identical to the original model's output (when replacing the base LLM's MLP layer with the sparse layers).

[Figure 7 heatmap data omitted; rows correspond to MxD, STC, TC, and SAE, columns to n-gram lengths 0-15.]
Figure 7: GPT2-124m: The percentage of 512 generated samples that contain n words identical to the original model's output (when replacing the base LLM's MLP layer with the sparse layers).

B.2 Additional reconstruction metrics

To highlight the scale of difference in the reconstructions between MxDs and the baselines, we also plot in Figure 10 the normalized MSE at
the end of training for all models and LLMs. At the smallest values of $K$ (which we care about most for interpretability), MxDs' normalized MSE is up to an order of magnitude smaller than Transcoders'.

B.3 Results on additional layers

We also fully train all models and baselines (with 4 different values of $K$) on different target layers for each model. The results are shown in Figure 11 for 48 additional trained layers for the same setup as in the original paper, using different colours to highlight that these are new results. As can be seen, the same trend holds: MxDs significantly outperform the baselines at small $K$ in all LLMs.

Figure 8: Pythia-410m: The first few generated tokens from the base model ('GT') and the corresponding tokens from the model when the sparse layers replace the target MLP. Red denotes tokens that are removed, orange denotes newly inserted tokens, and green denotes matching tokens.

B.4 Expert rank

This section concerns the matrix rank of the linear experts in parameter-efficient MoEs. We first compare to low-rank MoEs in Appendix B.4.1 to demonstrate the benefits of full-rankness, and then follow up in Appendix B.4.2 by confirming that the learned MxD expert ranks are close to maximum in the trained models.

B.4.1 Comparisons to low-rank MoEs

In this section, we study the impact of expert rank on the ability of efficient MoE layers to reconstruct pre-trained MLP layers' mappings. One compelling alternative to MxDs for efficient conditional computation is the µMoE layer [40], which imposes low-rankness on expert weights to achieve parameter efficiency. Whilst µMoEs are found to perform competitively in the pre-training setting, the impact of low-rankness on approximations of existing layers will determine their suitability in the sparse layer approximation setting studied in this work.
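For reference, the normalized MSE reported in Appendix B.2 above can be sketched as follows. Note the normalization convention used here (dividing by the target's centred energy) is an assumption for illustration; the excerpt does not define the exact convention:

```python
import numpy as np

def normalized_mse(y_true, y_pred):
    """MSE divided by the centred energy of the target (assumed convention)."""
    err = np.mean((y_true - y_pred) ** 2)
    denom = np.mean((y_true - y_true.mean(axis=0)) ** 2)
    return err / denom

rng = np.random.default_rng(2)
y = rng.normal(size=(100, 8))
assert normalized_mse(y, y) == 0.0
# Predicting the per-dimension mean gives a normalized MSE of exactly 1
baseline = np.broadcast_to(y.mean(axis=0), y.shape)
assert abs(normalized_mse(y, baseline) - 1.0) < 1e-12
```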
Figure 9: GPT2-124m: The first few generated tokens from the base model ('GT') and the corresponding tokens from the model when the sparse layers replace the target MLP. Red denotes tokens that are removed, orange denotes newly inserted tokens, and green denotes matching tokens.

We therefore compare to µMoE layers, which we use to compute a linear MoE in place of the MLP's decoder. In CP µMoEs, $N$ experts' weight matrices are jointly parameterized through low-rank tensor structure with the CP decomposition [93, 94] for a chosen rank $R \in \mathbb{N}^+$. With the same learnable encoder and expert gating matrices producing the expert coefficients $a \in \mathbb{R}^N$, and hidden units $z \in \mathbb{R}^H$ generated the same way as in the main paper, we train µMoE layers to approximate the original MLP layer's output with:

$$\mu\mathrm{MoE}(x) = \sum_{n=1}^{N} \sum_{h=1}^{H} \sum_{r=1}^{R} a_n z_h\, D(r, h) \cdot C(r, n) \cdot W(:, r) \in \mathbb{R}^O, \quad (22)$$

where $C \in \mathbb{R}^{R \times N}$, $D \in \mathbb{R}^{R \times H}$, and $W \in \mathbb{R}^{O \times R}$ are the learnable low-rank terms of the implicit third-order tensor parameterizing all $N$ collective experts' weights. We match the MxD experimental configuration as closely as possible for a fair comparison. For the encoders, we mirror MxDs and use the GELU activation function, which we find through ablations in Appendix B.6 to perform the best. We initialize the parameters the same as MxDs and Skip
Transcoders: we use the standard PyTorch linear layer initialization for $D$ and $C$ (and the encoder layers), and initialize $W$ as the zero matrix. We vary the µMoE layer rank $R$, fully training 3 sparse approximation layers for $K = 32$ active experts, varying the total number of experts $N$ to keep the parameter count the same, isolating the impact of the choice of rank. As with the main experiments, we record the downstream model loss when we splice in the trained layer to replace the MLP layers, shown in Figure 12. As can be seen, the µMoE layers perform well when they are close to full-rank (i.e., when the normalized rank $R/O \to 1$). Crucially, however, performance drops off notably when the rank is reduced. Whilst µMoEs still perform far better than neuron-level sparsity methods (i.e., the corresponding CE loss results in Figure 3), we observe that full-rankness is necessary for the most faithful layer approximations, which the proposed MxDs provide by design.

[Figure 10 plot data omitted.]
Figure 10: Normalized MSE at the end of training sparse MLP layers, as a function of the number of active units (i.e., hidden neurons vs experts); with differences as large as an order of magnitude in error.
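Equation (22)'s CP-factorized forward pass, and the resulting rank-$R$ bound on each implicit expert, can be sketched with einsum (illustrative dimensions, not the trained models'):

```python
import numpy as np

rng = np.random.default_rng(3)
N, H, O, R = 8, 16, 4, 2  # R controls the expert rank

C = rng.normal(size=(R, N))
D = rng.normal(size=(R, H))
W = rng.normal(size=(O, R))
a = rng.random(size=N)  # expert coefficients
z = rng.normal(size=H)  # hidden units

# Equation (22): sum_{n,h,r} a_n z_h D(r,h) C(r,n) W(:,r)
y = np.einsum('n,h,rh,rn,or->o', a, z, D, C, W)

# The implicit expert matrix W_n = sum_r C(r,n) d_r w_r^T has rank at most R
Wn = np.einsum('r,rh,or->ho', C[:, 0], D, W)
assert np.linalg.matrix_rank(Wn) <= R

# Contracting expert-by-expert agrees with the factorized forward pass
y_slow = sum(a[n] * np.einsum('r,rh,or->ho', C[:, n], D, W).T @ z for n in range(N))
assert np.allclose(y, y_slow)
```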
As a motivating example of why SparseMoEs and SoftMoEs are not practical: SparseMoEs [36] and SoftMoEs [70] require 2.16 trillion parameters for a single layer, for the same 86k experts we use for Llama-3.2-3B. This is orders of magnitude more parameters than the entire base network itself, making it prohibitively costly for SparseMoEs to scale to sufficiently high expert counts.

B.4.2 MxD empirical expert rank

Next, we show experimentally that the learned experts' matrices $W_n = D\,\mathrm{diag}(c_n) \in \mathbb{R}^{H \times O}$ are very nearly full-rank in practice, corroborating the properties of expert matrices shown theoretically in Lemma 1.

[Figure 11 plot data omitted.]
Figure 11: Additional layer results: model cross-entropy loss preserved when replacing MLPs with Transcoders [27], Skip Transcoders [26], and MxDs, as a function of the number of active units (hidden neurons/experts). These results complement those in the main paper, but here we train a new set of additional models on different layers.

We compute the mean 'normalized rank', which we take for MxDs to be the empirical matrix rank of the learned experts' weights, over
the maximum possible rank given the dimensions:

$$\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{rank}(W_n)}{\min\{H, O\}}. \quad (23)$$

We show in Table 3 the normalized rank across all 4 base models: MxD's learned experts exhibit no rank deficiencies, providing further evidence of the large potential capacity of MxD layers despite their sparsity constraints at the expert level.

Table 3: Mean normalized expert matrix rank of Equation (23) across models, for the first 2k experts in K = 32 trained MxDs; the learned expert matrices are very close to full column rank.

  GPT2-124M      Pythia-410M    Pythia-1.4B    Llama-3.2-3B
  0.99 ± 0.005   0.99 ± 0.007   0.99 ± 0.005   0.99 ± 0.002

B.5 Sparse probing

Sample-level probing. Here, we follow the SAEBench [19] evaluation protocol. In this 'sample-level' setting, each text string is labeled with a binary concept at a global level (e.g., the language of the snippet, or its sentiment). This is in contrast to what we refer to as 'token-level probing', where each token within the text samples is labeled individually (e.g., whether a word is a certain part of speech).

[Figure 12 plot data omitted.]
Figure 12: Comparisons to µMoEs for various choices of (normalized) rank: high-rank weights best preserve the models' downstream cross-entropy loss.

Table 4: Details of sample-level sparse probing datasets used.

  Dataset                                       # Training examples   # Test examples   Classification task description       Number of classes
  fancyzhx/ag_news [48]                         16,000                4,000             News article topic                    4
  codeparrot/github-code [95]                   20,000                5,000             Programming language                  5
  amazon_reviews_mcauley_1and5_sentiment [96]   8,000                 2,000             Positive/negative review sentiment    2
  Helsinki-NLP/europarl [97]                    20,000                5,000             European language                     5
  LabHC/bias_in_bios [98]                       32,000                8,000             Profession from bio                   8
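Equation (23) can be computed directly from the factor matrices; a minimal sketch with illustrative dimensions:

```python
import numpy as np

def mean_normalized_rank(C, D):
    """Equation (23): mean empirical rank of W_n = D diag(c_n) over its maximum."""
    H, O = D.shape
    ranks = [np.linalg.matrix_rank(D @ np.diag(c_n)) for c_n in C]
    return float(np.mean(ranks)) / min(H, O)

rng = np.random.default_rng(4)
C = rng.normal(size=(6, 4))   # 6 experts, O = 4
D = rng.normal(size=(16, 4))  # H = 16
assert mean_normalized_rank(C, D) == 1.0  # generic factors are full column rank
```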
We perform experiments on a total of 24 sample-level sparse probing tasks, with the same 'maximum mean difference' feature filtering applied in [19]. The details of the datasets used are summarized in Table 4.

Token-level probing. We also explore sparse probing for 10 features defined at the token level. For this, we follow [49], and include experiments training probes on the mean feature activations under tokens spanning the surnames of individuals. We note that this is a significantly harder task, and makes even stronger assumptions about the features the dataset includes, but it nonetheless provides some additional weak evidence about the relative feature-learning abilities of the sparse models. Through various surnames, we probe for 6 occupations of individuals, whether or not individuals are alive, and individuals' labeled gender. We also experimented with probing for compound words as in [49], but found no predictive features in our trained models. Details of the surname token-level probing datasets (and the total training examples the tokenizers could parse) are included in Table 5.

Table 5: Details of token-level sparse probing datasets used.

  Dataset           # Training examples   # Test examples   Classification task description   Number of classes
  Occupation [49]   4,784                 1,195             Occupation of individual          6
  Is alive? [49]    4,800                 1,199             Are they alive                    2
  Gender [49]       4,800                 1,200             Labeled gender                    2

Experimental setup. For sample-level probing, we truncate the input strings to
the first 128 tokens for all datasets except the GitHub dataset, where we take the last 128 tokens to avoid license headers [19, 49]. For token-level probing, we instead take only the last 128 tokens, where the final token contains the surname of the individual in question in the datasets of [49]. Binary probes are trained on 80% of the training data (randomly shuffled) with the sklearn library's LogisticRegression module with parameters:

• class_weight='balanced'
• penalty='l2'
• solver='newton-cholesky'
• max_iter=200

A random seed of 42 is used throughout the code to ensure reproducibility.

B.5.1 Sparse probing results

We show in Figure 13 results on 20 additional (sample-level) sparse probing tasks, where MxDs remain competitive with the baselines. We also plot the expert activation (of the single expert with the highest F1 test-set score) for the positive/negative classes for all tasks, split across Figures 14 and 15. One can observe a certain degree of separability between the two semantic clusters of data given by the expert coefficient, thus confirming that individual experts are learning to specialize to particular high-level features. We also include results on 10 token-level probing tasks in Figure 16, with the corresponding activation densities displayed in Figure 17. Whilst MxDs appear to perform slightly less well here on average, they remain competitive as expected.

B.6 Ablations

We turn next to ablation studies to explore the value of the various model components below:

B.6.1 Choice of sparsity constraint

We first train a variety of MxDs on GPT2 models with the TopK activation function [23], and separately train models with a ReLU followed by an explicit $\lambda \|.\|_1$ sparsity penalty on the specialized components, in addition to the reconstruction loss [22]. We show the results in Figure 18, where, similarly to [26], we find the TopK activation to dominate on the sparsity-accuracy frontier; we thus use the TopK activation for all experiments.
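The TopK activation compared here (a ReLU followed by keeping only the K largest entries, in one common formulation following [23]) can be sketched as:

```python
import numpy as np

def topk_activation(x, k):
    """Keep the k largest entries of ReLU(x), zeroing the rest (last axis)."""
    x = np.maximum(x, 0.0)                      # ReLU
    out = np.zeros_like(x)
    idx = np.argsort(x, axis=-1)[..., -k:]      # indices of the top-k entries
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

a = topk_activation(np.array([0.5, -1.0, 2.0, 0.1, 3.0]), k=2)
assert np.count_nonzero(a) == 2
assert np.allclose(a, [0.0, 0.0, 2.0, 0.0, 3.0])
```

This produces the non-negative, sparse coefficients that Appendix A.4 identifies as necessary for expert specialization.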
B.6.2 Choice of MxD encoder

Secondly, we show in Figure 19 the benefits of MxDs' flexibility in inheriting the original MLP layer's encoder form/activation function. All models here are trained from scratch for the same number of tokens and with the same experimental setup as in Section 3.1, with $K = 32$. In the first 3 left-most subfigures, we see the normalized MSE is as low as half when using GELU vs the non-native ReLU activation. We next ablate the impact of inheriting the same encoder as the Llama-3.2-3B base model. In the rightmost subfigure of Figure 19, we train MxDs with ReLU-MLP, GELU-MLP, and Swish-GLU encoders. As can be seen, using a GLU with a Swish activation (matching the base model architecture) yields a normalized MSE almost an order of magnitude smaller than MLPs with GELU/ReLU.

C Feature balance and shared experts

C.1 Expert/feature balance

Following the code of [27, 99], we log how often each unit of specialism/feature is used, over a fixed window of ~1M tokens. We show in Figure 20 the feature frequency at the end of training, where we observe that MxDs see a similar healthy balance of experts to the frequency of usage of features in the
baselines. Interestingly, we observe a small peak of experts that fire more frequently in MxDs (e.g., around -2 on the x-axis), perhaps specializing in common patterns and primitives in natural language.

C.2 Shared experts

We find that, by default, our MxD models naturally learn to use a shared expert, with the remaining experts exhibiting strong specialization in a wide range of themes and linguistic patterns. The use of a shared expert is becoming an increasingly popular design choice, including in the latest Llama 4 models [44]; we therefore allow this pattern to emerge naturally in our base models, further justified

[Figure 13 panel data omitted; panels (a)-(h) cover the Europarl, Github code, Amazon review sentiment, and Bias in Bios datasets, each on GPT2 and Pythia-410m.]
Figure 13: Sample-level sparse probing results on individual experts/features; the best F1 score on a held-out set is presented.

[Figure 14 density plot data omitted; panels cover the AG News, Europarl, Github code, and Amazon review sentiment datasets, each on GPT2 and Pythia-410m.]
Figure 14: [1/2] Sample-level sparse probing results on individual experts for MxDs; here we
plot the values of the expert pre-activation for positive/other classes (in the 1-vs-all setting).

[Figure 15 density plot data omitted; panels show profession-from-biography probing on GPT2 and Pythia-410m.]
Figure 15: [2/2] Sample-level sparse probing results on individual experts for MxDs; here we plot the values of the expert pre-activation for positive/other classes (in the 1-vs-all setting).
[Figure 16 panel data omitted; panels (a)-(f) cover occupation, "is alive?", and gender surname probing, each on GPT2 and Pythia-410m.]
Figure 16: Token-level sparse probing results on individual experts/features; the best F1 score on a held-out set is presented.
[Figure 17 panels: density plots of expert pre-activation values for (a) occupation surname probing on GPT2 (Singer, Researcher, Actor, Athlete, Politician, Journalist); (b) occupation surname probing on Pythia-410m; (c)/(d) alive/dead surname probing on GPT2 and Pythia-410m; (e)/(f) gender surname probing on GPT2 and Pythia-410m.]

Figure 17: Token-level sparse probing results on individual experts for MxDs; here we plot the values of the expert pre-activation for positive/other classes (in the 1-vs-all setting).

[Figure 18 panel: "GPT-2 ablation: L1 penalty vs TopK activation" — mean cross-entropy loss for Transcoders (L1), Transcoders (TopK), Ours (L1), Ours (TopK), and the original model, at L1 penalties from 5.0e-5 to 4.0e-4.]

Figure 18: ReLU+TopK activation function [23] vs ReLU w/ L1 sparsity penalty [22]: both MxDs and
https://arxiv.org/abs/2505.21364v1
Transcoders better recover the cross-entropy loss with the TopK activation.

[Figure 19 panels: normalized MSE for MxD encoder activation ReLU (MLP) vs GELU (MLP) on Pythia-410M, GPT2-124M, and Pythia-1.4B; ReLU/GELU (MLP) vs Swish (GLU) encoder architectures on Llama-3.2-3B.]

Figure 19: Encoder architecture ablation: MSE loss when using ReLU activation vs the GELU used by the base models; and MLPs vs GLUs for Llama (rightmost subfigure).

through the evidence in [43] that shared experts can enhance specialization among the remaining experts. We highlight, however, that a simple trick of sampling K̂ ∼ Unif{K − K/a, K + K/a} for the Top-K̂ activation at train time (e.g., a := 2) is sufficient to remove the dominating shared expert at minimal hit to reconstruction performance, if desired. We train two sets of models with a base K = 32 on GPT2-small and pythia-410m, using a := 2. We first show in Figure 21 the indices of the top-activating experts for the 2 model variants on a template prompt, after training has finished. On the left-hand side of Figure 21, the models route all tokens through the same shared expert at position 1. However, we see on the right-hand side that training with the ‘random-K’ strategy breaks the dependence on a shared expert in position 1. Furthermore, we include in Figure 22 the corresponding train-time MSE loss for the 4 models here as ablations, observing that the random-K strategy also brings comparable performance. Based on these experiments, we recommend this simple training strategy if one desires MxD models without shared experts.

D Detailed experimental setup

We list in Table 6 the resources used for each experiment: the GPU and the indicative run-time for a single model.
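The random-K̂ trick described above can be sketched as follows. This is a minimal NumPy illustration of sampling the sparsity level per training step; the expert-score shapes and helper names are our own assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_mask(scores: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest expert scores per row (token), zeroing the rest."""
    out = np.zeros_like(scores)
    idx = np.argpartition(scores, -k, axis=-1)[..., -k:]
    np.put_along_axis(out, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    return out

def random_k_topk(scores: np.ndarray, k: int = 32, a: int = 2,
                  training: bool = True) -> np.ndarray:
    """Train time: sample K-hat uniformly from {K - K/a, ..., K + K/a};
    eval time: use the fixed base K."""
    if training:
        k = int(rng.integers(k - k // a, k + k // a + 1))  # inclusive upper bound
    return topk_mask(scores, k)

scores = rng.standard_normal((4, 128))     # 4 tokens, 128 experts (illustrative)
masked = random_k_topk(scores, k=32, a=2)  # keeps between 16 and 48 experts/token
```

At evaluation time one would call `random_k_topk(scores, training=False)` to recover the fixed Top-K behavior.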
The mlp_expansion_factor column refers to the expansion factor applied to the input dimension to generate the MLP width in the sparse layers (i.e., H := I · mlp_expansion_factor).

[Figure 20 panels: histograms of log10 feature firing frequency vs number of features, for MxD, TC, and STC on GPT2-124M and Pythia-410M.]

Figure 20: log10 feature sparsity (following [27, 99]); MxDs' experts are well-balanced, similar to the baselines' features.

Table 6: Total training time and resources used to produce the k = 32 experiments (the required compute being roughly the same across models trained with different k).

Model | GPU used | VRAM | Training time | d_in | mlp_expansion_factor | Asset link
GPT2-124m | x1 GeForce RTX 3090 | 24GB | 8h 34m 37s | 768 | 32 | https://huggingface.co/docs/transformers/en/model_doc/gpt2
Pythia-410m | x1 GeForce RTX 3090 | 24GB | 8h 35m 17s | 1024 | 32 | https://huggingface.co/EleutherAI/pythia-410m
Pythia-1.4B | x1 A100 | 80GB | 23h 25m 23s | 2048 | 32 | https://huggingface.co/EleutherAI/pythia-1.4b
Llama-3.2-3B | x1 A100 | 80GB | 2d 3m 50s | 3072 | 32 | https://huggingface.co/meta-llama/Llama-3.2-3B
[Figure 21 panels: the 1st-4th highest expert indices for the leading 4 tokens of the prompt "Who is the president of the USA?", for a model trained with fixed K vs one trained with random K, on GPT2-small and Pythia-410m.]

Figure 21: Top-activating experts for a template prompt with and without using a randomized value of K at train time for TopK expert selection: randomization largely prevents a shared expert. Shown are the leading 4 tokens and expert indices.

[Figure 22 panels: normalized MSE vs training steps (0-120K) for TC, STC, MxD, and MxD (random K), on GPT2-small and Pythia-410m.]

Figure 22: MxD performance with random K sampling: Normalized MSE loss as a function of training steps using a fixed Top K := 32 expert selection and when sampling K̂ ∼ Unif{K − K/2, K + K/2}.

D.1 Feature steering details

For the steering experiments, we use two LLM judges to grade generations on two axes.
The full template prompt we feed to gemini-2.0-flash and llama-4-scout-17b-16e-instruct is as follows (note that line breaks and emphases are included here only to aid visualization):

Prompt given to LLM judges:

  You are an expert evaluator of synthetic text.
  TASK: Rate a collection of {num_samples} samples along two independent axes.
  AXIS 1 – CONCEPT COHERENCE:
    0.00 no shared concepts/themes/style.
    0.25 faint overlap.
    0.50 some overlap or similar structure.
    0.75 mostly the same concepts or structure; a few partial drifts.
    1.00 all snippets clearly share the same concepts, themes, style, or structure.
  AXIS 2 – GRAMMATICAL FLUENCY:
    0.00 incomprehensible.
    0.25 dense errors; meaning often obscured.
    0.50 frequent errors; meaning still mostly recoverable.
    0.75 minor errors that rarely hinder comprehension.
    1.00 completely grammatical and natural. (Do not penalise fluency if a snippet starts or ends abruptly.)
  SCORING: Choose any real value in [0, 1] for each axis.
  OUTPUT FORMAT: Respond with exactly two numbers formatted ‘0.00, 0.00’ in the order [coherence, fluency] and no other text or symbols.
  TEXT TO EVALUATE: {samples}

E Additional qualitative results

We show in Figures 23 and 24 tokens activating the first 9 experts as they appear numerically. We sample 6 bins of expert coefficient value to show both tokens that highly activate the experts and those that do so only mildly. As can be seen,
both high- and low-level specializations emerge in both GPT and Pythia models. Whilst we observe specializations to a range of concepts (such as punctuation, MMO games, words in specific contexts), we do not notice any systematic differences between the types of expert specializations that emerge between the two models in MxD layers.

Figure 23: Tokens activating the first 9 numerical experts on MxDs with K = 32 trained on Pythia-410m; we sample 6 bands of activations to show both tokens that highly activate experts and those that activate them only mildly. Magnitude of activation is denoted by the orange highlight. Moderate specialism emerges, e.g., to MMO games, abbreviations, and words in specific contexts.

Figure 24: Tokens activating the first 9 numerical experts on MxDs with K = 32 trained on GPT2-124m; we sample 6 bands of activations to show both tokens that highly activate experts and those that activate them only mildly. Magnitude of activation is denoted by the orange highlight. Moderate specialism emerges, e.g., to punctuation, names, and months.
arXiv:2505.21372v1 [cs.LG] 27 May 2025

Improving LLM-based Global Optimization with Search Space Partitioning

Andrej Schwanke∗1, Lyubomir Ivanov∗1, David Salinas1,2, Fabio Ferreira1, Aaron Klein4, Frank Hutter3,2,1, Arber Zela∗1

1University of Freiburg, 2ELLIS Institute Tübingen, 3Prior Labs, 4ScaDS.AI, University of Leipzig

Abstract

Large Language Models (LLMs) have recently emerged as effective surrogate models and candidate generators within global optimization frameworks for expensive blackbox functions. Despite promising results, LLM-based methods often struggle in high-dimensional search spaces or when lacking domain-specific priors, leading to sparse or uninformative suggestions. To overcome these limitations, we propose HOLLM, a novel global optimization algorithm that enhances LLM-driven sampling by partitioning the search space into promising subregions. Each subregion acts as a “meta-arm” selected via a bandit-inspired scoring mechanism that effectively balances exploration and exploitation. Within each selected subregion, an LLM then proposes high-quality candidate points, without any explicit domain knowledge. Empirical evaluation on standard optimization benchmarks shows that HOLLM consistently matches or surpasses leading Bayesian optimization and trust-region methods, while substantially outperforming global LLM-based sampling strategies.

1 Introduction and Motivation

Global optimization [26, 44] (also known as gradient-free or zeroth-order optimization) of blackbox functions, where the only information provided to the optimizer is the function value, is a fundamental challenge across numerous domains including hyperparameter tuning [50, 54], policy search [12], and molecular design and chemical engineering [33, 25], just to name a few. Methods such as Bayesian optimization [47, 21] and evolutionary algorithms [24] have been a standard and effective choice across various applications.
However, they typically require assumptions regarding the underlying objective function's nature, which consequently affect algorithmic design choices. At the same time, recent advances in Large Language Models (LLMs) have demonstrated remarkable capabilities in generative modelling and reasoning [9, 39, 53], suggesting their potential usage for optimization tasks as well [51]. Efforts in integrating LLMs within blackbox optimization algorithms as surrogate models or as candidate samplers have already shown encouraging results [60, 34, 64, 2, 1, 32]. Yet these methods typically rely on carefully engineered, domain-specific prompts, and in higher dimensions and complex search spaces the LLM's suggestions tend to scatter sparsely, covering only a fraction of the domain [31].

As a motivating example, we investigated the capabilities of LLMs to simulate uniform sampling from a unit hypercube. In Figure 1a we show 80 samples drawn from the unit square [0, 1]^2, comparing uniform sampling (blue) with Gemini-1.5's [43] attempt at simulating uniform sampling using the prompt provided in Listing 1 in Appendix D (green points), and Gemini-1.5 performing uniform sampling with 5 samples per smaller subregion, using the same prompt (red points). We can clearly notice that even in 2D the LLM demonstrates high bias when sampling, therefore failing to appropriately fill the space as it was tasked to, whilst partitioning the space and prompting the LLM

∗Equal contribution. Email to: {schwankea, ivanovl, zelaa}@cs.uni-freiburg.de

[Figure 1 panels (a) and (b): scatter plots of LLM Sampling vs LLM Partition Sampling in the unit square, and the two-minima function with sample points; panel (c): Hausdorff distance d_H(P, [0,1]^8) vs number of samples.]
[Figure 1(c) legend: uniform random samples vs DeepSeek R1, Mistral Large, Grok 3 Beta, LLaMA 4.0 Maverick, Claude 3.7, Gemini 1.5, and Gemini 1.5 + partitioning.]

Figure 1: (a) 80 samples in [0, 1]^2: Gemini-1.5 simulating uniform sampling (green), and with region-wise partitioning (red) using the prompt in Listing 1. (b) Gemini-1.5 prompted (see Listing 2) to generate 80 samples around the 2 minima (red crosses) globally (triangles) and within the two bounding boxes (circles). (c) Hausdorff distance d_H(P, [0,1]^8) for uniform vs. LLM-simulated sampling in the 8-D hypercube.

16 times yields a more faithful simulation. Another illustrative example is shown in Figure 1b, where we prompt Gemini-1.5 (using the prompt shown in Listing 2) to sample close to the two global minima (red crosses) of a quadratic function, given the input space boundaries. We can clearly notice the higher sampling bias when the input space is [0, 1]^2 instead of the smaller regions denoted via the dashed bounding boxes. Finally, in Figure 1c, we compute the Hausdorff distance, d_H(P, [0,1]^8), between the set of N ∈ {10, 20, 50, 70, 100} sampled points P and the 8-dimensional unit hypercube [0,1]^8. The blue curve indicates the values for standard uniform sampling, and the other curves those of various non-agentic LLMs. Similarly to the 2D case, partitioning the hypercube into 32 regions and sampling within each (Gemini 1.5 + partitioning) notably improves the spatial coverage, enabling the LLM to more closely approximate uniform sampling.

In this paper, we introduce Hierarchical Optimization with Large Language Models (HOLLM), a novel blackbox optimization method that leverages adaptive spatial partitioning to guide LLM-based sampling. HOLLM iteratively builds a KD-tree on existing evaluation data, creating adaptive local partitions whose granularity evolves with sampling density.
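The coverage metric used in Figure 1c can be estimated with a short Monte-Carlo routine: since the sample set P lies inside [0,1]^d, d_H(P, [0,1]^d) reduces to the covering radius sup_x min_p ||x − p||. The probe count, seed, and synthetic "clustered" sampler below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def hausdorff_to_cube(P: np.ndarray, n_probe: int = 4000) -> float:
    """Monte-Carlo estimate of d_H(P, [0,1]^d) for a point set P inside the
    unit cube: the largest distance from a random probe point in the cube
    to its nearest sample in P (the covering radius of P)."""
    probes = rng.random((n_probe, P.shape[1]))
    nearest = np.linalg.norm(probes[:, None, :] - P[None, :, :], axis=-1).min(axis=1)
    return float(nearest.max())

# A sampler biased toward the center covers the 8-D cube far worse than uniform,
# mirroring the gap between uniform and biased LLM sampling in Figure 1c.
uniform = rng.random((100, 8))
clustered = np.clip(0.5 + 0.05 * rng.standard_normal((100, 8)), 0.0, 1.0)
```

A larger covering radius for the clustered sample indicates poorer space-filling.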
Each subregion is assigned a bandit-inspired utility score, balancing exploitation (regions with promising observed values) and exploration (geometrically large or statistically uncertain regions). Subregions are selected stochastically according to these scores, and LLMs then generate localized candidate proposals within the chosen regions. As LLMs trained on optimization literature and scientific data encode a valuable meta-prior about typical function behavior (e.g., local unimodality), we effectively harness this prior without assuming a fixed parametric surrogate (e.g., a Gaussian Process). Furthermore, restricting candidate generation to smaller, lower-dimensional subregions significantly reduces LLM sampling difficulty compared to global high-dimensional sampling. We note that spatial partitioning heuristics have already proven effective in continuum-armed bandits [38, 10, 55, 23], Trust Region Bayesian optimization [18, 15], and Monte Carlo Tree Search [27, 56, 61]. The key contribution of this work is the integration of these partitioning ideas to substantially improve LLM-driven global optimization performance.

Empirical evaluations on continuous and discrete benchmark functions, including hyperparameter optimization and neural architecture search, demonstrate that HOLLM effectively balances exploration and exploitation, matching or outperforming state-of-the-art methods, including established Bayesian optimization variants and Trust Region algorithms, particularly in scenarios requiring efficient navigation of complex landscapes. Furthermore, compared to approaches that prompt the LLM to propose candidates globally, HOLLM achieves considerable gains by focusing LLM suggestions locally. We provide the implementation of our algorithm in the following repository: https://github.com/automl/hollm.

2 Background and Related Work

We consider the problem of maximizing a blackbox function
f : X → R, where X is a compact domain. The objective is to find x* = argmax_{x∈X} f(x) through a sequence of function evaluations. In this blackbox setting, we do not have access to gradients or other properties of f, and can only observe function values at queried points. The performance of optimization algorithms in this context can also be measured using simple regret or cumulative regret. For a sequence of evaluated points x_1, x_2, ..., x_t, the simple regret after t iterations is defined as r_t = f(x*) − max_{i∈{1,...,t}} f(x_i), while the cumulative regret is R_t = Σ_{i=1}^{t} (f(x*) − f(x_i)).

Bayesian Optimization. Bayesian Optimization (BO) [21, 20, 47] is a well-established framework for optimizing expensive blackbox functions by maintaining a probabilistic surrogate (typically a Gaussian Process [41]) to guide evaluations and optimizing an acquisition function (e.g., Expected Improvement [63]) in order to balance exploration and exploitation and efficiently search the space. Extensions like TuRBO [18, 15] address high-dimensional settings by maintaining multiple trust regions, which are dynamically resized based on optimization progress, enabling scalable and focused exploration around promising evaluations via local GPs.

Multi-Armed Bandits and Hierarchical Optimization Algorithms. Multi-Armed Bandits (MABs) [49] deal with the problem of sequential decision-making under the exploration-exploitation dilemma. In the basic setting, a MAB algorithm repeatedly selects among a fixed (possibly infinite) number of arms or actions, each with an unknown reward distribution, aiming to minimize the cumulative regret. In the global optimization setting, the arms are the points that lie in the input space X, and at each iteration t, an arm x_t ∈ X is pulled and the regret is computed by evaluating the function f(x_t) [23]. Several MAB algorithms leverage hierarchical space partitioning [29].
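The two regret notions defined above are straightforward to compute from an evaluation trace; a minimal sketch follows (the optimum value f(x*) is assumed known here purely for illustration):

```python
def simple_regret(f_star: float, values: list) -> float:
    """r_t = f(x*) - max_{i<=t} f(x_i): gap between the optimum and the best point found."""
    return f_star - max(values)

def cumulative_regret(f_star: float, values: list) -> float:
    """R_t = sum_{i=1}^{t} (f(x*) - f(x_i)): total shortfall over all evaluations."""
    return sum(f_star - v for v in values)

trace = [0.2, 0.7, 0.5]       # observed f(x_1), f(x_2), f(x_3); optimum f(x*) = 1.0
r3 = simple_regret(1.0, trace)      # 1.0 - 0.7
R3 = cumulative_regret(1.0, trace)  # 0.8 + 0.3 + 0.5
```

Note that simple regret is non-increasing in t, while cumulative regret never decreases, which is why the former is the natural target in pure global optimization.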
Most notably, HOO [10] constructs a hierarchical partitioning of the search space using n-ary trees, and at each step an unexplored region (a tree leaf) is selected based on upper confidence bounds (UCB) [5, 30] and f is evaluated at a point uniformly sampled inside the selected region. Building on HOO, extensions include parallel versions [23], optimization without explicit smoothness knowledge [38] or under noisy observations [55], and adaptive trees [11], including Monte Carlo bandits [56, 61]. Most of these methods come with theoretical guarantees on regret bounds that depend on the dimensionality and smoothness properties of the objective function.

Large Language Models for Blackbox Optimization. Recent work has increasingly explored integrating LLMs into blackbox optimization workflows. Some approaches prompt LLMs directly to generate candidate solutions in natural language [34, 60, 1, 64], use them to estimate uncertainty [40], extract features [32], or even design novel acquisition functions [2]. Others replace traditional surrogate models with LLMs to predict function values and guide search in applications such as hyperparameter tuning [34] and molecular design [40]. However, these methods often rely on carefully engineered prompts containing domain-specific information (e.g., dataset statistics or problem descriptions), raising concerns about their robustness in domains where this information is not available. Recent work by [31] shows that, in simple MAB settings, LLMs struggle to explore effectively without significant prompt intervention, highlighting their
limitations in decision-making. Our algorithm builds upon these foundations in order to improve LLM-based blackbox optimization by integrating tree-based space partitioning, a UCB-inspired score function for balancing exploration and exploitation, and LLM-based candidate generation within locally promising regions.

3 HOLLM: Hierarchical Optimization with LLMs

In this section, we present the HOLLM algorithm for optimizing potentially noisy blackbox functions f : X → R, which consists of 5 main steps: Partition, Score, Select, Sample and Evaluate. Given an initial set of n_0 evaluations D_{n_0} = {(x_i, f(x_i))}_{i=1}^{n_0}, the algorithm iteratively calls each of these steps. It starts by adaptively partitioning the search space in a data-driven way, scores each of these regions to balance exploration-exploitation, selects the M most promising regions based on their score, leverages LLMs to sample candidates within these regions, and finally evaluates the best candidates according to their predicted function value from the LLM. We provide an illustrative depiction of these steps in Figure 2. This approach allows the LLM to focus on promising smaller regions of the space while benefiting from the global partitioning strategy. We provide the algorithm pseudocode in Algorithm 1 and a more detailed version in Appendix A. In the following, we explain each step in detail.

3.1 Partition: Adaptive Discretization

Based on the motivating examples we presented in Section 1, we hypothesize that first identifying promising smaller regions in the input space X makes the LLM-based sampling more reliable

[Figure 2 panels: five 2D scatter plots illustrating the Partition, Score, Select, Sample, and Evaluate steps.]

Figure 2: Overview of the HOLLM algorithm: starting from initial data D, it iteratively performs Partition, Score, Select, Sample (via LLM), and Evaluate steps to balance exploration and exploitation.
For the partitioning here, we utilized a KD-tree where each axis is split based on the mean values. Each rectangle represents a partition defined by the tree leaves. The red stars represent the new sampled points from the LLM.

compared to prompting the LLM to sample globally. To this end, we propose using an adaptive input-space partitioning method based on the evaluated data at each iteration of the algorithm. In order to obtain disjoint space partitions that cover the entire space, we use k-dimensional trees (KD-trees), a space-partitioning data structure that recursively divides the space into half-spaces, so that we can efficiently compute the partitions in high dimensions (O(t log t) for a balanced tree, where t is the number of iterations), whereas other methods, such as a Delaunay triangulation [22] or a Voronoi diagram [27, 59], would quickly become impractical as the dimension d increases. Each non-leaf node in a KD-tree represents a splitting hyperplane perpendicular to one of the coordinate axes, dividing the space into two parts. Points on the "left" side of this hyperplane are represented by the left subtree, and points on the "right" side are represented by the right subtree. Starting from the root node X_∅ = X, every internal node chooses a split dimension s (the one
with the largest variance among points in the node) and a split value δ (the mean across the selected dimension). This produces two child nodes X_left = {x ∈ X : x_s ≤ δ} and X_right = {x ∈ X : x_s > δ}, whose union equals their parent and whose interiors are disjoint. After inserting n sample points, the K leaves {X_ℓ}_{ℓ=1}^{K} form a partition of X into axis-aligned hyperrectangles, and each leaf contains information about the points evaluated within it, including their coordinates and function values. We denote the set of indices each leaf X_ℓ holds as I_ℓ = {i ≤ t : x_i ∈ X_ℓ}, with sample size n_ℓ = |I_ℓ| ≤ m_t, where m_t is the maximum number of points a leaf in the KD-tree can keep before splitting, parameterized by the number of iterations. At the start of round t, we optionally set m_t = m_0 + λ log(1 + t), where m_0 (⌈d/2⌉ by default) is the initial leaf size and λ (0 by default) is the growth parameter. The logarithmic growth of m_t ensures that the partitions do not become too fine-grained too quickly.

An infinite-armed bandit view. Conceptually, the KD-tree can be interpreted as a data-driven discretizer in infinite-armed bandits [57, 13]: its leaves form a coarse partition at the beginning of learning and refine only where information accumulates, mirroring the "zooming" phenomenon in continuum-armed bandits [29, 57, 38, 10, 48, 55, 11, 13, 23]. A key distinction, however, is that the entire tree is re-fitted at every round, adapting the partitions' boundaries based on the current data to avoid potential early convergence to local minima. This strategy is similar to [56, 61], where recomputing the partitioning every few iterations was observed to yield better empirical performance. Although partition boundaries may merge or shift, one can frame the procedure as operating on a fixed, infinite KD-tree whose internal nodes are activated and deactivated on the fly, as abstracted in adaptive-treed bandits [11, 48]. Following this paradigm, every axis-aligned hyperrectangular partition X_ℓ ⊂ X can be seen as a "meta-arm" [48].
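The splitting rule described above (split the highest-variance dimension at its mean until leaves hold at most m_t points) can be sketched as follows. The explicit box-bounds bookkeeping is our simplification, since the paper derives leaf bounds from the points each leaf contains:

```python
import numpy as np

def kd_partition(X: np.ndarray, bounds: np.ndarray, max_leaf: int):
    """Recursively split points into axis-aligned boxes.

    X: (n, d) evaluated points; bounds: (d, 2) box [low, high] per dimension.
    Returns a list of (bounds, points) leaves covering the input box."""
    if len(X) <= max_leaf:
        return [(bounds, X)]
    s = int(np.argmax(X.var(axis=0)))   # split dimension: largest variance
    delta = float(X[:, s].mean())       # split value: mean along dimension s
    mask = X[:, s] <= delta
    if mask.all() or (~mask).all():     # degenerate split (identical points): stop
        return [(bounds, X)]
    lb, rb = bounds.copy(), bounds.copy()
    lb[s, 1] = delta                    # left child:  x_s <= delta
    rb[s, 0] = delta                    # right child: x_s >  delta
    return (kd_partition(X[mask], lb, max_leaf)
            + kd_partition(X[~mask], rb, max_leaf))

rng = np.random.default_rng(0)
pts = rng.random((40, 2))
leaves = kd_partition(pts, np.array([[0.0, 1.0], [0.0, 1.0]]), max_leaf=5)
```

Re-running this on the full data D each round corresponds to the re-fitting strategy described in the text.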
The reservoir of all such boxes is uncountable, hence discovering arms, rather than merely pulling them, becomes part of the learning problem. In this language, our algorithm may be viewed as an infinite-armed bandit strategy that (i) repeatedly draws a batch of candidate active input-space partitions by re-fitting a KD-tree to the ever-growing set of evaluations, and (ii) allocates pulls among those boxes according to a score function (as described below).

3.2 Score: Synthesis of Exploitation, Geometry and Uncertainty

At every iteration t the KD-tree yields a finite collection of leaves (partitions) {X_{ℓ,t}}_{ℓ=1}^{K_t}. In order to decide where to spend our limited evaluation budget, we need to rank these leaves based on a scoring function (also called a utility or acquisition function) that balances exploitation of good leaves and exploration of large regions that may hold good points and that are under-sampled.

(i) Exploitation via the empirical maximum. The exploitation term should optimistically reflect the best empirical evidence available for each region. Classical HOO algorithms use the sample mean as a low-variance proxy for the local reward [29, 57, 10, 55]. In global optimization, however, the objective may be highly heteroscedastic, where one exceptionally good point inside an otherwise mediocre box can be more informative than the entire distribution. We therefore let our exploitation
statistic be the largest improvement ever observed in a region X_{ℓ,t}:

f_min(t) = min_{i≤t} f(x_i),  Y_i = f(x_i) − f_min(t) + ε,  µ_{ℓ,t} = max_{i∈I_{ℓ,t}} Y_i.  (1)

We subtract the current empirical minimum f_min(t) (since we are maximizing f) so the values become strictly non-negative and comparable across rounds.¹ Choosing a max rather than an average emphasizes regions that contain a good function value, a behavior also found in acquisition functions in Bayesian optimization [21] and MCTS [56, 61].

(ii) Geometric exploration through hypervolume. Let [l_{ℓ1}, u_{ℓ1}] × ··· × [l_{ℓd}, u_{ℓd}] be the axis-aligned hyperrectangle corresponding to leaf X_{ℓ,t}, where l_{ℓj} and u_{ℓj} are the lower and upper axis values along dimension j, determined by the points in X_{ℓ,t} = {x_i ∈ X : i ∈ I_{ℓ,t}}. In order to assign a high exploration score to regions of the input space that are underexplored, we use the d-th root of the leaves' Euclidean volume Vol(X_{ℓ,t}): V_{ℓ,t} = (∏_{j=1}^{d} (u_{ℓj} − l_{ℓj}))^{1/d}, which is equivalent to the geometric mean of the side lengths of the hyperrectangle and is less sensitive to side lengths along single dimensions than the cell diameter. The d-th root scales Vol(X_{ℓ,t}) so it has the same units as a length. Because axis-aligned boxes shrink anisotropically as the KD-tree refines, the d-th root removes the strong dependence on dimension and yields comparable numbers across d.

(iii) Statistical exploration via a UCB-V term. Even a tiny region may deserve further sampling if it contains a few samples with high variance. Let σ²_{ℓ,t} be the empirical unbiased variance of the observed function values {Y_i}_{i∈I_{ℓ,t}} within region X_{ℓ,t} at iteration t, and let n_{ℓ,t} = |I_{ℓ,t}| be the number of samples in that cell. We adopt an exploration factor inspired by UCB-V (Upper Confidence Bound with Variance estimates) type algorithms [4, 3, 57, 37], and apply it to our dynamic KD-tree partitioning, reminiscent of UCB-AIR for infinite-armed bandits, where the number of arms increases at each iteration [57].
More specifically, we score the region X_{ℓ,t} with:

E_{ℓ,t} = √( 2 σ²_{ℓ,t} max(0, ln(t/(K_t n_{ℓ,t}))) / n_{ℓ,t} ) + c · max(0, ln(t/(K_t n_{ℓ,t}))) / n_{ℓ,t}.  (2)

Here, K_t = |{X_{ℓ,t}}| is the current number of active leaves (partitions) in the KD-tree at iteration t, and c is a positive constant.² The argument of the logarithm, t/(K_t n_{ℓ,t}), compares the average number of samples per region (t/K_t) to the number of samples n_{ℓ,t} in the specific region X_ℓ. This is a concentration term that focuses exploration on regions sampled less frequently than the current average. The max(0, ln(·)) ensures the logarithmic term contributes non-negatively, effectively diminishing the direct exploration incentive from this term for regions sampled more than average relative to K_t. Since the effective noise or function variability can vary significantly across regions, we scale this concentration term inside the first summand with the empirical variance σ²_{ℓ,t} of the corresponding region. The second summand is a correction term characteristic of Bernstein-style concentration bounds [4, 36]. It helps to ensure that the exploration bonus is sufficiently large, particularly when n_{ℓ,t} is small or when the empirical variance σ²_{ℓ,t} happens to be small or zero.³ This makes the exploration strategy more robust for leaves with limited observations.

Final composite score. All components must live on a shared numeric scale; otherwise, whichever component happens to have the largest dynamic range would dominate the others and nullify the intended trade-off. After
each rebuild, we normalize the scores to [0, 1], preserving the intended relative weights even when the set of leaves changes drastically. The total score of each partition determined by the KD-tree partitioning is:

B_{ℓ,t} = µ̄_{ℓ,t} + α_t (β_1 V̄_{ℓ,t} + β_2 E_{ℓ,t}),  (3)

¹ The additive constant ε prevents zero scores during the startup phase.
² c is often related to the range of function values or is a tuning parameter. We set it to 1, since in the total score we weight the total exploration factor.
³ When n_{ℓ,t} < 2, the empirical variance σ²_{ℓ,t} is undefined or zero. To prevent a misleadingly small exploration bonus in such highly uncertain cases, σ²_{ℓ,t} might be initialized to a small positive default value.

Algorithm 1: Hierarchical Optimization with LLMs (HOLLM)
Data: Initial data D, budget T, batch size b, regions to sample from M, proposals per region k
1.  while t ≤ T do
2.      Update temperature α_t (and optionally maximum leaf size m_t)
3.      Partition the space by building a KD-tree on D, obtaining K_t leaves {X_{ℓ,t}}_{ℓ=1}^{K_t}   // Partition
4.      foreach leaf X_{ℓ,t} do
5.          Compute µ_{ℓ,t} (Eq. 1), V_{ℓ,t} = Vol(X_{ℓ,t})^{1/d} and E_{ℓ,t} (Eq. 2)
6.          Normalize and compute total score B_{ℓ,t} = µ̄_{ℓ,t} + α_t (β_1 V̄_{ℓ,t} + β_2 E_{ℓ,t})   // Score
7.      end
8.      Select M leaves by sampling with probabilities p_{ℓ,t} ∝ B_{ℓ,t}   // Select
9.      Generate k candidates for each chosen leaf via LLM_GENERATE(D, X_{ℓ,t}, k)   // Sample
10.     Pick the top b proposals by their LLM-predicted scores
11.     Evaluate f on them, add to D, and set t ← t + b   // Evaluate
12. end
13. return best (x, y) ∈ D

where µ̄_{ℓ,t}, V̄_{ℓ,t}, E_{ℓ,t} are the min-max normalized scores, and β_1, β_2 are hyperparameters (β_1 + β_2 = 1 by default) weighting the geometric versus statistical exploration. The α_t multiplier is a total exploration weight following an annealing schedule (cosine in our experiments). In the early phase (α_t ≈ α_max), B_{ℓ,t} reduces to a near-uniform mixture of exploitation and the two exploratory terms.
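Putting Eqs. (1)-(3) together, the Score step for a fixed set of leaves might look like the following sketch. Here each leaf is a (bounds, values) pair; the min-max normalization, the small-sample variance default, and the parameter defaults follow the text and footnotes, but the data layout is our own assumption:

```python
import numpy as np

def leaf_scores(leaves, t, alpha_t, beta1=0.5, beta2=0.5, c=1.0, eps=1e-8):
    """Composite score per leaf: B = mu_bar + alpha_t * (beta1*V_bar + beta2*E_bar).

    leaves: list of (bounds, values); bounds is (d, 2), values the f-observations."""
    K = len(leaves)
    f_min = min(v for _, vals in leaves for v in vals)
    mu, vol, explore = [], [], []
    for bounds, vals in leaves:
        Y = np.asarray(vals, dtype=float) - f_min + eps        # shifted values (Eq. 1)
        n = len(Y)
        mu.append(Y.max())                                     # empirical max (Eq. 1)
        side = bounds[:, 1] - bounds[:, 0]
        vol.append(float(np.prod(side) ** (1.0 / len(side))))  # d-th root of volume
        var = Y.var(ddof=1) if n >= 2 else 1e-3                # default variance (fn. 3)
        conc = max(0.0, np.log(t / (K * n))) / n               # concentration term
        explore.append(np.sqrt(2.0 * var * conc) + c * conc)   # UCB-V bonus (Eq. 2)
    norm = lambda a: (np.asarray(a) - min(a)) / (np.ptp(a) + 1e-12)
    return norm(mu) + alpha_t * (beta1 * norm(vol) + beta2 * norm(explore))  # Eq. 3
```

With alpha_t annealed toward 0, the returned scores collapse onto the normalized exploitation term, matching the near-greedy late-phase behavior described above.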
Assuming $\beta_1=\beta_2$ and non-drastically changing regions, as $t$ grows, the influence of $\bar{V}_{\ell,t}$ decays faster than that of $E_{\ell,t}$, because the latter itself shrinks with $n_\ell$. Hence, geometric exploration is front-loaded, while statistical calibration persists longer throughout the optimization. When $t$ is close to $T$, the rule essentially becomes a greedy maximizer of $\bar{\mu}_{\ell,t}$, which is optimal once an $\varepsilon$-accurate maximizer has already been isolated. Thus, this composite score represents the classical trade-off: "go where I have seen something good, go where I have not looked at all, and go where my estimate is still uncertain".

3.3 Select: Stochastic Selection of Partitions

Once the score $B_{\ell,t}$ (Eq. 3) has been computed for every leaf, the algorithm must decide where to spend the next evaluation budget of size $b$. The Select step stochastically selects partitions by sampling from a categorical distribution over leaves. At round $t$, we draw without replacement a batch of $M$ distinct leaves, denoted $\mathcal{B}_t$, from this categorical distribution, where the sampling probability is $p_{\ell,t}=B_{\ell,t}/\sum_{r=1}^{K_t}B_{r,t}$, with $\ell$ the leaf index and $1\le\ell\le K_t$. Sampling stochastically instead of selecting the top-$M$ leaves means that sub-optimal leaves are sampled infinitely often [4], potentially helping to mitigate premature convergence, especially on highly non-convex and multimodal functions. Each leaf always has a positive probability due to the small constant $\epsilon>0$ we add to the exploitation term in Equation 1 and the min-max
normalization in Equation 3. As $t$ grows, those exploratory components shrink and $B_{\ell,t}$ becomes increasingly peaked around the empirically best leaves, pushing $p_{\ell,t}$ toward a near-greedy regime. Moreover, a smooth annealing of $\alpha_t$ in Equation 3 avoids an abrupt "switch-to-greedy" policy, which may ignore late-appearing, high-value regions if they happen to be discovered just after the switch. Finally, sampling $M$ leaves without replacement diversifies evaluations by always sampling from distinct regions.

3.4 Sample: LLM-Guided Candidate Generation

After the Select step has identified a batch of leaves $\mathcal{B}_t=\{\mathcal{X}_{1,t},\ldots,\mathcal{X}_{b,t}\}$, which also carry their corresponding hyperrectangular partition boundaries, HOLLM suggests new candidate points inside each chosen partition by prompting an LLM with the following logic: "Given the history of evaluations $\mathcal{D}_t$, propose $k$ new points that are likely to reveal high values of $f$ inside $\mathcal{X}_{i,t}$." We construct a structured prompt (see Appendix D) containing: (i) points in $\mathcal{D}_t$ as in-context examples, (ii) the numeric partition bounds $(l_{is},u_{is})_{s=1}^{d}$ for cell $\mathcal{X}_{i,t}$, and (iii) task instructions to return new proposals and their estimated function values.
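The three prompt ingredients can be assembled mechanically. The sketch below is an illustrative approximation only: `build_region_prompt` is a hypothetical helper and the wording is not the paper's actual template, which is given in Appendix D:

```python
def build_region_prompt(history, bounds, k):
    """Assemble a candidate-generation prompt for one selected leaf.

    history : list of (point, value) pairs used as in-context examples
    bounds  : list of (low, high) per dimension of the leaf's hyperrectangle
    k       : number of proposals requested
    """
    examples = "\n".join(f"x = {list(x)}, f(x) = {y:.4f}" for x, y in history)
    box = ", ".join(f"x{s} in [{lo}, {hi}]" for s, (lo, hi) in enumerate(bounds))
    return (
        "Below are previously evaluated points and their objective values:\n"
        f"{examples}\n"
        f"Propose {k} new points strictly inside the region {box} "
        "that are likely to achieve high objective values. "
        "For each proposal, also report your estimated objective value."
    )

prompt = build_region_prompt(
    history=[((0.2, 0.7), 0.91), ((0.5, 0.1), 0.12)],
    bounds=[(0.0, 0.5), (0.5, 1.0)],
    k=3,
)
```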
Feeding this prompt to the LLM yields

$$(\hat{x}_i,\hat{f}_i)=\mathrm{LLM\_GENERATE}\left(\mathcal{D}_t,(l_{is},u_{is})_{s=1}^{d},k\right),$$

Figure 3: Best function value across 100 iterations on the synthetic problems. HOLLM outperforms or matches the performance of baselines, especially on higher-dimensional problems (e.g., Ackley).

where $\hat{x}_i=\{\hat{x}_{i,1},\ldots,\hat{x}_{i,k}\}\subset\mathcal{X}_{i,t}$ and $\hat{f}_i=(\hat{f}_{i,1},\ldots,\hat{f}_{i,k})\in\mathbb{R}^k$ are, respectively, the LLM's candidate locations and their predicted function values (footnote 4). Across the $M$ selected leaves we thus obtain $k\cdot M$ suggestions. The parameter $k$ trades off the breadth of local exploration against prompt complexity and LLM inference cost. Finally, HOLLM keeps the globally best $b$ proposals according to $\hat{f}$ and evaluates them on the true function.

4 Empirical Evaluation

In this section, we evaluate HOLLM on a variety of search spaces and tasks. These span continuous synthetic functions and discrete search spaces for neural architecture search (NAS) [17, 58] and hyperparameter optimization [19, 62].

Baselines.
On these benchmarks, we compare against different state-of-the-art algorithms from Bayesian optimization (BO), such as multi-fidelity methods (CQR [ 45]), Gaussian Process BO with Expected Improvement (GP-EI [ 50,6]), density estimator methods (TPE [ 7] & BORE [ 52]), trust region BO (TuRBO [ 18]), evolutionary strategies (RE [ 42]) and random search (RS [ 8]). In all benchmarks, we also compare to the global LLM-based optimizer baseline (see Algorithm 2 in appendix) that uses the
exact same prompt structure as HOLLM (we provide the prompt templates in Appendix D), with the only difference being the region boundaries.

Setup. Starting from $n_0=5$ initial random evaluations, we run each method 3 times for a total of $T=100$ iterations with different random seeds and report the mean and standard error. We use the implementations in SyneTune [46], except for TuRBO, for which we use the authors' official BoTorch code [6]. Unless stated otherwise, for HOLLM we always decay the exploration coefficient $\alpha_t$ from 1.0 to 0.01 using a cosine annealing schedule [35], with a batch size $b=4$, $M=5$ selected partitions, $k=5$ proposals per selected partition, and a fixed maximum leaf size $m_t=m_0=\lceil d/2\rceil$. In Appendix C.1, we provide ablations on these hyperparameter choices. We use Gemini-1.5-Flash as the LLM in Sample due to its fast inference speed, low cost, and large context window. Importantly, the LLM is provided with only minimal task information: the input dimensionality, variable names where applicable (e.g., hyperparameter names), partition boundaries, and in-context examples. No task-specific descriptions or dataset statistics are included. While prior work [34] shows that performance can improve by enriching prompts with such information, we avoid this to prevent potential contamination and reporting performance on overly engineered prompts. We provide the full experimental details in Appendix B.

Footnote 4: We prompt the LLM to generate candidates and predict their performance either with a single prompt or with two prompts, one for generation and one for prediction.
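The cosine annealing schedule referenced above (spelled out again in Algorithm 3, Appendix A) is simple to reproduce. This sketch uses the stated defaults $\alpha_{\max}=1.0$ and $\alpha_{\min}=0.01$; the helper name is illustrative:

```python
import math

def cosine_alpha(t, T, alpha_min=0.01, alpha_max=1.0):
    """Cosine-annealed exploration weight alpha_t (cf. Algorithm 3, line 5)."""
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1.0 + math.cos(math.pi * t / T))

# Exploration weight starts at alpha_max and decays smoothly to alpha_min.
schedule = [cosine_alpha(t, 100) for t in (0, 50, 100)]
```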
Figure 4: Illustrative example of HOLLM optimizing a 1D multimodal problem. The rectangles represent the space partitions (top figure) and are highlighted in orange whenever they are selected based on their respective probabilities (bottom figure). We used a batch size of 3. All new points (red stars) are LLM suggestions. Notice how partitions become fine-grained around the global maximum.

4.1 Synthetic Functions

We benchmark on six synthetic functions of varying nature and dimensionality: Hartmann-3D and Hartmann-6D (smooth but sharply multimodal), Rosenbrock-8D (unimodal with a narrow valley and ill-conditioning), Rastrigin-10D (regular multimodality with $10^{10}$ local minima), Lévy-10D (plateaus, cliffs, and funnels), and Ackley-20D (flat regions and a single sharp global minimum at the origin). These functions pose challenges ranging from fine local search to broad exploration. See Table 1 in Appendix B.2 for more details on these functions. Results presented in Figure 3 show that HOLLM consistently outperforms the global LLM baseline, especially on the multimodal functions, also exhibiting less variance between runs. It also matches or surpasses all other baselines. Most notably, on Ackley-20D with input range $[-32.768, 32.768]^{20}$, HOLLM locates the global maximum in just 50 iterations, while baselines struggle to
improve beyond random search.

Visualizing the Optimization Process. In Figure 4, we show a visualization of HOLLM's mechanics on a 1D multimodal function. The rectangles represent the KD-tree space partitions, and they are highlighted in orange whenever they are selected. We can see that during the first iterations the partitions are larger and HOLLM is more exploratory, as also confirmed by the regions' respective probabilities (bottom bar plot). Later on, as the regions become smaller, high modes are identified, and by the end the score probability mass concentrates around the global maximum. We provide similar visualizations for Lévy-1D and Rosenbrock-1D in Appendix C.

4.2 Hyperparameter Optimization

We assess the effectiveness of HOLLM on hyperparameter optimization by optimizing the 9D categorical space from FCNet [28], where the task is to minimize the validation MSE of a fully connected network on 4 distinct datasets: PROTEIN, NAVAL, PARKINSONS and SLICE. See Appendix B.3 for more details on this search space. Results shown in Figure 5 demonstrate that our method outperforms or is on par with methods such as BORE and CQR, which are typically the best off-the-shelf choices on these benchmarks. Compared to the global LLM baseline, we see clear improvements on all datasets except Parkinsons, where the LLM seems to benefit more from sampling globally and reaches a low MSE after only 20 iterations. This may be due to potential outliers in the data that impact HOLLM's performance.
Figure 5: Hyperparameter optimization on 4 datasets from the FCNet search space (Naval, Parkinsons, Protein, Slice). All baselines from SyneTune are evaluated asynchronously using 4 workers.

Figure 6: Results on the NAS-Bench-201 6-dimensional discrete function (CIFAR-10, CIFAR-100, ImageNet-16). We plot the negative regret vs. the number of iterations. We run each method 6 times and report the mean and standard error.

4.3 Neural Architecture Search

Neural Architecture Search (NAS) [58], like hyperparameter optimization, aims to identify the best-performing neural network architecture for a given dataset by maximizing validation accuracy. We use the NAS-Bench-201 benchmark [16], which provides precomputed validation accuracies for all architectures on CIFAR-10, CIFAR-100, and Downsampled ImageNet 16×16 [14]. The search space is 6D, with each dimension representing a discrete choice among 5 possible layer operations. See Appendix B.4 for full details.
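Because HOLLM operates on continuous hyperrectangles, a discrete search space like this one needs a decoding step from continuous coordinates back to operation choices. A minimal sketch of one such decoder, assuming equal-width bins per dimension (the exact mapping used in the experiments is an assumption here):

```python
def decode_architecture(x, n_ops=5):
    """Map a continuous point in [0,1]^6 to discrete operation indices.

    Each coordinate is binned into one of n_ops equal-width intervals;
    the clamp handles the boundary value x_i = 1.0.
    """
    return [min(int(xi * n_ops), n_ops - 1) for xi in x]

# One operation index per edge of the NAS-Bench-201 cell:
ops = decode_architecture([0.05, 0.25, 0.45, 0.65, 0.85, 0.999])
```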
We use a continuous representation $[0,1]^6$ of the input space and discretize it to evaluate the true function. As seen in Figure 6, HOLLM always outperforms the LLM baseline that samples globally and is on par with BORE and CQR. The global LLM seems to get stuck in local minima, leading to stagnating performance from early on.

Figure 7: Ablations on HOLLM's components (regret on NAS-Bench-201 CIFAR-100).

Ablations. To assess the impact of key design choices in HOLLM, we perform the following ablations: (1) we modify the score function in Equation 3 by isolating either the exploitation or the exploration term; (2) we replace the variance-aware UCB-V bonus in Equation 2 with the simpler UCB1 [5]; (3) we substitute the categorical distribution used in line 8 of Algorithm 1 with a uniform distribution; and (4) we replace the LLM-based sampler in Sample with non-LLM baselines such as uniform sampling and samples from a local Gaussian process fitted to each partition. Results in Figure 7 show that the choice of candidate sampler in Sample has the most significant effect on regret.

5 Conclusion, Limitations and Societal Impact

We propose HOLLM, a novel LLM-based global optimization method for expensive black-box functions that combines adaptive KD-tree partitioning, a bandit-inspired score function, and LLM capabilities for generating new candidate points in locally promising regions. HOLLM excels especially on multimodal functions with many local optima that pose a risk of premature convergence, as well as on hyperparameter optimization and neural architecture search, consistently outperforming LLMs with a global sampling policy and other non-LLM state-of-the-art methods.

Limitations. While HOLLM combines nonparametric bandit methods with LLM-based sampling and shows strong empirical performance, it has several limitations.
First, the approach currently lacks formal theoretical guarantees, particularly regarding dynamic partitioning and regret bounds, which we leave for future work. Second, its effectiveness depends heavily on the quality of LLM-generated proposals; biased or miscalibrated models can misguide the search or waste evaluations. Third, the inference and monetary cost of LLMs, especially proprietary ones, can limit scalability in high-dimensional settings. Finally, although default parameter values perform well in our experiments, real-world deployment may require tuning them to avoid premature convergence or excessive exploration.

Impact. The use of LLMs in global optimization has societal implications. HOLLM has the potential to accelerate progress in areas such as drug discovery, materials design, and energy systems by reducing experimental costs and enabling personalized solutions. On the other hand, reliance on LLMs trained on biased data risks perpetuating social injustices when guiding sensitive decisions (e.g., hiring). Additionally, repeated LLM queries incur considerable energy costs, and the opacity of LLM-driven decisions may limit transparency and reproducibility. Therefore, responsible deployment requires bias assessment, usage controls, and transparency about both computational and ethical impacts.

Acknowledgments

Robert Bosch GmbH is acknowledged for financial support. Fabio Ferreira and Frank Hutter acknowledge funding by the European Union (via ERC Consolidator Grant DeepLearning 2.0, grant no. 101045765). Views and opinions expressed are however those of the
author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Aaron Klein acknowledges the financial support by the Federal Ministry of Education and Research of Germany and by the Sächsische Staatsministerium für Wissenschaft, Kultur und Tourismus in the programme Center of Excellence for AI-research "Center for Scalable Data Analytics and Artificial Intelligence Dresden/Leipzig", project identification number: ScaDS.AI. Frank Hutter acknowledges the financial support of the Hector Foundation. We also thank Google Cloud for their free trial program, which enabled us to use the Google Gemini models throughout this project.

References

[1] Dhruv Agarwal, Manoj Ghuhan Arivazhagan, Rajarshi Das, Sandesh Swamy, Sopan Khosla, and Rashmi Gangadharaiah. Searching for optimal solutions with LLMs via Bayesian optimization. In The Thirteenth International Conference on Learning Representations, 2025.

[2] Virginia Aglietti, Ira Ktena, Jessica Schrouff, Eleni Sgouritsa, Francisco J. R. Ruiz, Alan Malek, Alexis Bellot, and Silvia Chiappa. FunBO: Discovering acquisition functions for Bayesian optimization with FunSearch, 2025.

[3] Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the 22nd Annual Conference on Learning Theory, pages 217–226, Montreal, Canada, June 2009.

[4] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876–1902, April 2009.

[5] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, May 2002.

[6] Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy.
BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. In Advances in Neural Information Processing Systems 33, 2020.

[7] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.

[8] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(10):281–305, 2012.

[9] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NeurIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc.

[10] Sébastien Bubeck, Rémi Munos, Gilles Stoltz, and Csaba Szepesvári. X-armed bandits. Journal of Machine Learning Research, 12:1655–1695, July 2011.

[11] Adam Bull. Adaptive-treed bandits. Bernoulli, 21, 2013.

[12] Roberto Calandra, André Seyfarth, Jan Peters, and Marc Peter Deisenroth. Bayesian optimization for learning gaits under uncertainty. Annals of Mathematics and Artificial Intelligence, 76(1–2):5–23, February 2016.

[13] Alexandra Carpentier and Michal Valko. Simple regret for infinitely many armed bandits. In Proceedings of the 32nd International Conference on Machine Learning, ICML'15, pages 1133–1141. JMLR.org, 2015.
[14] Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. CoRR, abs/1707.08819, 2017.

[15] Samuel Daulton, David Eriksson, Maximilian Balandat, and Eytan Bakshy. Multi-objective Bayesian optimization over high-dimensional search spaces. In James Cussens and Kun Zhang, editors, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, pages 507–517. PMLR, 01–05 Aug 2022.

[16] Xuanyi Dong and Yi Yang. NAS-Bench-201: Extending the scope of reproducible neural architecture search. In International Conference on Learning Representations, 2020.

[17] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1–21, 2019.

[18] David Eriksson, Michael Pearce, Jacob Gardner, Ryan D. Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

[19] Matthias Feurer and Frank Hutter. Hyperparameter Optimization, pages 3–33. Springer International Publishing, Cham, 2019.

[20] P. Frazier. A tutorial on Bayesian optimization. ArXiv, abs/1807.02811, 2018.

[21] Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023.

[22] Robert B. Gramacy, Annie Sauer, and Nathan Wycoff. Triangulation candidates for Bayesian optimization. In Advances in Neural Information Processing Systems, volume 35, pages 35933–35945. Curran Associates, Inc., 2022.

[23] Jean-Bastien Grill, Michal Valko, and Rémi Munos. Black-box optimization of noisy functions with unknown smoothness. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.

[24] Nikolaus Hansen.
The CMA evolution strategy: A tutorial, 2016.

[25] José Miguel Hernández-Lobato, James Requeima, Edward O. Pyzer-Knapp, and Alán Aspuru-Guzik. Parallel and distributed Thompson sampling for large-scale accelerated exploration of chemical space. In Proceedings of the 34th International Conference on Machine Learning, ICML'17, pages 1470–1479. JMLR.org, 2017.

[26] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.

[27] Beomjoon Kim, Kyungjae Lee, Sungbin Lim, Leslie Kaelbling, and Tomas Lozano-Perez. Monte Carlo tree search in continuous spaces using Voronoi optimistic optimization with regret bounds. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06):9916–9924, April 2020.

[28] Aaron Klein and Frank Hutter. Tabular benchmarks for joint architecture and hyperparameter optimization, 2019.

[29] Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 681–690, New York, NY, USA, 2008. Association for Computing Machinery.

[30] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282–293. Springer, 2006.

[31] Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, and Aleksandrs Slivkins. Can large language models explore in-context? In Advances in Neural Information Processing Systems, volume 37, pages 120124–120158. Curran Associates, Inc., 2024.

[32] Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta, Pascal Poupart, Alán Aspuru-Guzik, and Geoff Pleiss. A sober look
at LLMs for material discovery: are they actually good for Bayesian optimization over molecules? In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.

[33] Robert Langer and David A. Tirrell. Designing materials for biology and medicine. Nature, 428(6982):487–492, April 2004.

[34] Tennison Liu, Nicolás Astorga, Nabeel Seedat, and Mihaela van der Schaar. Large language models to enhance Bayesian optimization. In The Twelfth International Conference on Learning Representations, 2024.

[35] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.

[36] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample-variance penalization, 2009.

[37] Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, and Balaraman Ravindran. Efficient-UCBV: An almost optimal algorithm using variance estimates. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), April 2018.

[38] Rémi Munos. Optimistic optimization of a deterministic function without the knowledge of its smoothness. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.

[39] OpenAI. GPT-4 technical report, 2023.

[40] Mayk Ramos, Shane Michtavy, Marc Porosoff, and Andrew White. Bayesian optimization of catalysts with in-context learning, April 2023.

[41] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.

[42] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019.
[43] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew M. Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. CoRR, abs/2403.05530, 2024.

[44] Luis Miguel Rios and Nikolaos V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, 2013.

[45] David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, and Cedric Archambeau. Optimizing hyperparameters with conformal quantile regression. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 29876–29893. PMLR, 23–29 Jul 2023.

[46] David Salinas, Matthias Seeger, Aaron Klein, Valerio Perrone, Martin Wistuba, and Cedric Archambeau. Syne Tune: A library for large scale hyperparameter tuning and reproducible research. In Isabelle Guyon, Marius Lindauer, Mihaela
van der Schaar, Frank Hutter, and Roman Garnett, editors, Proceedings of the First International Conference on Automated Machine Learning, volume 188 of Proceedings of Machine Learning Research, pages 16/1–23. PMLR, 25–27 Jul 2022.

[47] Bobak Shahriari, Kevin Swersky, Ziyun Wang, Ryan P. Adams, and Nando de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148–175, 2016.

[48] Aleksandrs Slivkins. Multi-armed bandits on implicit metric spaces. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.

[49] Aleksandrs Slivkins. Introduction to multi-armed bandits. Foundations and Trends in Machine Learning, 12(1–2):1–286, November 2019.

[50] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NeurIPS'12, pages 2951–2959, Red Hook, NY, USA, 2012. Curran Associates Inc.

[51] Xingyou Song, Yingtao Tian, Robert Tjarko Lange, Chansoo Lee, Yujin Tang, and Yutian Chen. Position: leverage foundational models for black-box optimization. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.

[52] Louis Tiao, Aaron Klein, Cédric Archambeau, Edwin V. Bonilla, Matthias Seeger, and Fabio Ramos. Bayesian Optimization by Density-Ratio Estimation. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Virtual (Online), July 2021.

[53] Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, D. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, A.
Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, July 2023.

[54] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 3–26. PMLR, 06–12 Dec 2021.

[55] Michal Valko, Alexandra Carpentier, and Rémi Munos. Stochastic simultaneous optimistic optimization. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, Proceedings of Machine Learning Research, pages 19–27, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR.

[56] Linnan Wang, Rodrigo Fonseca, and Yuandong Tian. Learning search space partition for black-box optimization using Monte Carlo tree search. In Proceedings of the 34th International Conference on Neural Information Processing
Systems, NeurIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc.

[57] Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, NeurIPS'08, pages 1729–1736, Red Hook, NY, USA, 2008. Curran Associates Inc.

[58] Colin White, Mahmoud Safari, Rhea Sanjay Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey, and Frank Hutter. Neural architecture search: Insights from 1000 papers. ArXiv, abs/2301.08727, 2023.

[59] Nathan Wycoff, John W. Smith, Annie S. Booth, and Robert B. Gramacy. Voronoi candidates for Bayesian optimization. ArXiv, abs/2402.04922, 2024.

[60] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. In The Twelfth International Conference on Learning Representations, 2024.

[61] Kevin Yang, Tianjun Zhang, Chris Cummins, Brandon Cui, Benoit Steiner, Linnan Wang, Joseph E. Gonzalez, Dan Klein, and Yuandong Tian. Learning space partitions for path planning. In Advances in Neural Information Processing Systems, volume 34, pages 378–391. Curran Associates, Inc., 2021.

[62] Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and applications. arXiv preprint arXiv:2003.05689, 2020.

[63] Dawei Zhan and Huanlai Xing. Expected improvement for expensive optimization: a review. Journal of Global Optimization, 78(3):507–544, November 2020.

[64] Michael Zhang, Nishkrit Desai, Juhan Bae, Jonathan Lorraine, and Jimmy Ba. Using large language models for hyperparameter optimization. In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023.

A Algorithm Pseudocodes

In this section, we present detailed pseudocode for the HOLLM algorithm in Algorithm 3, which complements Algorithm 1. In the algorithm, we omit the subscript $t$ for easier readability.
Additionally, Algorithm 2 describes the LLM-based global optimization baseline method used throughout our experiments. To ensure fair comparison, we configure the baseline to propose k·M points per iteration (line 2 of Algorithm 2), matching the total number of proposals generated by HOLLM across all subregions, i.e., k proposals in each of the M subregions.

Algorithm 2: GLOBAL-LLM baseline
Data: Initialize D with n0 points, budget T, batch size b, proposals k·M
1. for t = n0, ..., T − 1 do
2.     Propose k·M points with LLM_GENERATE(X, D, k·M)
3.     Evaluate the top b and add to D
4. return best (x, y) ∈ D

B Details on Tasks, Baselines and Experimental Setup

B.1 Baselines

We compare HOLLM to the following baselines:

• Random Search (RS) [8] serves as a simple baseline that uniformly samples configurations from the search space without any learning or adaptation.

• Regularized Evolution (RE) [42] is an evolutionary algorithm that maintains a population of candidate solutions and evolves them through mutation operations. The method regularizes the population by removing the oldest individuals, preventing stagnation and maintaining diversity.

• Conformalized Quantile Regression (CQR) [45] uses gradient boosted trees to predict performance quantiles and provides prediction intervals with statistical guarantees through conformal prediction techniques.

• Tree-structured Parzen Estimator (TPE) [7] is a sequential model-based optimization method that models the distribution of good and bad configurations separately. It builds two probability densities: ℓ(x) for configurations with performance better than a threshold, and g(x) for those below
|
https://arxiv.org/abs/2505.21372v1
|
the threshold. New candidates are selected by maximizing the ratio ℓ(x)/g(x).

• Bayesian Optimization by Density-Ratio Estimation (BORE) [52] reformulates Bayesian optimization as a binary classification problem. It trains a probabilistic classifier to distinguish between high-performing and low-performing configurations, then uses the predicted class probabilities to construct an acquisition function equivalent to expected improvement.

• Gaussian Process BO with Expected Improvement (GP-EI) [50, 6] employs a Gaussian Process as a surrogate model to capture the objective function's behavior and uncertainty. It uses the Expected Improvement acquisition function, implemented via BoTorch, to balance exploration and exploitation when selecting new evaluation points.

• Trust Region Bayesian Optimization (TuRBO) [18] addresses the curse of dimensionality in high-dimensional optimization by maintaining multiple local trust regions. Each region uses an independent Gaussian Process and adapts its size based on optimization progress, allowing the method to scale effectively to high-dimensional problems.

B.2 Synthetic Benchmarks

In Section 4, we initially evaluated HOLLM and the baselines on 6 synthetic deterministic functions with varying dimensionality and nature. In Table 1, we provide details on each of them.

Algorithm 3: HIERARCHICAL OPTIMIZATION WITH LLMS (HOLLM) – DETAILED
Data: Initial data D = {(xi, f(xi))}, i = 1, ..., n0; batch size b; regions to sample from M; proposal count per leaf k; dimension d; initial leaf size m0; adaptive growth rate λ; total evaluations T; exploration weights β1, β2; annealing αmin, αmax
1.  t ← n0                                          // global evaluation counter
2.  while t < T do
3.      (optional) mleaf ← m0 + ⌈λ log(1 + t)⌉      // adaptive leaf size
4.      Build KD-tree on D with leaf size mleaf, obtaining Kt leaves {Xℓ}, ℓ = 1, ..., Kt
5.      α ← αmin + (1/2)(αmax − αmin)(1 + cos(πt/T))  // cosine annealing schedule
6.      fmin ← min_{i=1,...,t} f(xi)
7.      Y+ ← {f(xi) − fmin + ϵ}, i = 1, ..., t      // positive transformed values
8.      for ℓ = 1 to Kt do
9.          nℓ ← |Xℓ|                               // number of points in this leaf
10.         (lℓ, uℓ) ← bounds of cell Xℓ
11.         Vℓ ← (∏_{j=1}^{d} (uℓj − lℓj))^{1/d}    // normalized volume
12.         if nℓ > 0 then
13.             µℓ ← max_{i ∈ Iℓ} Y+[i]             // best value in cell
14.             if nℓ > 1 then
15.                 σ²ℓ ← (1/(nℓ − 1)) ∑_{i ∈ Iℓ} (Y+[i] − Ȳℓ)²  // variance in cell
16.             else
17.                 σ²ℓ ← 0.01                      // default variance for single-point cells
18.             log_term ← max(0, log(t/(Kt · nℓ)))
19.             Eℓ ← sqrt(2σ²ℓ · log_term/nℓ) + log_term/nℓ
20.         else
21.             µℓ ← 0
22.             Eℓ ← 1                              // high exploration for empty cells
23.     µ̄ℓ, V̄ℓ, Ēℓ ← min-max normalize µℓ, Vℓ, Eℓ across all cells
24.     Bℓ ← µ̄ℓ + α · (β1 · V̄ℓ + β2 · Ēℓ)          // composite score
25.     pℓ ← Bℓ / ∑_{r=1}^{Kt} Br                   // normalize score across cells
26.     Sample M cells {Xij}, j = 1, ..., M ∼ Categorical{pℓ}
27.     X̂ ← ∅, F̂ ← ∅
28.     for j = 1 to M do
29.         (x̂j, f̂j) ← LLM_GENERATE(D, (lij, uij), k)
30.         Append x̂j to X̂; append f̂j to F̂
31.     π ← argsort(F̂)                              // indices of sorted values (descending)
32.     Xnew ← top b points from X̂ using indices π
33.     for each x ∈ Xnew do
34.         Evaluate y = f(x)
35.         D ← D ∪ {(x, y)}
36.         t ← t + 1
37. return best point (x*, f(x*)) where x* = argmax_{x ∈ D} f(x)

Table 1: List of synthetic optimization functions and their main characteristics.

Hartmann 3D. Landscape: smooth, strongly multimodal surface generated by four weighted Gaussians inside the unit cube; narrow, steep basins punish local search. Boundary: (x1, x2, x3) ∈ [0, 1]^3. Optimum: fmin ≈ −3.86278.

Hartmann 6D. Landscape: six Gaussians in [0, 1]^6 create an even denser constellation of deceptive wells; still smooth but mildly ill-conditioned, and the search space grows exponentially. Boundary: (x1, x2, ..., x6) ∈ [0, 1]^6. Optimum: fmin ≈ −3.32237.

Rosenbrock 8D. Landscape: classic curved "banana" valley stretched to eight variables; unimodal yet highly ill-conditioned, requiring precise valley-tracking; non-separable. Boundary: (x1, x2, ..., x8) ∈ [−2.048, 2.048]^8. Optimum: fmin = 0.

Rastrigin 10D. Landscape: quadratic core overlaid with cosine ripples forms a perfectly regular grid of 10^10 local minima; separable but brutally multimodal, exposing algorithms prone to premature convergence. Boundary: (x1, x2, ..., x10) ∈ [−5.12, 5.12]^10. Optimum: fmin = 0.

Lévy 10D. Landscape: sine perturbations on a quadratic backbone yield wide plateaus, sudden cliffs, and deep funnels; rugged and non-separable, stressing step-size control. Boundary: (x1, x2, ..., x10) ∈ [−10, 10]^10. Optimum: fmin = 0.

Ackley 20D. Landscape: exponential of radius plus averaged cosines, giving a vast flat outer region, an encircling ridge, and a single sharp basin at the origin; tests exploration versus exploitation in very high dimension. Boundary: (x1, x2, ..., x20) ∈ [−32.768, 32.768]^20. Optimum: fmin = 0.

B.3 Hyperparameter Optimization Benchmarks

For our hyperparameter optimization experiments, we evaluate on four tasks from the FCNet benchmark [28]: PROTEIN, NAVAL, PARKINSONS, and SLICE. The FCNet benchmark provides a tabulated hyperparameter optimization setting where fully connected neural networks are trained on each dataset with different hyperparameter configurations. The search space consists of 9 categorical hyperparameters (network architecture and training parameters), yielding 62,208 possible configurations with pre-computed validation accuracies. To enable KD-tree partitioning on the categorical search space, we apply ordinal encoding to convert categorical variables into numerical split indices. Below we describe the four regression datasets used as the underlying machine learning tasks:

• PROTEIN is a regression dataset containing physicochemical properties of protein tertiary structures. The task involves predicting protein properties from 9 molecular descriptors across 45,730 protein samples.
• PARKINSONS contains biomedical voice measurements from 42 individuals with early-stage Parkinson's disease participating in a six-month telemonitoring trial. The regression target is the progression of Parkinson's symptoms, with 5,875 samples and 19 acoustic features.

• NAVAL consists of simulated sensor data from a naval frigate's propulsion system, including gas turbine, propeller, gearbox, and control systems. The regression task predicts component degradation levels using 11,934 samples with 16 operational features.

• SLICE involves predicting the relative axial location of CT scan slices within the human body. The dataset contains 384 features extracted from 53,500 CT images, describing bone structures, air inclusions, and anatomical positioning.

Table 2: Search space of the FCNet benchmark. The left column lists the hyperparameter names of the neural network that need to be tuned, whilst the right column lists the possible categorical choices for each hyperparameter.

Initial LR: {0.0005, 0.001, 0.005, 0.01, 0.05, 0.1}
Batch Size: {8, 16, 32, 64}
LR Schedule: {cosine, fix}
Activation (Layer 1): {relu, tanh}
Activation (Layer 2): {relu, tanh}
Layer 1 Size: {16, 32, 64, 128, 256, 512}
Layer 2 Size: {16, 32, 64, 128, 256, 512}
Dropout (Layer 1): {0.0, 0.3, 0.6}
Dropout (Layer 2): {0.0, 0.3, 0.6}

B.4 Neural Architecture Search Benchmarks

For neural architecture search (NAS), we utilize the NAS-Bench-201 [16] tabular benchmark, which provides a comprehensive evaluation suite for architecture optimization. The search space consists of selecting optimal CNN operations for each of the 6 edges in a predefined cell-based
computational graph. Each edge can be assigned one of 5 categorical operations: avg_pool_3x3 (average pooling), nor_conv_3x3 (normal 3×3 convolution), skip_connect (identity connection), nor_conv_1x1 (normal 1×1 convolution), and none (no operation). This yields a total search space of 5^6 = 15,625 possible architectures. NAS-Bench-201 provides precomputed validation accuracies for all architectures across three image classification datasets: CIFAR-10, CIFAR-100, and ImageNet16-120 (a 16×16 downsampled version of ImageNet with 120 classes). This tabulated format enables efficient benchmarking by eliminating the computational overhead of training each architecture from scratch.

C Additional Experiments

In this section, we provide additional experiments and ablations, complementing the ones conducted throughout Section 4 of the main paper.

C.1 Ablations

To assess the robustness of our method and understand the influence of key hyperparameters on performance, we conducted a comprehensive ablation study. We employ the 10D Levy test function and examine 3 hyperparameters that directly impact the exploration-exploitation balance and efficacy of our approach: (i) maximum leaf capacity mleaf = m0 + ⌈λ log(1 + t)⌉ (λ = 0), which controls the granularity of space partitioning; (ii) candidate sampling rate k (proposals generated per selected region), which determines the diversity of proposals within each selected region; and (iii) region selection parameter M (partitions selected per iteration), which governs the number of promising subregions explored simultaneously per iteration.
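The adaptive leaf capacity above is a one-line computation; the following small sketch (function name ours) shows how the granularity behaves for λ = 0, as used in this ablation, versus λ > 0:

```python
import math

def adaptive_leaf_size(t, m0, lam):
    """Maximum leaf capacity m_leaf = m0 + ceil(lam * log(1 + t)).
    With lam = 0 the leaf size stays fixed at m0; with lam > 0 the
    leaves grow logarithmically as evaluations accumulate."""
    return m0 + math.ceil(lam * math.log(1 + t))

# lam = 0 keeps the partition granularity constant over the whole run:
assert adaptive_leaf_size(100, 5, 0.0) == 5
# lam = 1 grows slowly: log(101) is about 4.62, so ceil adds 5 points per leaf.
assert adaptive_leaf_size(100, 5, 1.0) == 10
```

The logarithmic growth means the tree coarsens only gradually, so early iterations keep many small cells while late iterations pool more points per cell for stabler statistics.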
The default hyperparameter configuration, also used throughout the experiments in the main paper, is: exploration parameter bounds αmax = 1.0 and αmin = 0.01, initial random sampling phase of n0 = 5 evaluations, batch size b = 4 (points evaluated per iteration), k = 5, M = 5, and maximum leaf capacity m0 = d/2, where d denotes problem dimensionality.

[Figure 8: Illustrative examples of optimizing 1D functions: (a) the 1D Levy problem and (b) the 1D Rosenbrock problem, each shown at iterations 12, 27, and 39. The rectangles represent the input space partitions (top row). They are highlighted in orange whenever selected based on their respective probabilities (bottom row). Both experiments used 5 initial points and a batch size of 4. All new points (red stars) are suggestions from the LLM.]

We run each setting with 5 independent random seeds and report the mean performance ± standard error in Figure 9.

Impact of Leaf Size (m0). The leaf size parameter m0 defines the maximum number of data points within a single leaf of the partitioning tree, directly controlling the granularity of search space decomposition. Our analysis across different values of m0 as a factor of problem dimensionality d reveals a clear trade-off between partition resolution and statistical reliability (Figure 9, left). Coarse partitioning with m0 = d yields suboptimal performance due to overly broad regions that group diverse areas of the search space, diminishing the method's ability to precisely isolate promising subregions. Conversely, extreme fine partitioning with m0 = 1 also degrades performance because singleton regions provide insufficient statistical information and the variance component becomes a small constant across all regions, eliminating valuable uncertainty estimates necessary to guide exploration. We observe the best performance at m0 = d/4, which strikes an effective balance by enabling detailed space partitioning while maintaining sufficient data density within each region to compute meaningful variance estimates for the exploration term.

Impact of Number of Candidates per Region (k). We investigated the effect of varying the number of candidate points k sampled from each selected region, testing values k ∈ {1, 3, 5, 7, 10}. Results (shown in Figure 9, middle) reveal a clear trade-off between under- and over-sampling within regions. Setting k = 1 leads to significant performance degradation as the method fails to adequately exploit promising regions by drawing only a single sample per region.
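The variance-driven exploration term discussed above can be made concrete. The sketch below (pure Python, function name ours) follows lines 11–25 of Algorithm 3: per-cell best value, unbiased variance with the 0.01 default for single-point cells, the UCB-style exploration bonus, min-max normalization, and the final selection probabilities:

```python
import math

def cell_probabilities(cells, t, alpha, beta1=0.5, beta2=0.5):
    """Cell selection probabilities following Algorithm 3 (lines 11-25).
    Each cell is a (values, volume) pair: the positive-transformed Y+
    values inside the cell and its normalized volume."""
    K = len(cells)
    mus, vols, expls = [], [], []
    for values, volume in cells:
        n = len(values)
        if n > 0:
            mu = max(values)                       # best value in cell
            if n > 1:
                mean = sum(values) / n
                var = sum((y - mean) ** 2 for y in values) / (n - 1)
            else:
                var = 0.01                         # default for singleton cells
            log_term = max(0.0, math.log(t / (K * n)))
            expl = math.sqrt(2 * var * log_term / n) + log_term / n
        else:
            mu, expl = 0.0, 1.0                    # empty cells: max exploration
        mus.append(mu); vols.append(volume); expls.append(expl)

    def minmax(v):                                 # min-max normalize across cells
        lo, hi = min(v), max(v)
        return [0.5] * len(v) if hi == lo else [(x - lo) / (hi - lo) for x in v]

    mus, vols, expls = minmax(mus), minmax(vols), minmax(expls)
    scores = [m + alpha * (beta1 * v + beta2 * e)  # composite score B_l
              for m, v, e in zip(mus, vols, expls)]
    total = sum(scores)
    return [s / total for s in scores]
```

For example, a small, well-sampled cell holding the best observed values outscores a large empty cell when alpha is moderate, while a larger alpha shifts probability mass toward big or uncertain cells; this is exactly the balance the ablation probes.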
[Figure 9: Impact of key hyperparameters on optimization performance for the 10D Levy function. Left: Ablation on leaf size (m0) demonstrates that coarse partitioning (m0 = 1d) yields the poorest performance, while finer partitioning enables superior exploitation of promising regions through more granular space decomposition. Middle: Varying the number of candidates per selected region (k) reveals optimal performance at intermediate values, where undersampling (k = 1) significantly degrades performance by limiting exploitation of high-potential regions, and oversampling (k = 10) slows convergence due to inefficient allocation of evaluation budget. Right: The number of partitions selected per trial (M) governs exploration breadth, where single-region focus (M = 1) impedes convergence through insufficient exploration, while moderate values (M ∈ {3, 5, 7}) accelerate optimization by enabling simultaneous exploration of multiple promising regions.]

Conversely, k = 10 results in worse performance during initial iterations compared to intermediate k values, which we attribute to increased risk of oversampling sub-optimal regions in the beginning. While the method can recover from this scenario as oversampled sub-optimal