| text string | source string |
|---|---|
each 2D slice, and reconstructed the segmentations into a 3D volume using the method described in (Imran et al., 2024). The final prostate gland volume (in mL) was computed from the reconstructed 3D model. 2.1.3. Slice-level labeling for model training Model training required slice-level annotations indicating th... | https://arxiv.org/abs/2505.21355v1 |
eight adjacent slices on average. Requiring spatially contiguous predictions helped reduce false positives and improved specificity without compromising sensitivity. 2.2.3. Classification with clinical biomarkers To assess the predictive value of commonly used screening tools, we trained a random forest model usi... | https://arxiv.org/abs/2505.21355v1 |
also achieved higher precision (77.8%), F1-score (84.5%), and accuracy (81.4%), demonstrating a better overall balance between true-positive detection and false-positive control. Table 2: Threshold-based classification metrics (averaged over five folds) using a fixed decision threshold of 0.15. Model Sensitivity Spe... | https://arxiv.org/abs/2505.21355v1 |
threshold for patient-level classification (eight consecutive positive slices) was empirically defined and may require adjustment in future prospective settings. Future work should include multicenter clinical trials, decision-curve analysis, and cost-effectiveness studies to evaluate the real-world impact of in... | https://arxiv.org/abs/2505.21355v1 |
Patel, A., Pensa, J., Liang, M., Benidir, T., Grajo, J.R., Joseph, J.P., Terry, R., et al., 2024. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Computerized Medical Imaging and Graphics 112, 102326. Kinnaird, A., Luger, F., Cash, H., Ghai, S., Urdaneta-Salegui, L.F., Pavlov... | https://arxiv.org/abs/2505.21355v1 |
arXiv:2505.21362v1 [cs.CL] 27 May 2025. Evaluating LLM Adaptation to Sociodemographic Factors: User Profile vs. Dialogue History. Qishuai Zhong¹, Zongmin Li¹, Siqi Fan², Aixin Sun¹. ¹Nanyang Technological University, Singapore; ²University of Electronic Science and Technology of China, Chengdu, China. Abstract: Effective engagement... | https://arxiv.org/abs/2505.21362v1 |
and foster user trust, LLMs should dynamically tailor their responses to reflect user expectations—a capability we refer to as behavioral adaptation. Sociodemographic attributes of user profiles (e.g., age, education, occupation, nationality) are strongly correlated with cultural norms and values related to family... | https://arxiv.org/abs/2505.21362v1 |
related work in both fields. 2.1 Persona Attributes Understanding Evaluations of language models’ understanding of persona attributes typically center on two tasks: next-utterance prediction and persona expansion. Standard benchmarks such as PersonaChat (Zhang et al., 2018), RealPersonaChat (Yamashita et al., 2023), an... | https://arxiv.org/abs/2505.21362v1 |
We use the Value Survey Module (VSM 2013) (Hofstede and Hofstede, 2016), grounded in Hofstede’s Cultural Dimensions Theory (Gerlach and Eriksson, 2021), to quantify cultural values. This questionnaire features multiple-choice items on workplace dynamics and decision-making, each [footnote 1: https://www.pewresearch.org/] Out-of-Con... | https://arxiv.org/abs/2505.21362v1 |
condition is satisfied. Out-of-Context Detector (ooc_detector): We employ GPT-4o-mini-2024-07-18 (OpenAI et al., 2024) to validate the questions generated by the user simulator. It ensures that each question aligns with the user’s profile and maintains consistent first-person framing. If inconsistencies are detected... | https://arxiv.org/abs/2505.21362v1 |
the shared subset using the Pearson correlation coefficient (Freedman et al., 2007) and the two-way mixed-effects intraclass correlation coefficient (ICC(3, k)) (Shrout and Fleiss, 1979). We omit Fleiss’ Kappa due to its sensitivity to category prevalence in our skewed data (Hoehler, 2000). Results (Appendix E) demon... | https://arxiv.org/abs/2505.21362v1 |
quantify the model’s sensitivity to demographic variation. For example, when grouping RU by age, the divergence between the “<30” and “>60” cohorts should noticeably exceed the baseline, while the divergence between “<30” and “30–40” should remain below it—illustrating that greater age gaps drive greater variation i... | https://arxiv.org/abs/2505.21362v1 |
scores across all responses to estimate the model’s overall confidence. As illustrated in Figure 4, all models—except Llama3.1-8B-Instruct, which shows slightly reduced confidence in the dialogue setting—exhibit consistently high confidence across both contexts, supporting the interpretation that their selections... | https://arxiv.org/abs/2505.21362v1 |
less explicit than user profiles, models still adapt their behavior to align with user characteristics. 6.2 Consistency across Context Formats We next evaluate each model’s behavioral consistency across context formats (scenario Consistency). Following Section 5.1, we compute, for each model, both the distance and ... | https://arxiv.org/abs/2505.21362v1 |
indicate that most models adjust effectively to single-format attribute changes, particularly in attributes like age and education level, with the degree of value adjustment positively correlated with the magnitude of attribute change. However, significant discrepancies arise in cross-format scenarios. Smaller mo... | https://arxiv.org/abs/2505.21362v1 |
Dong, Charlie F. Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. 2024. XGrammar: Flexible and efficient structured generation engine for large language models. Preprint, arXiv:2411.15100. Abhimanyu Dubey and Abhinav Jauhri et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783. Es... | https://arxiv.org/abs/2505.21362v1 |
arXiv:2406.14805. Grgur Kovač, Masataka Sawayama, Rémy Portelas, Cédric Colas, Peter Ford Dominey, and Pierre-Yves Oudeyer. 2023. Large language models as superpositions of cultural perspectives. Preprint, arXiv:2307.07870. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gon... | https://arxiv.org/abs/2505.21362v1 |
Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. Preprint, arXiv:2210.09261. Jean M. Twenge. 2017. Have smartphones destroyed a generation? The Atlantic. Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shigua... | https://arxiv.org/abs/2505.21362v1 |
11,001 conversations. Each conversation takes place between two users (User1 and User2), and the persona profiles of both participants are provided alongside the dialogue. Every synthetic user is assigned five persona attributes, with at least one of these attributes explicitly mentioned in their dialogues. Given... | https://arxiv.org/abs/2505.21362v1 |
each score, encouraging a more deliberate evaluation process. Prompts of all 4 dimensions are listed in Figures 8, 9, 10, 11. E Alignment between LLM Judge and Human Ratings We measure the alignment between the scores assigned by the LLM judge and the average human ratings across 50 sampled dialogues. The results, pre... | https://arxiv.org/abs/2505.21362v1 |
for short-term career suggestions, either explicitly mentioning your age and job title or subtly hinting at them. As the discussion evolves, progressively share more personal details to obtain tailored advice and deeper insights. Always respond using the following JSON format: { "proposed_question": …} User You ar... | https://arxiv.org/abs/2505.21362v1 |
"rating": int, "reason": str}}' System User profile: {Age, Education Level, Job Title, Country} User Here are the questions: Question 1: ... Question 2: ... ... User Figure 8: The prompt designed for guiding the LLM judge to score the generated dialogues from the perspective of “Attribute Coverage”. You will be given a a... | https://arxiv.org/abs/2505.21362v1 |
score. * If the language is not natural and the following questions keep repeating the user's self introduction --> minus 1 from the score. * If the language is too polite and sounds like seeking the advice from seniors --> minus 1 from the score. Always respond using the following JSON format: {{ "rating": int, "reaso... | https://arxiv.org/abs/2505.21362v1 |
to questions based on the given user profile. Analyze the question carefully and tailor your response to match the user's context. System Here are the details of the user profile: {Age, Education Level, Job Title, Country} Below is the question: {Question} {Option List} Answer the question in json format: {"option_i... | https://arxiv.org/abs/2505.21362v1 |
“Entry Level” and “Senior Management” users exhibit the most pronounced differences, while the distances between “Entry Level” and “C-Suite” users are smaller than expected. This is likely because the classifier tends to assign only titles like “Chief of Staff” to the “Senior Management” category, while the “C-Suit... | https://arxiv.org/abs/2505.21362v1 |
certifications and emerging tech areas like AI, cybersecurity, cloud computing, etc. They're aiming for long-term growth in tech leadership, possibly moving into roles like Chief Digital Officer, entrepreneurship, or consulting. \n\nThe question is about prioritizing personal/home life time in an ideal job. The options ra... | https://arxiv.org/abs/2505.21362v1 |
now, perhaps he would prioritize more balance. The options are from 1 (utmost) to 5 (no importance). \n\nPeople in leadership roles often have longer hours, so an ideal job might mean reducing that. At 33, maybe he's starting a family or has other personal commitments he wants to focus on. High school education might m... | https://arxiv.org/abs/2505.21362v1 |
arXiv:2505.21363v1 [cs.LG] 27 May 2025. Subgroups Matter for Robust Bias Mitigation. Anissa Alloula¹, Charles Jones², Ben Glocker², Bartłomiej W. Papież¹. Abstract: Despite the constant development of new bias mitigation methods for machine learning, no method consistently succeeds, and a fundamental question remains unanswer... | https://arxiv.org/abs/2505.21363v1 |
work has explicitly addressed. Indeed, most bias mitigation methods rely on some form of grouping to first identify disadvantaged subgroups within the training data and then to implement group-based strategies aimed at improving generalisation or fairness. This can be as simple as observing a disparity in model perfo... | https://arxiv.org/abs/2505.21363v1 |
KL divergence between the subgroup-weighted biased distribution and the unbiased test distribution. • We challenge the conventional assumption that the best way to obtain “fairness” with respect to a specific set of subgroups is always achieved by using those same subgroups for bias mitigation. 2. Related work 2.1. Bi... | https://arxiv.org/abs/2505.21363v1 |
Subgroups Matter for Robust Bias Mitigation others have developed new methods altogether which do not require subgroups to be defined in the traditional way. For instance, Kearns et al. (2018) and Hebert-Johnson et al. (2018) propose algorithms which aim to achieve fairness across all identifiable or richly structured ... | https://arxiv.org/abs/2505.21363v1 |
loss for resampling is equivalent to: $\hat{\theta}_{\text{resampling}} := \arg\min_{\theta\in\Theta} \sum_{g=0}^{k} \frac{1}{k}\, \mathbb{E}_{(x,y)\sim P^{\text{train}}_{g}}[\ell(\theta; (x, y))]$. (3) Domain Independent (DomainInd) learning adjusts the model architecture by replacing the single classifier head with k separate classifier heads, each corresponding to a subgroup, such that although each sample is ... | https://arxiv.org/abs/2505.21363v1 |
the X-rays, were ineffective, leading to the hypothesis that sex-specific differences were not causing the disparity (Weng et al., 2023). Subsequent work revealed that men and women presented different proportions of chest drains and ECG wires, which the model used as spurious correlations to predict disease, and tha... | https://arxiv.org/abs/2505.21363v1 |
(A, Y) and A subgroup annotations for data- and model-based methods respectively. This noise does not affect the class labels Y. For Civil Comments, in addition to the synthetic granular subgroups, we directly explore the impact of granularity on real subgroups, as the dataset contains subgroup information of multiple hi... | https://arxiv.org/abs/2505.21363v1 |
here. We report the mean and standard deviation of the aggregate area under the receiver operating characteristic curve (AUC) on the unbiased test set, alongside worst-group accuracy and accuracy gap across subgroups. We select these measures for their directness and simplicity compared to other fairness criteria. We do ... | https://arxiv.org/abs/2505.21363v1 |
SC/no-SC subgroups, enable the model to focus on the (A, Y) pairs without the spurious correlation during training. Therefore the model learns to predict Y independently of A, leading to better generalisation performance. Conversely, subgroups which do not take (A, Y) information into account tend to result in worse perfo... | https://arxiv.org/abs/2505.21363v1 |
the baseline model, indicating that they are relatively robust to annotation noise affecting a minority of subgroup annotations. This aligns with findings from Awasthi et al. (2020) and Stromberg et al. (2024), who explore the impact of noise in post-processing and last-layer retraining respectively. A similar tren... | https://arxiv.org/abs/2505.21363v1 |
differences in upper bound are largely driven by the divergence between both distributions¹. We explore whether the divergences achieved for each subgrouping correlate with generalisation error. We assume that the difference between both distributions is attributable to differences in probabilities of sampling each (... | https://arxiv.org/abs/2505.21363v1 |
for gDRO and resampling (also similar divergences). Moreover, it is interesting to note that incorporating S into the (A, Y) groups (to get (Y, S, A)) is the optimal grouping, as we observe empirically for MNIST (Figure 2). Although S is not involved in the (A, Y) spurious correlation and not a cause for poor gene... | https://arxiv.org/abs/2505.21363v1 |
(JTT), a method which does not require subgroup labels at training, but still requires some for model selection, and find that, again, its success is dependent on subgroup choice (Appendix C: Table C8 and Figure C9). We also repeat the MNIST experiments in a setting where the spurious correlation is weaker such that ... | https://arxiv.org/abs/2505.21363v1 |
to substantially improving the work. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. References Ahn, S., Kim, S., and Yun, S.-Y. Mitigating dataset bia... | https://arxiv.org/abs/2505.21363v1 |
C. How does distribution matching help domain generalization: An information-theoretic analysis, 2024. URL https://arxiv.org/abs/2406.09745. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in... | https://arxiv.org/abs/2505.21363v1 |
via data selection. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., and Zhang, C. (eds.), Advances in Neural Information Processing Systems, volume 37, pp. 94490–94511. Curran Associates, Inc., 2024. Jones, C., Castro, D. C., De Sousa Ribeiro, F., Oktay, O., McCradden, M., and Glocker, ... | https://arxiv.org/abs/2505.21363v1 |
ViG-Bias: Visually Grounded Bias Discovery and Mitigation, pp. 414–429. Springer Nature Switzerland, November 2024. ISBN 9783031732027. doi: 10.1007/978-3-031-73202-7_24. URL http://dx.doi.org/10.1007/978-3-031-73202-7_24. Masiha, M. S., Gohari, A., Yassaee, M. H., and Aref, M. R. Learning under distribution misma... | https://arxiv.org/abs/2505.21363v1 |
Fairness with Noisy Protected Groups. In Advances in Neural Information Processing Systems, volume 33, pp. 5190–5203. Curran Associates, Inc., 2020a. Wang, X., Saxon, M., Li, J., Zhang, H., Zhang, K., and Wang, W. Y. Causal balancing for domain generalization. In ICLR, 2023. URL https://arxiv.org/abs/2206.05263. W... | https://arxiv.org/abs/2505.21363v1 |
…a pacemaker, Perceived gender, Gender; S: Foreground colour, Sex, Smiling, Religion; Dataset size: 60000, 3225, 12500, 8900. We downsample some of the datasets from their original size because we are constrained by the availability of each (Y, S, A) combination. For example, for CheXpert, pacemaker annotations are only available f... | https://arxiv.org/abs/2505.21363v1 |
on their strong performance in previous similar work (Irvin et al., 2019; Jain et al., 2024; Izmailov et al., 2022; Kirichenko et al., 2023; Idrissi et al., 2022). • Backbones for vision models: ResNet18, ResNet50, DenseNet121 (not for MNIST images) • Batch size: 32, 64, 128, 256, 512 • Learning rate: [1e-5:1e-3] • Weig... | https://arxiv.org/abs/2505.21363v1 |
overall validation accuracy), the method does not improve over ERM (except on MNIST, where JTT works remarkably well, most likely due to the simplicity of the task). Table 8. Just Train Twice generalisation performance on the unbiased test set across the four datasets is highly variable depending on the validatio... | https://arxiv.org/abs/2505.21363v1 |
$= w_i \cdot \frac{P_{\text{train}}[j]}{\sum_{l\in G_i} P_{\text{train}}[l]}$ for $j\in G_i$. Let the atomic subgroup indices correspond to (Y, S, A) combinations in order [0,1,2,3,4,5,6,7]. For the subgroups we constructed, we therefore have: • Y: {{0,1,2,3},{4,5,6,7}} • A: {{0,2,4,6},{1,3,5,7}} • S: {{0,1,4,5},{2,3,6,7}} • (A, Y): {{0,2},{1,3},{4,6},{5,7}} • (S, Y): {{0,1},{2,3},{4,... | https://arxiv.org/abs/2505.21363v1 |
CXP. Each dot represents mean performance on the unbiased test set for a specific grouping, with error bars indicating the standard deviation across 3 random seeds. G. Various ablations G.1. MNIST results with a weaker spurious correlation To verify that our results still ... | https://arxiv.org/abs/2505.21363v1 |
arXiv:2505.21364v1 [cs.LG] 27 May 2025. Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders. James Oldfield^{m,q,*}, Shawn Im^{m}, Yixuan Li^{m}, Mihalis A. Nicolaou^{c}, Ioannis Patras^{q}, Grigorios G. Chrysos^{m}. ^{m}University of Wisconsin–Madison, ^{q}Queen Mary University of London, ^{c}The Cyprus Institut... | https://arxiv.org/abs/2505.21364v1 |
scores on LLM-based auto-interpretability metrics [18, 19], sparsity is often used as a proxy for interpretability [20, 21]. To this end, many recent works, such as sparse autoencoders [22, 23, 6], take inspiration from traditional sparse dictionary learning methodologies [24, 25], re-writing pre-trained LLMs’ activation... | https://arxiv.org/abs/2505.21364v1 |
recover prior adapter-based MoEs [37, 38] as a special case. Crucially, we prove that the proposed tensor factorization in MxDs leads to each ‘expert’ sublayer implementing a linear transformation with full-rank weights, allowing faithful reconstruction even under heavy sparsity. Empirically, we demonstrate that MxDs ... | https://arxiv.org/abs/2505.21364v1 |
tensor of parameters collating all N experts’ decoder weights $W_{(n,:,:)} = W_n \in \mathbb{R}^{H\times O}$. In MxDs, we use a large N to scale the feature specialization, and set $H := H^*$ to match the original MLP’s smaller hidden dimension. With the gate routing each token to just its top-K experts, each $W_n \in \mathbb{R}^{H\times O}$ receives a gradient signal from only a ... | https://arxiv.org/abs/2505.21364v1 |
not hold, MxDs are consequently a more suitable class of conditional layer. 2.4 Factorized forward pass Figure 2: Mixture of Decoders extends the base MLP/GLU layers with a conditional ‘expert’ branch, modulating the MLP’s outputs. MxDs compute a linear combination of N linear transformations of the dense vector. With ... | https://arxiv.org/abs/2505.21364v1 |
80k experts/features. We train all sparse layers on a total of 480M tokens of OpenWebText [42], with learning rate 1e−4 and a context length of 128, initializing the output bias as the empirical mean of the training tokens, and D in MxDs as the zero-matrix (following [26]). We vary N in MxD layers to parameter-match Tran... | https://arxiv.org/abs/2505.21364v1 |
only do the proposed MxD layers outperform Transcoders [27] notably, but model performance is similarly preserved at all sparsity levels in MxD layers. With prior work finding sparse solutions to be more interpretable [17, 19], the performance gap of MxDs at small K is a significant advantage. Please also see Figure 1... | https://arxiv.org/abs/2505.21364v1 |
features we ought to expect the model to learn (or even whether they exist in the OpenWebText training data). Nonetheless, we can reasonably expect a useful unsupervised model to learn at least a handful of commonly occurring concepts and linguistic themes. We accordingly focus our evaluation on the relative abilities ... | https://arxiv.org/abs/2505.21364v1 |
shown in Figure 5, MxDs are competitive with the baselines, exhibiting a similar trade-off between textual coherence and presence of concept as we expect. 4 Related work Sparse decompositions Learning sparse [50, 25], non-negative [51] features of a data signal has found many applications in computer vision [15, 52, 53... | https://arxiv.org/abs/2505.21364v1 |
achieve this at scale, proving that MxDs’ linear experts preserve the matrix rank properties of the original decoders. Experimentally, we showed MxDs significantly outperform on the sparsity-accuracy frontier when trained to replace dense MLP layers. Quantitative results on sparse probing and feature steering demonstra... | https://arxiv.org/abs/2505.21364v1 |
pages 160–187. PMLR, 01–03 Apr 2024. [6] Adly Templeton. Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Anthropic, 2024. [7] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. Refusal in language models is mediated by a single direction. ... | https://arxiv.org/abs/2505.21364v1 |
with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997. [25] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006. doi: 10.1109/TSP.2006.... | https://arxiv.org/abs/2505.21364v1 |
Pythia: A suite for analyzing large language models across training and scaling. In Int. Conf. Mach. Learn. (ICML), pages 2397–2430. PMLR, 2023. [42] Aaron Gokaslan, Vanya Cohen, Ellie Pavlick, and Stefanie Tellex. OpenWebText corpus. http://Skylion007.github.io/OpenWebTextCorpus, 2019. [43] Damai Dai, Chengqi Deng, ... | https://arxiv.org/abs/2505.21364v1 |
representation learning from sparse transformation analysis, 2024. [55] Wei Xu, Xin Liu, and Yihong Gong. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’03, pages 267–273, Ne... | https://arxiv.org/abs/2505.21364v1 |
Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In Int. Conf. Learn. Represent. (ICLR), 2021. [74] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. GLaM: Effici... | https://arxiv.org/abs/2505.21364v1 |
Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025. [77] Joan Puigcerver, Carlos Riquelme, Basil Musta... | https://arxiv.org/abs/2505.21364v1 |
a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6:164–189, 1927. [94] J. Douglas Carroll and Jih-Jie Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart–Young” decomposition. Psychometrika, 35:283–319, 1970. [95] CodeParrot. Gi... | https://arxiv.org/abs/2505.21364v1 |
… 24; B.6 Ablations, 26; C Feature balance and shared experts, 26; C.1 Expert/feature balance, 26; C.2 Shared experts, ... | https://arxiv.org/abs/2505.21364v1 |
yields $\hat{y} = \sum_{n=1}^{N}\sum_{h=1}^{H} a_n\, w_{nh:}\, z_h$ (11) $= \sum_{n=1}^{N}\sum_{h=1}^{H} a_n (c_n \ast d_h) z_h$ (12) $= \left(\sum_{n=1}^{N} a_n c_n\right) \ast \left(\sum_{h=1}^{H} z_h d_h\right)$ (13) $= (C^\top a) \ast (D^\top z)$, (14) which is exactly the RHS of Equation (5), showing the MxD forward pass is equivalent to the Hadamard product of $C^\top a$ and $D^\top z$. A.3 Intuition for weight parameterization through the lens of tensor ... | https://arxiv.org/abs/2505.21364v1 |
learnable weights $E_{\text{GLU}}, E \in \mathbb{R}^{I\times H}$, and activation function ψ(·). To transform Equation (17) into the same model form as MxDs, we first pre-multiply the LHS by the identity matrix to match the MxD model form of Equation (5), yielding: $y_{\text{GLU}} = (I^\top a) \ast (E^\top x)$, (18) (footnote 3: which is simply a reshaping of a higher-order tensor into a ma... | https://arxiv.org/abs/2505.21364v1 |
making MxDs a much more suitable and efficient class of layer for our goal of scalable specialization. We therefore see that the proposed lens of tensor methods for unification provides valuable insights about how to design more interpretable layers with the minimum trade-off to capabilities. B Additional quantitative ... | https://arxiv.org/abs/2505.21364v1 |
the end of training for all models and LLMs. At the smallest values of K (which we care about most for interpretability), MxDs’ normalized MSE is up to an order of magnitude smaller than Transcoders’. B.3 Results on additional layers We also fully train all models and baselines (with 4 different values of K) on differe... | https://arxiv.org/abs/2505.21364v1 |
[Plot residue: normalized MSE vs. sparsity level K ∈ {16, 32, 64, 128, 256}; legend entries include TC, STC, and MxDs at matched parameter counts (38M and 67–68M), with one panel for Pythia-410M (Layer 15).] | https://arxiv.org/abs/2505.21364v1 |
the maximum possible rank given the dimensions: $\frac{1}{N}\sum_{n=1}^{N}\frac{\operatorname{rank}(W_n)}{\min\{H, O\}}$. (23) We show in Table 3 the normalized rank across all 4 base models: MxD’s learned experts exhibit no rank deficiencies, providing further evidence of the large potential capacity of MxD layers despite their sparsity constraints on the expert... | https://arxiv.org/abs/2505.21364v1 |
the first 128 tokens for all datasets but for the GitHub dataset, where we take the last 128 tokens to avoid license headers [19, 49]. For token-level probing, we instead take only the last 128 tokens, where the final token contains the surname of the individual in question in the datasets of [49]. Binary probes are traine... | https://arxiv.org/abs/2505.21364v1 |
baselines. Interestingly, we observe a small peak of experts that fire more frequently in MxDs (e.g., around −2 on the x-axis), perhaps specializing in common patterns and primitives in natural language. C.2 Shared experts We find that, by default, our MxD models naturally learn to use a shared expert, with the remainin... | https://arxiv.org/abs/2505.21364v1 |
plot the values of the expert pre-activation for positive/other classes (in the 1-vs-all setting). [Plot residue: density histograms of pre-activation values for the classes ‘attorney’, ‘dentist’, ‘journalist’, ‘photographer’, ...] | https://arxiv.org/abs/2505.21364v1 |
Transcoders better recover the cross-entropy loss with the TopK activation. [Plot residue: normalized MSE for MxD encoder activations ψ = ReLU (MLP) vs. ψ = GELU (MLP) on Pythia-410M and GPT2-124M; third panel truncated.] | https://arxiv.org/abs/2505.21364v1 |
[Figure residue: per-token tables of the 1st, 2nd, and 3rd highest expert indices for Tokens 1–4.] | https://arxiv.org/abs/2505.21364v1 |
both high- and low-level specializations emerge in both GPT and Pythia models. Whilst we observe specializations to a range of concepts (such as punctuation, MMO games, words in specific contexts), we do not notice any systematic differences between the types of expert specializations that emerge between the two models... | https://arxiv.org/abs/2505.21364v1 |
arXiv:2505.21372v1 [cs.LG] 27 May 2025. Improving LLM-based Global Optimization with Search Space Partitioning. Andrej Schwanke*¹, Lyubomir Ivanov*¹, David Salinas¹,², Fabio Ferreira¹, Aaron Klein⁴, Frank Hutter³,²,¹, Arber Zela*¹. ¹University of Freiburg, ²ELLIS Institute Tübingen, ³Prior Labs, ⁴ScaDS.AI, University of Leipzig. Abstract... | https://arxiv.org/abs/2505.21372v1 |
[Figure 1 legend residue: samples; DeepSeek R1, Mistral Large, Grok 3 Beta, LLaMA 4.0 Maverick, Claude 3.7, Gemini 1.5, Gemini 1.5 + partitioning.] (c) Figure 1: (a) 80 samples in [0,1]²: Gemini-1.5 simulating uniform sampling (green), and with region-wise partitioning (red) using the prompt in Listing 1. (b) Gemini-1.5 prompted (see Listing 2) to gene... | https://arxiv.org/abs/2505.21372v1 |
$f: \mathcal{X} \to \mathbb{R}$ where $\mathcal{X}$ is a compact domain. The objective is to find $x^* = \arg\max_{x\in\mathcal{X}} f(x)$ through a sequence of function evaluations. In this blackbox setting, we do not have access to gradients or other properties of $f$, and can only observe function values at queried points. The performance of optimization algorithms in this conte... | https://arxiv.org/abs/2505.21372v1 |
limitations in decision-making. Our algorithm builds upon these foundations in order to improve LLM-based blackbox optimization by integrating tree-based space partitioning, a UCB-inspired score function for balancing exploration and exploitation, and LLM-based candidate generation within locally promising regions. 3 H... | https://arxiv.org/abs/2505.21372v1 |
with the largest variance among points in the node) and a split value δ (the mean across the selected dimension). This produces two child nodes $\mathcal{X}_{\text{left}} = \{x\in\mathcal{X}: x_s\le\delta\}$, $\mathcal{X}_{\text{right}} = \{x\in\mathcal{X}: x_s>\delta\}$, whose union equals their parent and whose interiors are disjoint. After inserting $n$ sample points, the $K$ leaves $\{\mathcal{X}_l\}_{l=1}^{K}$ form a partition of... | https://arxiv.org/abs/2505.21372v1 |
statistic be the largest improvement ever observed in a region $\mathcal{X}_{\ell,t}$: $f_{\min}(t) = \min_{i\le t} f(x_i)$, $Y_i = f(x_i) - f_{\min}(t) + \varepsilon$, $\mu_{\ell,t} = \max_{i\in I_{\ell,t}} Y_i$. (1) We subtract the current empirical minimum $f_{\min}(t)$ (since we are maximizing $f$) so the values become strictly non-negative and comparable across rounds¹. Choosing a max rather than an... | https://arxiv.org/abs/2505.21372v1 |
each rebuild, we normalize the scores to [0,1], preserving the intended relative weights even when the set of leaves changes drastically. The total score of each partition determined by the KD-tree partitioning is: $B_{\ell,t} = \bar{\mu}_{\ell,t} + \alpha_t\left(\beta_1 \bar{V}_{\ell,t} + \beta_2 E_{\ell,t}\right)$, (3) ¹The additive constant $\varepsilon$ prevents zero scores during the startup pha... | https://arxiv.org/abs/2505.21372v1 |
normalization in Equation 3. As $t$ grows, those exploratory components shrink and $B_{\ell,t}$ become increasingly peaked around the empirical best leaves, pushing $p_{\ell,t}$ toward a near-greedy regime. Moreover, a smooth annealing of $\alpha_t$ in Equation 3 avoids an abrupt "switch-to-greedy" policy, which may ignore late-appearing, high-val... | https://arxiv.org/abs/2505.21372v1 |
exact same prompt structure as HOLLM (we provide the prompt templates in Appendix D), with the only difference being the region boundaries. Setup. Starting from n₀ = 5 initial random evaluations, we run each method 3 times for a total of T = 100 iterations with different random seeds and report their mean and standard err... | https://arxiv.org/abs/2505.21372v1 |
improve beyond random search. Visualizing the Optimization Process. In Figure 4, we show a visualization of HOLLM's mechanics on a 1D multimodal function. The rectangles represent the KD-tree space partitions; they are highlighted in orange whenever selected. We can see that during the first iterations the... | https://arxiv.org/abs/2505.21372v1 |
We use a continuous representation [0,1]⁶ of the input space and discretize it to evaluate the true function. As seen in Figure 6, HOLLM always outperforms the LLM baseline that samples globally and is on par with BORE and CQR. The global LLM seems to get stuck in local minima, therefore leading to stagnating performanc... | https://arxiv.org/abs/2505.21372v1 |
author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Aaron Klein acknowledges the financial support by the Federal Ministry of Education and Research of Germany and by Sächsisch... | https://arxiv.org/abs/2505.21372v1 |
[14] Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. CoRR, abs/1707.08819, 2017. [15] Samuel Daulton, David Eriksson, Maximilian Balandat, and Eytan Bakshy. Multi-objective Bayesian optimization over high-dimensional search spaces. In Jam... | https://arxiv.org/abs/2505.21372v1 |
at llms for material discovery: are they actually good for bayesian optimization over molecules? In Proceedings of the 41st International Conference on Machine Learning , ICML’24. JMLR, 2024. [33] Robert Langer and David A. Tirrell. Designing materials for biology and medicine. Nature , 428(6982):487–492, April 2004. [... | https://arxiv.org/abs/2505.21372v1 |
van der Schaar, Frank Hutter, and Roman Garnett, editors, Proceedings of the First International Conference on Automated Machine Learning , volume 188 of Proceedings of Machine Learning Research , pages 16/1–23. PMLR, 25–27 Jul 2022. [47] Bobak Shahriari, Kevin Swersky, Ziyun Wang, Ryan P. Adams, and Nando de Freitas. ... | https://arxiv.org/abs/2505.21372v1 |
Systems, NeurIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc. [57] Yizao Wang, Jean-Yves Audibert, and Rémi Munos. Algorithms for infinitely many-armed bandits. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, NeurIPS'08, pages 1729–1736, Red Hook, NY, USA, 2008. ... | https://arxiv.org/abs/2505.21372v1 |
the threshold. New candidates are selected by maximizing the ratio ℓ(x)/g(x). • Bayesian Optimization by Density-Ratio Estimation (BORE) [52] reformulates Bayesian optimization as a binary classification problem. It trains a probabilistic classifier to distinguish between high-performing and low-performing configuration... | https://arxiv.org/abs/2505.21372v1 |
6) $\in [0,1]^6$, $f_{\min} \approx -3.32237$. Rosenbrock 8D: Classic curved “banana” valley stretched to eight variables; unimodal yet highly ill-conditioned, requiring precise valley-tracking; non-separable. $(x_1, x_2, \ldots, x_8) \in [-2.048, 2.048]^8$, $f_{\min} = 0$. Rastrigin 10D: Quadratic core overlaid with cosine ripples forms a perfectly regular grid o... | https://arxiv.org/abs/2505.21372v1 |
computational graph. Each edge can be assigned one of 5 categorical operations: avg_pool_3x3 (average pooling), nor_conv_3x3 (normal 3×3 convolution), skip_connect (identity connection), nor_conv_1x1 (normal 1×1 convolution), and none (no operation). This yields a total search space of $5^6 = 15{,}625$ possible architecture... | https://arxiv.org/abs/2505.21372v1 |
from the LLM. problem dimensionality. We run each setting with 5 independent random seeds and report the mean performance ± standard error in Figure 9. Impact of Leaf Size (m₀). The leaf size parameter m₀ defines the maximum number of data points within a single leaf of the partitioning tree, directly controlling the gra... | https://arxiv.org/abs/2505.21372v1 |
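
The prostate-MRI excerpts above (arXiv:2505.21355) describe a patient-level decision rule built from slice-level predictions: a fixed probability threshold of 0.15 combined with a requirement of eight consecutive positive slices. Below is a minimal sketch of such a rule; the function and variable names are illustrative, not the authors' released code.

```python
# Sketch of a patient-level rule requiring a run of consecutive positive
# slice predictions; the threshold values follow the excerpt (0.15, 8 slices),
# everything else is an assumption.
from typing import Sequence

def patient_positive(slice_probs: Sequence[float],
                     prob_threshold: float = 0.15,
                     min_run: int = 8) -> bool:
    """True if at least `min_run` consecutive slices score >= `prob_threshold`."""
    run = 0
    for p in slice_probs:
        run = run + 1 if p >= prob_threshold else 0
        if run >= min_run:
            return True
    return False

print(patient_positive([0.1, 0.2] * 10))         # alternating slices -> False
print(patient_positive([0.2] * 8 + [0.05] * 4))  # 8-slice positive run -> True
```

Requiring a contiguous run, rather than counting positives anywhere in the stack, is what the excerpt credits with reducing false positives without hurting sensitivity.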
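The same paper trains a random forest on commonly used screening tools as a clinical baseline. The excerpt truncates before naming the inputs, so the features below (PSA, gland volume, and their ratio, PSA density) are assumptions chosen for illustration, and the data is synthetic.

```python
# Hypothetical biomarker baseline: a random forest on synthetic screening
# features. The feature choice (PSA, volume, PSA density) is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
psa = rng.gamma(2.0, 3.0, size=n)                       # ng/mL, synthetic
volume = rng.normal(45.0, 15.0, size=n).clip(min=15.0)  # mL, synthetic
X = np.column_stack([psa, volume, psa / volume])        # PSA density included
y = (psa / volume + rng.normal(0, 0.05, size=n) > 0.15).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```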
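The LLM-adaptation excerpts (arXiv:2505.21362) report judge-human agreement via the Pearson correlation and ICC(3, k). Pearson is available in SciPy; ICC(3, k) can be computed directly from the two-way ANOVA mean squares of Shrout and Fleiss (1979), as in this sketch (toy ratings, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

def icc_3k(ratings: np.ndarray) -> float:
    """ICC(3, k): two-way mixed effects, consistency, average of k raters.
    `ratings` is (n_subjects, k_raters); formula: (MS_rows - MS_err) / MS_rows."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

scores = np.array([[4, 5], [2, 2], [5, 4], [1, 2], [3, 3]], dtype=float)
r, _ = pearsonr(scores[:, 0], scores[:, 1])
print(f"Pearson r = {r:.2f}, ICC(3,k) = {icc_3k(scores):.2f}")
```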
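The bias-mitigation excerpts (arXiv:2505.21363) redistribute each constructed group's weight across its atomic (Y, S, A) cells in proportion to training prevalence, then compare the resulting subgroup-weighted distribution to the unbiased test distribution with a KL divergence. A sketch under assumed toy prevalences follows; the grouping shown is the (A, Y) partition listed in the excerpt.

```python
# Sketch of the reweighting p_j = w_i * P_train[j] / sum_{l in G_i} P_train[l]
# for j in G_i, plus the KL divergence to a uniform unbiased distribution.
# The prevalence vector is a toy assumption.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

p_train = np.array([.20, .05, .20, .05, .05, .20, .05, .20])  # 8 atomic (Y,S,A) cells
groups = [[0, 2], [1, 3], [4, 6], [5, 7]]                     # (A, Y) grouping
w = np.full(len(groups), 1.0 / len(groups))                   # uniform group weights

p_resampled = np.zeros_like(p_train)
for w_i, g in zip(w, groups):
    p_resampled[g] = w_i * p_train[g] / p_train[g].sum()

p_test = np.full(8, 1.0 / 8.0)                                # unbiased test dist
print(p_resampled.sum(), entropy(p_resampled, p_test))        # 1.0, KL divergence
```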
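The Mixture-of-Decoders excerpts (arXiv:2505.21364) hinge on the algebraic identity of Equations (11)-(14): the doubly-summed expert forward pass collapses to a Hadamard product of two matrix-vector products. A quick numerical check with arbitrary small shapes:

```python
# Verify sum_n sum_h a_n * (c_n ⊙ d_h) * z_h == (Cᵀa) ⊙ (Dᵀz),
# where ⊙ is the elementwise (Hadamard) product. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, H, O = 4, 3, 5
C = rng.standard_normal((N, O))  # rows c_n (per-expert vectors)
D = rng.standard_normal((H, O))  # rows d_h (per-hidden-unit vectors)
a = rng.standard_normal(N)       # expert gate activations
z = rng.standard_normal(H)       # hidden activations

lhs = sum(a[n] * (C[n] * D[h]) * z[h] for n in range(N) for h in range(H))
rhs = (C.T @ a) * (D.T @ z)
print(np.allclose(lhs, rhs))     # True
```

Per output component o, the double sum factorizes as (Σₙ aₙCₙₒ)(Σₕ zₕDₕₒ), which is why the factorized pass never needs to materialize all N decoder matrices.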
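The same paper's Equation (23) defines a normalized expert rank, averaging each expert's matrix rank against the maximum possible given its dimensions; a value of 1.0 means no expert is rank-deficient. A direct sketch:

```python
# Normalized rank (1/N) * sum_n rank(W_n) / min{H, O} from Eq. (23).
import numpy as np

def normalized_rank(W: np.ndarray) -> float:
    """W: (N, H, O) stacked expert decoder weights."""
    N, H, O = W.shape
    ranks = [np.linalg.matrix_rank(W[n]) for n in range(N)]
    return sum(ranks) / (N * min(H, O))

W = np.random.default_rng(2).standard_normal((4, 8, 16))
print(normalized_rank(W))  # ~1.0: generic random matrices are full rank
```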
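The HOLLM excerpts (arXiv:2505.21372) describe the KD-tree bookkeeping: split a leaf on the highest-variance dimension at its mean, then score each leaf with $B_{\ell,t} = \bar{\mu}_{\ell,t} + \alpha_t(\beta_1 \bar{V}_{\ell,t} + \beta_2 E_{\ell,t})$, where $\mu_{\ell,t}$ is the best ε-shifted improvement in the leaf (Eq. 1). The sketch below implements the split rule and the score with placeholder exploration terms V and E, and skips the [0,1] normalization the excerpt mentions; all names are illustrative.

```python
# Sketch of HOLLM-style leaf splitting and scoring (illustrative, simplified).
import numpy as np

def split_leaf(X: np.ndarray):
    """Split rule from the excerpt: dimension s with the largest variance,
    split value delta = mean along s. X has shape (n_points, n_dims)."""
    s = int(np.argmax(X.var(axis=0)))
    delta = float(X[:, s].mean())
    left = X[:, s] <= delta          # X_left = {x : x_s <= delta}
    return s, delta, left

def leaf_score(f_leaf, f_min, V, E, alpha_t, beta1=1.0, beta2=1.0, eps=1e-8):
    """Eq. (1) statistic mu = max_i (f(x_i) - f_min + eps) over the leaf,
    combined with exploration terms as in Eq. (3), sans normalization."""
    mu = max(f - f_min + eps for f in f_leaf)
    return mu + alpha_t * (beta1 * V + beta2 * E)

rng = np.random.default_rng(1)
X = rng.uniform(size=(8, 2))
s, delta, left = split_leaf(X)
print(f"split dim {s} at {delta:.3f}: {left.sum()} left, {(~left).sum()} right")
print(leaf_score([0.4, 0.9, 0.7], f_min=0.1, V=0.2, E=0.5, alpha_t=0.3))
```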