language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213.

Dongyuan Li, Ying Zhang, Zhen Wang, Shiyin Tan, Satoshi Kosugi, and Manabu Okumura. 2024. Active learning for abstractive text summarization via LLM-determined curriculum and certainty gain maximization. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8959–8971.

Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Cheng Jiayang, Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2024. Can language models learn to skip steps? arXiv preprint arXiv:2411.01855.

Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, and Riyadh Baghdadi. 2024. Curriculum learning for small code language models. arXiv preprint arXiv:2407.10194.

OpenAI: Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, and 244 others. 2024. OpenAI o1 system card. Preprint, arXiv:2412.16720.

Rui Pan, Yinwei Dai, Zhihao Zhang, Gabriele Oliaro, Zhihao Jia, and Ravi Netravali. 2025. SpecReason: Fast and accurate inference-time compute via speculative reasoning. arXiv preprint arXiv:2504.07891.

Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. 2023. OpenWebMath: An open dataset of high-quality mathematical web text. arXiv preprint arXiv:2310.06786.

Du Phan, Matthew Douglas Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, and Rif A. Saurous. 2023. Training chain-of-thought via latent-variable inference. Advances in Neural Information Processing Systems, 36:72819–72841.

Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, and Mao Yang. 2024. Mutual reasoning makes smaller LLMs stronger problem-solvers. arXiv preprint arXiv:2408.06195.

Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, and 25 others. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741.

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361.

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and 1 others. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. CommonsenseQA: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.
Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, and 1 others. 2025. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.

Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095–6104.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822.

Yuanhao Yue, Chengyu Wang, Jun Huang, and Peng Wang. 2024. Distilling instruction-following abilities of large language models with task-aware curriculum planning. arXiv preprint arXiv:2405.13448.

Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, and Noah D. Goodman. 2024. Quiet-STaR: Language models can teach themselves to think before speaking. arXiv preprint arXiv:2403.09629.

Jinghan Zhang, Xiting Wang, Fengran Mo, Yeyang Zhou, Wanfu Gao, and Kunpeng Liu. 2025. Entropy-based exploration conduction for multi-step reasoning. arXiv preprint arXiv:2503.15848.

Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, and Min Lin. 2024. Chain of preference optimization: Improving chain-of-thought reasoning in LLMs. Advances in Neural Information Processing Systems, 37:333–356.

Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. 2024. ProcessBench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559.

A Implementation details

Our experimental settings are consistent with Quiet-STaR (Zelikman et al., 2024). Specifically, we employ the AdamW optimizer with 20 warm-up steps, a weight decay of 0.001, and a batch size of 8. For Quiet-STaR, we train for 100 steps; for Fast Quiet-STaR, we initialize each stage from the last checkpoint of the previous stage and train for 50 steps per stage. The learning rates are adjusted slightly depending on the model: 1e-6 for Mistral (Jiang et al., 2023) and 8e-6 for Qwen2.5 (Qwen et al., 2025). During training, we perform sampling with a temperature of T=1. For evaluation, we adopt greedy decoding to ensure deterministic outputs. All training experiments are conducted on eight H800 GPUs. For measuring the Time to First Token (TTFT), we utilize a single H800 GPU and fix the context length to 256. TTFT is defined as the elapsed time between the moment the model receives the full input sequence and the generation of the first token.
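To make the TTFT protocol concrete, the following is a minimal sketch of how such a measurement could be implemented with Hugging Face transformers. It is an illustration under stated assumptions, not the authors' harness: the model identifier is a placeholder, and "first token" is approximated by generating exactly one new token with greedy decoding.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-v0.1"  # placeholder; the paper evaluates Mistral and Qwen2.5

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).to("cuda")
model.eval()

def measure_ttft(prompt: str, context_len: int = 256) -> float:
    """Time from receiving the full input sequence to emitting the first new token."""
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=context_len).to("cuda")
    torch.cuda.synchronize()  # exclude queued asynchronous work from the timing window
    start = time.perf_counter()
    with torch.no_grad():
        # Generating one greedy token isolates prefill plus the first decode step.
        model.generate(**inputs, max_new_tokens=1, do_sample=False)
    torch.cuda.synchronize()
    return time.perf_counter() - start
```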
Discriminating Form and Meaning in Multilingual Models with Minimal-Pair ABX Tasks

Maureen de Seyssel*1, Jie Chi*1,2, Skyler Seto1, Maartje ter Hoeve1, Masha Fedzechkina1, Natalie Schluter1
1Apple, 2Technical University of Denmark
{mdeseyssel,jchi2}@apple.com
*Equal contribution

Abstract

We introduce a set of training-free ABX-style discrimination tasks to evaluate how multilingual language models represent language identity (form) and semantic content (meaning). Inspired by speech processing, these zero-shot tasks measure whether minimal differences in representation can be reliably detected. This offers a flexible and interpretable alternative to probing. Applied to XLM-R (Conneau et al., 2020a) across pretraining checkpoints and layers, we find that language discrimination declines over training and becomes concentrated in lower layers, while meaning discrimination strengthens over time and stabilizes in deeper layers. We then explore probing tasks, showing some alignment between our metrics and linguistic learning performance. Our results position ABX tasks as a lightweight framework for analyzing the structure of multilingual representations.

1 Introduction

Multilingual Transformer models such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a) have become essential tools for cross-lingual NLP. Trained on large concatenated corpora spanning dozens of languages, these models learn representations that support transfer across languages even in the absence of explicit cross-lingual supervision (Wu and Dredze, 2019; Conneau et al., 2020b; Xue et al., 2021; Philippy et al., 2023, etc.). Despite their success, it remains unclear how these models internally organize linguistic form and shared meaning. Prior work suggests that multilingual models encode both language-specific information (e.g., surface forms, word order) and language-agnostic features (e.g., semantic content), but the nature and interaction of these representations is not fully understood. These encoding choices shape generalization and transfer behavior, including both positive effects (e.g., shared structure benefiting low-resource languages) and negative ones, such as the curse of multilinguality (Conneau et al., 2020a), where performance degrades due to interference across languages.

Understanding how form and meaning are represented, and how this balance evolves during pretraining, is essential to explain and improve cross-lingual transfer. If a model strongly encodes language identity, it may better avoid interference between closely related languages.[1] Conversely, if it aligns meanings across languages, it may support more effective semantic generalization. To explore this balance, we ask: How are languages represented at the form level? How well do models encode shared meanings? And how do these properties evolve across training?

Previous work has often relied on probing tasks to investigate such questions. While useful, probing requires training classifiers on top of frozen representations, and results are sensitive to probe design and task setup (Belinkov, 2022; Hewitt and Liang, 2019; Voita and Titov, 2020). This makes it difficult to isolate what is truly encoded by the model versus what is learnable with supervision. We propose a zero-shot alternative: ABX-style discrimination tasks that directly measure model representation structure without additional training.
Originating in speech processing (Schatz et al., 2013), ABX tasks evaluate whether a model reliably discriminates minimal contrasts: given a triplet (A, B, X), is X closer to A or to B? By designing minimal pairs that differ only in language or only in meaning, we isolate and quantify how well models distinguish these dimensions. Because they are contrastive, zero-shot, and training-free, these metrics can be applied across languages, checkpoints, and architectures with minimal adaptation.

[1] In fact, it was found that forcing some sort of separation in multilingual models can help somewhat alleviate these negative interferences (Pfeiffer et al., 2022; Blevins et al., 2024; Xu et al., 2024; Huang et al., 2024).

Our contributions are as follows:

1. We propose a training-free, ABX-style framework for analyzing multilingual representations by contrasting minimal pairs. Our tasks are designed to isolate language identity (form) and semantic content (meaning), offering a complementary alternative to traditional probing methods.

2. We apply this framework to XLM-R across 36 languages and 630 language pairs, analyzing all pretraining checkpoints and layers. We show that language and meaning discrimination evolve in parallel but are not mutually exclusive: different layers vary in the degree to which they encode each axis.

3. We relate ABX discrimination scores to downstream performance on POS tagging, NER, and NLI. We find that form-oriented tasks correlate more strongly with language discrimination, while NLI, a semantic task, shows no consistent relationship to either axis, highlighting a disconnect between task performance and intrinsic representational structure.

2 Related Work

Multilingual language models are expected to support cross-lingual generalization by encoding both language-specific form and shared semantic content. However, existing evaluation methods typically focus on one of these dimensions in isolation. This section reviews prior work on analyzing multilingual representations and highlights the need for a unified, training-free framework that jointly evaluates both language identity and meaning in a controlled, contrastive setting.

2.1 Evaluating Form and Content in Multilingual Representations

Multilingual pretrained language models aim to map diverse languages into a shared embedding space. This allows for zero-shot and cross-lingual transfer, but raises the question of how these models balance language-specific and language-agnostic features during training.

Content-focused evaluations typically focus on cross-lingual alignment, using methods such as translation retrieval to measure whether semantically equivalent inputs in different languages are mapped to nearby embeddings (Sundar et al., 2025; Pires et al., 2019; Libovický et al., 2020; Hu et al., 2020). Models like LASER (Artetxe and Schwenk, 2019) and LaBSE (Feng et al., 2022) are explicitly trained to optimize such alignment. More recent work introduces contrastive alignment scores such as DALI (Ravisankar et al., 2025) to better capture meaning equivalence. However, these approaches abstract away from language identity and provide little insight into how models handle surface-form distinctions across languages.

Conversely, form-focused evaluations examine how well a model encodes language identity. Clustering analyses show that multilingual embeddings often group by language or script, particularly in lower layers (Libovický et al., 2020; Choenni and Shutova, 2022).
Classifiers trained on frozen representations can often identify the input language with high accuracy (Choenni and Shutova, 2022), but this depends on probe training and may not reflect the geometry of the representation space itself.

These two evaluation paradigms have remained largely separate. To our knowledge, no existing method allows for simultaneous, controlled evaluation of both dimensions without relying on task-specific training. As a result, we lack a unified evaluation framework that can directly assess both dimensions under comparable, controlled conditions.

2.2 Training Dynamics and Linguistic Emergence in Multilingual Models

Several studies have examined how multilingual representations evolve during pretraining. Blevins et al. (2022) tracked the emergence of linguistic knowledge in XLM-R (Conneau et al., 2020a), showing that different properties emerge at different layers and stages, and that the best-performing checkpoint varies across languages and tasks. Other studies have shown that multilingual models sometimes internally pivot through high-resource languages like English when processing low-resource inputs (Wendler et al., 2024; Schut et al., 2025), while other research suggests that these models juggle both language-specific and language-neutral features (Tang et al., 2024; Libovický et al., 2020; Tanti et al., 2021). These works highlight the complex interplay between form and content in multilingual models and how this balance shifts over time. However, they again mainly rely on task-specific probes or downstream evaluations, which do not offer a way to disentangle form and content in a direct, unsupervised way.

2.3 Prior Uses of ABX Evaluation

The ABX framework offers a contrastive, classifier-free means of evaluating representational structure in a controlled, unsupervised setting. Originally developed in speech processing and psycholinguistics (Schatz et al., 2013, 2014; Schatz, 2016), ABX tests ask whether a test item X is more similar (in embedding space) to a reference item A or to an alternative B. By controlling the design of A, B, and X, ABX evaluations can isolate specific factors of interest (such as phoneme identity in speech) while holding others constant (e.g., speaker, context) (Versteegh et al., 2015; Dunbar et al., 2017; Hallap et al., 2022; Sicherman and Adi, 2023, etc.). ABX tasks have proven robust to variability from other categorical structures, enabling reliable measurement of the target factor even when other linguistic or speaker-related properties vary (Schatz, 2016).

While ABX has primarily been applied to phoneme discrimination, recent work has begun adapting it to other tasks, testing models' ability to discriminate between languages (Carbajal et al., 2016; de Seyssel and Dupoux, 2020; de Seyssel, 2023) and speakers (Thorburn et al., 2019; de Seyssel et al., 2022), and to evaluate syntactic or semantic distinctions (Algayres et al., 2022, 2023).

Our work builds on this foundation by adapting ABX discrimination to text-based multilingual models. We propose a set of zero-shot tasks that independently measure sensitivity to language identity and semantic content using minimal contrast triplets. To our knowledge, this is the first unified, training-free framework that systematically isolates and evaluates these two core dimensions of multilingual representation.

3 Our ABX Discrimination Framework

Understanding how multilingual models structure linguistic information in their internal representations is key to explaining how different languages interact within them and, by extension, their generalization behavior.
To directly assess the intrinsic structure of multilingual representations without relying on the pitfalls of extrinsic evaluation, we adapt the ABX discrimination paradigm, originally developed for evaluating speech embeddings, to the text domain.

In the original ABX framework (Schatz et al., 2013, 2014; Schatz, 2016), illustrated in Figure 1, three items (A, B, X) are presented, with A and B belonging to different categories, and X matching the category of either A or B. A model is successful when X is closer (according to a distance metric in embedding space) to the item that shares its category. That is, for each triplet, a correct decision is recorded when d(X, A) < d(X, B), with X and A sharing the same category. The score for a given triplet is computed as:

score(A, B, X) = 1[d(X, A) < d(X, B)]

where 1 denotes the indicator function. The overall ABX score is the average success rate across all triplets. Importantly, control variables can be introduced to eliminate bias from confounding factors. In that case, both A and B share the same control variable to ensure that the discrimination is based solely on the variable of interest.

Figure 1: Illustration of the ABX discrimination task. A and X share the target variable, whereas B differs. Control variables may be included, with A and B sharing the same control variable.

The ABX score reflects the proportion of correct decisions, with higher scores indicating better discrimination. We apply this setup to sentence embeddings extracted from XLM-R at various layers and checkpoints, where each sentence is represented by the mean-pooled embedding of its subword tokens. Cosine similarity is used as the distance metric.[2]

[2] We choose cosine as it is the standard metric in many embedding-based evaluations, particularly in multilingual sentence retrieval and alignment tasks (e.g., Ravisankar et al., 2025; Sundar et al., 2025; Mohammadshahi et al., 2019). Cosine is well suited to measuring relative orientation in high-dimensional spaces and is less sensitive to differences in embedding magnitude, which makes it particularly effective for comparing representations across languages and layers.

We propose two ABX variants for studying multilingual language models: language discrimination (LD) and meaning discrimination (MD). Both tasks leverage paired multilingual data and are constructed to isolate either language identity or semantic content while controlling for the other. These tasks enable zero-shot, training-free evaluation of key representational properties in multilingual models. We present both tasks below; see Appendix B for further illustrations and examples.

Language Discrimination In the LD task, the objective is to assess whether the model can distinguish between embedding representations from different languages while controlling for meaning. In other words, the focus is on determining whether the form of the language is encoded in the representations sufficiently to discriminate between languages. Triplets are constructed as follows: X comes from language L1 and carries meaning M1; A, the target, is also from language L1 but conveys a different meaning M2; and B, the distractor, is from another language L2 but shares the same meaning M2 as A, hence controlling for meaning (see Appendix B for an illustration). The task is successful if the model places X closer to A than to B, i.e., if d(X, A) < d(X, B).
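The scoring rule above is simple to implement. The sketch below is a minimal illustration (not the authors' released code): it mean-pools subword embeddings from a given layer, uses cosine distance, and averages the triplet decision over a set of (A, B, X) triplets. The public xlm-roberta-base checkpoint and the layer index are our assumptions; the paper uses the retrained checkpoints of Blevins et al. (2022). The same scorer serves both the LD and MD tasks.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Public checkpoint used for illustration; the paper evaluates the
# XLM-R checkpoints retrained by Blevins et al. (2022).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def embed(sentence: str, layer: int = 8) -> torch.Tensor:
    """Mean-pooled embedding of the sentence's subword tokens at one layer."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer]
    mask = enc["attention_mask"].unsqueeze(-1)  # zero out padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def cosine_distance(u: torch.Tensor, v: torch.Tensor) -> float:
    return 1.0 - torch.nn.functional.cosine_similarity(u, v).item()

def abx_score(triplets, layer: int = 8) -> float:
    """Fraction of (A, B, X) triplets where X is closer to A than to B."""
    correct = 0
    for a, b, x in triplets:
        ea, eb, ex = embed(a, layer), embed(b, layer), embed(x, layer)
        correct += cosine_distance(ex, ea) < cosine_distance(ex, eb)
    return correct / len(triplets)
```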
Meaning Discrimination In the MD task, we test whether the model captures differences in meaning while holding language constant. The goal is to evaluate whether semantic content is encoded in the representations independently of surface form. Triplets are constructed such that A and X share the same meaning M1 but come from different languages (L1 and L2), while B is in the same language as A (L2) but conveys a different meaning M2. The model is considered successful if it places X closer to A than to B, indicating that it encodes semantic similarity across languages, beyond surface-level language identity.

While LD primarily probes the presence of language-specific information, MD offers a more direct lens on semantic similarity. High MD scores, especially across languages, suggest that the model encodes meaning in a way that is at least partially language-agnostic. As such, MD may serve as a proxy for cross-lingual semantic alignment within the representation space. In fact, we show in Section 4.1 that a standard cross-lingual retrieval task, commonly used to assess such alignment, correlates highly with our MD task, supporting the idea that MD captures cross-lingual alignment.[3] We do not perform a similar analysis for LD, as no existing metric captures the specific abilities assessed by our language ABX task.

[3] The high correlation does not imply they are identical. Our ABX MD task targets the same underlying ability under more controlled conditions: instead of ranking many candidates, MD ABX uses contrastive triplets that isolate semantic differences while tightly controlling for language.

4 Discrimination dynamics in a multilingual model

4.1 General Experimental Setup

Model To study pretraining dynamics in a multilingual setting, we use the base version of XLM-R (Conneau et al., 2020a) (L = 12, H = 768, A = 12, 270M parameters), a widely used multilingual masked language model. Specifically, we rely on the checkpoints released by Blevins et al. (2022), who retrained XLM-R from scratch in order to examine the evolution of language representations during pretraining.[4] All evaluations and analyses in this work are based on the representations from these checkpoints.

[4] Details of the pretraining scheme can be found in Blevins et al. (2022).

ABX Languages and Dataset We construct ABX triplets and perform evaluations using the WMT24++ dataset (Deutsch et al., 2025), a multilingual corpus of 55 languages with sentence-level alignments across all language pairs. From this corpus, we select 36 languages spanning a broad range of families, scripts, and typological features (see Appendix A for the complete list). This selection yields 630 unordered language pairs. Triplets are sampled randomly from aligned sentence pairs, ensuring that each triplet satisfies the relevant ABX condition (form or meaning) and that the sample size is sufficient to ensure broad and unbiased coverage. For each evaluation mode and language pair, we generate approximately 100,000 triplets.
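As a concrete illustration of this construction, the sketch below samples one LD and one MD triplet from a sentence-aligned bitext for a language pair (L1, L2). The bitext variable and its two example pairs are placeholders; in the paper, aligned sentences come from WMT24++. Triplets produced this way can be fed directly to an ABX scorer such as the abx_score sketch in Section 3.

```python
import random

# Placeholder bitext: (L1 sentence, L2 sentence) translation pairs.
bitext = [
    ("The weather is nice today.", "La météo est bonne aujourd'hui."),
    ("I need to buy groceries.", "Je dois acheter des provisions."),
]

def sample_ld_triplet(pairs):
    """LD: A and X share language L1; B is A's L2 translation (controls meaning)."""
    (x_l1, _), (a_l1, b_l2) = random.sample(pairs, 2)  # two distinct meanings
    return a_l1, b_l2, x_l1  # correct decision: X closer to A (same language)

def sample_md_triplet(pairs):
    """MD: A and X share a meaning across L1/L2; B shares A's language L2."""
    (x_l1, a_l2), (_, b_l2) = random.sample(pairs, 2)
    return a_l2, b_l2, x_l1  # correct decision: X closer to A (same meaning)
```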
Unless stated otherwise, we report discrimination scores averaged across all layers. In most analyses, we present scores separately by checkpoint to track how discrimination abilities evolve during training. In addition to language-pair scores, we compute a global LD or MD score for each language, defined as the average across all pairings with the other 35 languages. These global metrics offer a higher-level view of how well a language is discriminated or semantically aligned within the multilingual space.

Validation of ABX Metrics To validate our metrics, we perform two control analyses. First, we confirm that both ABX scores return values at or near chance (0.5) under a "baseline" setup, where the variable of interest (language for LD, meaning for MD) is held constant across all three elements of the triplet. This serves as a sanity check to rule out bias in the construction of the triplets or the evaluation procedure. Second, we compare MD scores at the final checkpoint with performance on a standard cross-lingual retrieval task. Following the setup of Sundar et al. (2025), we compute, for each language pair (L1, L2), the top-1 accuracy of retrieving the most semantically similar sentence in language L2 given a sentence in language L1. The retrieval pool consists of L2 candidates from the WMT24 dataset (and vice versa). While retrieval tasks typically rely on mean-pooled representations from the final layer, our ABX evaluations average scores across all layers. Despite this difference, we find a strong correlation between the two metrics (Pearson r = 0.77).[5] This supports the validity of ABX as a proxy for semantic alignment. Importantly, ABX goes further by explicitly controlling for surface form (in the case of MD), enabling a more fine-grained assessment of the model's semantic representations.

[5] When both use last-layer embeddings, r = 0.73; when comparing last-layer retrieval to all-layer ABX, Pearson drops to r = 0.53 but remains significant (p < 0.001).

4.2 Experiments

We begin by analyzing how the model's ability to discriminate between language identity (form) and semantic meaning (content) evolves during training. Figures 2, 3, and 4 present complementary views of these dynamics across checkpoints and layers.

Checkpoint-level evolution. Figure 2 shows the evolution of average LD and MD scores across checkpoints, aggregated over all language pairs. First of all, we can see that all scores are consistently above the 0.5 baseline, meaning that the model, at all checkpoints, can discriminate between languages and between meanings (cross-lingually) to some extent. The LD score declines rapidly during early training steps and gradually recovers in later stages, while the MD score steadily improves. This suggests that as training progresses, the model increasingly prioritizes semantic abstraction over explicit language-specific cues.

Figure 2: Language and meaning ABX discrimination scores across checkpoints (averaged over layers and all language pairs). The baseline score is 0.5.

We also observe a negative correlation between the two measures when considering all language pairs across checkpoints (Spearman's ρ = −0.74, p < 0.001; Pearson's r = −0.68, p < 0.001), computed over individual (language pair × checkpoint) points. We also ensure that this correlation is not merely driven by training dynamics by examining the final checkpoint (step 150,000) in isolation. The relationship remains strong (Spearman's ρ = −0.83, p < 0.001; Pearson's r = −0.72, p < 0.001), confirming that language pairs that are more separable by form tend to exhibit lower meaning preservation, even in the fully trained model.
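These correlations are straightforward to reproduce from per-pair scores. A minimal sketch with SciPy follows; the score arrays are dummy placeholder values, one entry per (language pair × checkpoint) point:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Dummy placeholder scores, one entry per (language pair, checkpoint) point.
ld_scores = np.array([0.92, 0.88, 0.81, 0.95, 0.77, 0.85])
md_scores = np.array([0.61, 0.66, 0.72, 0.58, 0.75, 0.69])

rho, p_rho = spearmanr(ld_scores, md_scores)  # monotonic (rank) association
r, p_r = pearsonr(ld_scores, md_scores)       # linear association
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g}); Pearson r = {r:.2f} (p = {p_r:.3g})")
```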
Layer-level patterns. To better understand how these abilities are distributed within the model, Figure 3 plots discrimination scores across layers for the final checkpoint. LD is strongest in the lower layers and gradually decreases with depth, reaching a plateau in the upper layers before rising again in the final layer. In contrast, MD starts lower but quickly rises and remains high in the upper layers, decreasing slightly in the last layer. This pattern suggests that earlier layers focus more on identifying the language of the input, while later layers capture its meaning more effectively.

We also find a significant negative correlation between language and meaning discrimination across layers (Spearman's ρ = −0.66, p < 0.001; Pearson's r = −0.53, p < 0.001). This indicates that, as representations evolve through the network, increases in meaning discrimination are generally accompanied by decreases in language separability. However, the correlation coefficients are weaker than those observed across language pairs, suggesting that this trade-off is not strictly enforced at the layer level. Instead, the model exhibits a more flexible allocation of representational capacity across form and meaning over its depth.

Figure 3: Language and meaning ABX discrimination scores across layers (averaged over all language pairs) for the last checkpoint (step 150,000).

Joint checkpoint and layer dynamics. Figure 4 presents ABX discrimination scores as a function of both checkpoint and layer. LD (left panel) is initially high across most layers but gradually becomes concentrated in the lower layers and the output layer as training advances. In contrast, MD (right panel) improves steadily across all layers, especially in the deeper ones.

Taken together, these patterns suggest that the model initially relies heavily on language-specific features but gradually shifts toward encoding more abstract, language-invariant semantic structures. Importantly, the two forms of discrimination are not strictly opposed at the layer level. While a trade-off exists, its moderate strength suggests that the model can support both language sensitivity and semantic alignment to some degree simultaneously.

We provide an additional analysis in Appendix D, showing how both discrimination scores vary across individual languages and training checkpoints.

4.3 Discussion

These findings support the view that pretraining leads to a progressive decoupling of surface form and semantic content. Early in training, language identity is clearly encoded across the model. As training proceeds, this information becomes increasingly concentrated in the lower layers, while deeper layers develop language-invariant semantic representations. This aligns with prior work suggesting that lower layers encode form-related properties, while higher layers abstract away toward more conceptual information (Pires et al., 2019; Tenney et al., 2019). Notably, at convergence, several middle layers appear to support both types of discrimination to a moderate degree, suggesting a partial overlap between structural and semantic signals rather than strict exclusivity.

5 Correlation of ABX discrimination metrics with linguistic learning

We then examine whether the model's discrimination patterns relate to linguistic task performance, focusing on monolingual probing and cross-lingual transfer.
5.1 Experimental Setup

Following Blevins et al. (2022), we evaluate both monolingual probing and cross-lingual transfer to test how our ABX discrimination metrics relate to linguistic generalization. We use part-of-speech tagging (POS), named entity recognition (NER), and natural language inference (NLI) as representative tasks. POS and NLI were used in the original analysis; we additionally include NER, which offers a complementary view of lexical-level information and the form-content divide. These tasks span different linguistic levels, from surface form to sentence-level semantics.

All probes are trained independently per language with early stopping on validation loss. We run 6 iterations per setup with different random seeds and report average results. Unless otherwise specified, all experiments use the final XLM-R checkpoint (step 150,000).

Part-of-Speech Tagging (POS) We use Universal Dependencies (UD) (Nivre et al., 2020). Monolingual performance is evaluated on all 36 languages from our ABX setup, using standard UD splits. For cross-lingual transfer, we follow Blevins et al. (2022) and use the Parallel UD (PUD) subset at test time, covering 18 languages (Appendix A).

Named Entity Recognition (NER) We use WikiAnn (Rahimi et al., 2019), which provides NER labels in 36 languages. Monolingual evaluation mirrors the POS setup. NER was not included in prior analyses and serves as a new probe of lexical-level representations. For cross-lingual transfer, we use the same 18-language subset as for POS.

Natural Language Inference (NLI) For NLI, we use XNLI (Conneau et al., 2018), a multilingual extension of standard NLI benchmarks. We evaluate both monolingual and cross-lingual performance on the 13 XNLI languages that overlap with our 36-language set.

Figure 4: Evolution of ABX discrimination scores across model checkpoints and layers. Dark regions indicate higher discrimination scores. (Left: LD; right: MD.)

5.2 Discrimination Scores and Monolingual Linguistic Probing

Following prior work (Blevins et al., 2022), we find that the best checkpoint for probe accuracy varies across tasks and languages (see Appendix E).[6] To assess whether language discrimination (LD) or meaning discrimination (MD) predicts probing performance, we regress POS, NER, and NLI accuracy against each language's global LD and MD scores, using both the final checkpoint (Last) and the mean across checkpoints (Avg.). For each setting, we fit a multiple linear regression of the form:

Accuracy ~ β0 + β1·LD + β2·MD + ε

[6] We exclude checkpoint 450,000 from all analyses due to a training instability, probably due to gradient clipping, that affects both probing and discrimination metrics (see Figure 2).
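This regression is easy to reproduce given per-language scores; below is a minimal sketch using the statsmodels formula API. The data-frame values are dummy placeholders; in practice there is one row per language, carrying its probing accuracy and its global LD/MD scores.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per language; dummy placeholder values for illustration.
df = pd.DataFrame({
    "accuracy": [0.91, 0.88, 0.85, 0.93, 0.87, 0.90],  # e.g., POS probing accuracy
    "LD": [0.82, 0.86, 0.91, 0.78, 0.88, 0.80],        # global language discrimination
    "MD": [0.74, 0.70, 0.66, 0.77, 0.69, 0.73],        # global meaning discrimination
})

# Accuracy ~ b0 + b1*LD + b2*MD + eps
fit = smf.ols("accuracy ~ LD + MD", data=df).fit()
print(fit.summary())  # reports R^2, coefficients, and p-values as in Table 1
```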
Table 1 summarizes the results.[7] For POS, language discrimination is a robust negative predictor of accuracy, while meaning discrimination shows no significant effect. This suggests that languages that are more easily distinguishable from others (i.e., with higher LD scores) tend to perform worse on syntactic probing tasks, consistent with the idea that strong language-specific encoding may hinder generalization of structural information across languages (see Appendix F for a visualization). In contrast, neither LD nor MD significantly predicts performance on NER or XNLI. These tasks may depend less consistently on cross-lingual structural overlap than POS, which could explain the absence of LD as a predictor. While one might expect MD to be predictive, especially for NLI, which is explicitly semantic in nature, both tasks may rely on aspects of meaning not well captured by our ABX-based definition of semantic alignment.

[7] We also ensure that these effects are not driven by training data size: language-wise probing accuracy shows no significant correlation with pretraining data quantities (taken from Conneau et al. (2020a)).

Setting | Task | Ckpt | R²     | LD Coef (p)      | MD Coef (p)
Prob.   | POS  | Avg. | 0.395  | −2.34 (p < .01)  | −0.26 (n.s.)
Prob.   | POS  | Last | 0.37   | −1.78 (p < .01)  | −0.36 (n.s.)
Prob.   | NER  | Avg. | 0.085  | −0.578 (n.s.)    | −0.074 (n.s.)
Prob.   | NER  | Last | 0.087  | −0.584 (n.s.)    | −0.147 (n.s.)
Prob.   | NLI  | Avg. | 0.275  | −0.07 (n.s.)     | −0.12 (n.s.)
Prob.   | NLI  | Last | 0.224  | −0.08 (n.s.)     | −0.14 (n.s.)
CL      | POS  | Last | 0.324  | −1.66 (p < .001) | −0.07 (n.s.)
CL      | NER  | Last | 0.146  | −0.51 (p < .001) | +0.06 (n.s.)
CL      | NLI  | Last | 0.3009 | −0.10 (n.s.)     | −0.015 (n.s.)

Table 1: Summary of linear regression results predicting task accuracy from language discrimination (LD) and meaning discrimination (MD) scores. Each row corresponds to a probing (Prob.) or cross-lingual (CL) evaluation setting.

We also explore whether ABX scores can guide language-specific checkpoint selection, under the hypothesis that lower language discrimination might signal better generalization. We find that LD-based ABX selection improves performance for POS (see Appendix G for details).

5.3 Discrimination Scores and Cross-Lingual Transfer

We evaluate cross-lingual transfer on POS, NER, and NLI at the final checkpoint. As originally found by Blevins et al. (2022), transfer accuracy varies widely across source-target pairs (see Appendix H for detailed heatmaps). To test whether ABX discrimination explains this variation, we fit linear regression models predicting transfer accuracy from LD and MD scores between language pairs (see Table 1). We find that LD is a significant negative predictor for both POS and NER. Neither LD nor MD is predictive of NLI performance. This supports the view that strong language-specific encoding can hinder generalization across languages (see Appendix I for a visualization). We also test whether ABX language discrimination can guide source-language selection for transfer. While it does not consistently identify the single best source, it often selects competitive candidates and outperforms random baselines (see Appendix J).

5.4 Discussion

A key finding is that language discrimination negatively correlates with POS performance in both monolingual and cross-lingual settings. For NER, LD is predictive only in the cross-lingual setting, suggesting that language-specific encoding affects transfer between languages but has less impact on within-language structure. Importantly, the interpretation of ABX discrimination scores differs slightly between settings. In monolingual probing, LD/MD scores are averaged across all language pairs per language, while cross-lingual transfer uses pair-specific scores for each source-target combination. This finer granularity may help capture transfer-specific effects, explaining why LD predicts cross-lingual NER but not monolingual performance: interference may depend more on the relationship between particular languages than on a language's overall discriminability. Overall, these results suggest that when a language is highly discriminable from others, its representations may become more isolated, reducing structural sharing and hindering transfer. In the case of POS, this is especially apparent in monolingual probing, where high LD may reflect a failure to encode shared syntactic patterns.
By contrast, MD does not significantly predict downstream accuracy in any task. While one might expect MD to relate to semantically oriented tasks like NLI, success there may depend on higher-level reasoning unaccounted for by our contrastive ABX metric. Prior work has also highlighted problematic annotation artifacts, not to mention hypothesis-only biases, in the original SNLI dataset from which XNLI was developed, which limit its use for measuring semantic generalization (Poliak et al., 2018; Gururangan et al., 2018).

6 Conclusion and Future Work

This work introduces ABX-style discrimination metrics for testing how multilingual encoder models discriminate language identity (form) and semantic content (meaning). Adapting ABX to text-based multilingual models, we provide a lightweight, interpretable tool for analysing representational structure.

Applied to XLM-R (Conneau et al., 2020a), our analysis reveals consistent trends across training and depth: language discrimination decreases and concentrates in lower layers, while meaning discrimination increases and stabilizes in deeper ones. This suggests a shift from form-sensitive to meaning-oriented representations during training, without implying a strict trade-off. We also examine how these metrics relate to downstream performance. Higher language discrimination correlates with lower accuracy on form-sensitive tasks such as POS and NER, while meaning discrimination shows no consistent link, pointing to a possible disconnect between representational alignment and task requirements. These findings position ABX discrimination as a useful metric for analyzing how multilingual models separate linguistic form from content. They offer a new lens on the evolving structure of multilingual representation spaces and the balance between language-specific and language-invariant information. These metrics could also support practical use cases, such as adaptive checkpoint selection or lightweight diagnostics in multilingual pipelines.

Future work can build on this in several directions. First, discrimination patterns could be related to typological linguistic features; it is worth investigating how these typological differences influence form and content discrimination scores, as previous work has found positive transfer when pairing typologically similar languages (Wu and Dredze, 2020). Second, while we evaluated tasks spanning syntax and semantics (POS, NER, NLI), deeper semantic tasks could better test the role of meaning discrimination. Third, because ABX metrics are architecture-agnostic, they can be applied to decoder-only LLMs, enabling cross-architecture comparisons. Finally, while ABX does not directly measure representational separation, high discrimination may suggest that a language occupies a distinct subspace. This raises a broader question: how much language sensitivity (i.e., the ability to discriminate languages) can a model have without harming cross-lingual transfer, and how can models balance the trade-off between promoting representational sharing and avoiding interference?

Limitations

While our approach provides a detailed analysis of discrimination in multilingual models, it comes with a number of limitations that constrain its generality and suggest directions for future research.

Encoder architecture Our analysis focuses exclusively on encoder-only architectures, specifically XLM-R. This choice is motivated by the fact that encoder models based on masked language modeling provide stable, structured, layer-wise representations, which are well suited to probing and contrastive analysis.
While this makes them a natural starting point for validating our ABX discrimination framework, it remains an open question whether similar dynamics hold for decoder-only or encoder-decoder models, which are trained using autoregressive or sequence-level objectives. Extending our framework to such architectures is an important direction for future work.

Discrimination vs Separation Although we distinguish clearly between language and meaning discrimination, we do not explicitly quantify separation in the representation space (e.g., via clustering structure or inter-class variance). Our results suggest that discrimination scores may indirectly reflect separation, but further work is needed to validate this link and to determine whether a model can be discriminative without being structurally partitioned.

Task coverage Our evaluation focuses on POS tagging, NER, and NLI, which primarily target syntactic and sentence-level semantic understanding. While these tasks are widely used and informative, they may not capture deeper semantic, pragmatic, or discourse-level capabilities. As a result, the role of meaning discrimination in supporting more abstract or context-sensitive generalization remains an open question.

Cross-linguistic generality vs. language-specific phenomena Finally, our analysis examines language and meaning discrimination broadly across multiple languages, but does not investigate the intricacies of specific languages or language families. Languages exhibit unique structural properties, morphological complexity, and semantic nuances that may be represented differently in multilingual models. Future work should explore language-specific discrimination patterns, particularly for typologically diverse languages, to better understand how models encode both universal and language-specific linguistic properties. This would provide insights into representational trade-offs that occur when accommodating multiple languages within a shared parameter space.

References

Robin Algayres, Yossi Adi, Tu Nguyen, Jade Copet, Gabriel Synnaeve, Benoît Sagot, and Emmanuel Dupoux. 2023. Generative spoken language model based on continuous word-sized audio tokens. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3008–3028.

Robin Algayres, Tristan Ricoul, Julien Karadayi, Hugo Laurençon, Salah Zaiem, Abdelrahman Mohamed, Benoît Sagot, and Emmanuel Dupoux. 2022. DP-Parse: Finding word boundaries from raw speech with an instance lexicon. Transactions of the Association for Computational Linguistics, 10:1051–1065.

Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610.

Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207–219.

Terra Blevins, Hila Gonen, and Luke Zettlemoyer. 2022. Analyzing the mono- and cross-lingual pretraining dynamics of multilingual language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3575–3590.

Terra Blevins, Tomasz Limisiewicz, Suchin Gururangan, Margaret Li, Hila Gonen, Noah A. Smith, and Luke Zettlemoyer. 2024. Breaking the curse of multilinguality with cross-lingual expert language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10822–10837.

Maria Julia Carbajal, Emmanuel Dupoux, and 1 others. 2016. Modeling language discrimination in infants using i-vector representations. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 38.
Rochelle Choenni and Ekaterina Shutova. 2022. Investigating language relationships in multilingual sentence encoders through the lens of linguistic typology. Computational Linguistics, 48(3):635–672.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485.

Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–6034.

Maureen de Seyssel. 2023. Unsupervised multilingual models of speech representation, an approach inspired by cognitive science. Ph.D. thesis, Ecole Normale Supérieure (ENS).

Maureen de Seyssel and Emmanuel Dupoux. 2020. Does bilingual input hurt? A simulation of language discrimination and clustering using i-vectors. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 42.

Maureen de Seyssel, Guillaume Wisniewski, and Emmanuel Dupoux. 2022. Is the language familiarity effect gradual? A computational modelling approach. In CogSci 2022 - 44th Annual Meeting of the Cognitive Science Society.

Daniel Deutsch, Eleftheria Briakou, Isaac Caswell, Mara Finkelstein, Rebecca Galor, Juraj Juraska, Geza Kovacs, Alison Lui, Ricardo Rei, Jason Riesa, and 1 others. 2025. WMT24++: Expanding the language coverage of WMT24 to 55 languages & dialects. arXiv preprint arXiv:2502.12404.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. 2017. The zero resource speech challenge 2017. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 323–330. IEEE.

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112.

Mark Hallap, Emmanuel Dupoux, and Ewan Dunbar. 2022. Evaluating context-invariance in unsupervised speech representations. arXiv preprint arXiv:2210.15775.

John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. In Proceedings of the 37th International Conference on Machine Learning, pages 4411–4421.

Yongxin Huang, Kexin Wang, Goran Glavaš, and Iryna Gurevych. 2024. Modular sentence encoders: Separating language specialization from cross-lingual alignment. arXiv preprint arXiv:2407.14878.

Jindřich Libovický, Rudolf Rosa, and Alexander Fraser. 2020. On the language neutrality of pre-trained multilingual representations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1663–1674.

Alireza Mohammadshahi, Rémi Lebret, and Karl Aberer. 2019. Aligning multilingual word embeddings for cross-modal retrieval task. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), pages 11–17.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034–4043.

Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495.

Fred Philippy, Siwen Guo, and Shohreh Haddadan. 2023. Towards a common understanding of contributing factors for cross-lingual transfer in multilingual language models: A review. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5877–5891.

Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191.

Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 151–164. Association for Computational Linguistics.

Kartik Ravisankar, Hyojung Han, and Marine Carpuat. 2025. Can you map it to English? The role of cross-lingual alignment in multilingual performance of LLMs. arXiv preprint arXiv:2504.09378.

Thomas Schatz. 2016. ABX-discriminability measures and applications. Ph.D. thesis, Université Paris 6 (UPMC).

Thomas Schatz, Vijayaditya Peddinti, Francis Bach, Aren Jansen, Hynek Hermansky, and Emmanuel Dupoux. 2013. Evaluating speech features with the minimal-pair ABX task: Analysis of the classical MFC/PLP pipeline. In INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association, pages 1–5.

Thomas Schatz, Vijayaditya Peddinti, Xuan-Nga Cao, Francis R. Bach, Hynek Hermansky, and Emmanuel Dupoux. 2014. Evaluating speech features with the minimal-pair ABX task (II): Resistance to noise. In INTERSPEECH, pages 915–919.

Lisa Schut, Yarin Gal, and Sebastian Farquhar. 2025. Do multilingual LLMs think in English? arXiv preprint arXiv:2502.15603.
Amitay Sicherman and Yossi Adi. 2023. Analysing discrete self supervised speech representation for spoken language modeling. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.

Anirudh Sundar, Sinead Williamson, Katherine Metcalf, Barry-John Theobald, Skyler Seto, and Masha Fedzechkina. 2025. Steering into new embedding spaces: Analyzing cross-lingual alignment induced by model interventions in multilingual language models. arXiv preprint arXiv:2502.15639.

Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Wayne Xin Zhao, Furu Wei, and Ji-Rong Wen. 2024. Language-specific neurons: The key to multilingual capabilities in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5701–5715.

Marc Tanti, Lonneke van der Plas, Claudia Borg, and Albert Gatt. 2021. On the language-specificity of multilingual BERT and the impact of fine-tuning. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 214–227.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–4601.

Craig A. Thorburn, Naomi H. Feldman, and Thomas Schatz. 2019. A quantitative model of the language familiarity effect in infancy. In Proceedings of the Conference on Cognitive Computational Neuroscience.

Maarten Versteegh, Roland Thiolliere, Thomas Schatz, Xuan-Nga Cao, Xavier Anguera, Aren Jansen, and Emmanuel Dupoux. 2015. The zero resource speech challenge 2015. In Interspeech, volume 15, pages 3169–3173.

Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183–196.

Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. 2024. Do llamas work in English? On the latent language of multilingual transformers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15366–15394.

Shijie Wu and Mark Dredze. 2019. Beto, Bentz, Becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844.

Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120–130.

Haoran Xu, Kenton Murray, Philipp Koehn, Hieu Hoang, Akiko Eriguchi, and Huda Khayrallah. 2024. X-ALMA: Plug & play modules and adaptive rejection for quality translation at scale. arXiv preprint arXiv:2410.03115.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498.

A Languages Used in Evaluations

Table 2 lists all languages selected for our different evaluations, including ABX discrimination tasks and probing tasks. The selection covers a wide range of language families, scripts, and typological characteristics.
families, scripts, and typological characteristics. Code Language ABX POS NER NLI mono. CL mono. CL mono. CL ar Arabic ✓ ✓ ✓ ✓ ✓ ✓ ✓ bg Bulgarian ✓ ✓ ✓ ✓ ✓ ✓ ca Catalan ✓ ✓ ✓ cs Czech ✓ ✓ ✓ ✓ ✓ da Danish ✓ ✓ ✓ de German ✓ ✓ ✓ ✓ ✓ ✓ ✓ el Greek ✓ ✓ ✓ ✓ ✓ ✓ en English ✓ ✓ ✓ ✓ ✓ ✓ ✓ es Spanish ✓ ✓ ✓ ✓ ✓ ✓ ✓ et Estonian ✓ ✓ ✓ fa Persian ✓ ✓ ✓ fi Finnish ✓ ✓ ✓ ✓ ✓ fr French ✓ ✓ ✓ ✓ ✓ ✓ ✓ he Hebrew ✓ ✓ ✓ hi Hindi ✓ ✓ ✓ ✓ ✓ ✓ ✓ hr Croatian ✓ ✓ ✓ hu Hungarian ✓ ✓ ✓ is Icelandic ✓ ✓ ✓ ✓ ✓ it Italian ✓ ✓ ✓ ✓ ✓ ja Japanese ✓ ✓ ✓ ✓ ✓ ko Korean ✓ ✓ ✓ ✓ ✓ lv Latvian ✓ ✓ ✓ nl Dutch ✓ ✓ ✓ pl Polish ✓ ✓ ✓ ✓ ✓ pt Portuguese ✓ ✓ ✓ ✓ ✓ ro Romanian ✓ ✓ ✓ ru Russian ✓ ✓ ✓ ✓ ✓ ✓ ✓ sk Slovak ✓ ✓ ✓ sl Slovenian ✓ ✓ ✓ sr Serbian ✓ ✓ ✓ sv Swedish ✓ ✓ ✓ ✓ ✓ tr Turkish ✓ ✓ ✓ ✓ ✓ ✓ ✓ uk Ukrainian ✓ ✓ ✓ ur Urdu ✓ ✓ ✓ ✓ ✓ ✓ vi Vietnamese ✓ ✓ ✓ ✓ ✓ zh Chinese ✓ ✓ ✓ ✓ ✓ ✓ ✓ Table 2: Languages and related ISO codes used in discrimination evaluations (ABX), and probing tasks (mono for monolingual probing and CL for cross- lingual). A checkmark indicates the language is used in that task/subset. B Illustration of Language and Meaning ABX Setups Figure 5 illustrates our adaptation of the ABX discrimination paradigm for evaluating multilin- gual text representations. The figure depicts our two complementary evaluation setups: Language Discrimination (LD) and Meaning Discrimination (MD). In both setups, we follow a consistent structure where AandXshare the variable of interest (the property we want the model to discriminate), whileBandXshare a control variable (the property we want to control for). Success is measured by whether the model places Xcloser to Athan to B in the embedding space. For the Language Discrimination task (left panel), the variable of interest is language iden- tity, while meaning serves as the control variable. Specifically, AandXshare the same language (L1) but express different meanings, while Aand Bshare the same meaning but are expressed in different languages. When d(X, A)< d(X, B), the model successfully discriminates based on lan- guage identity despite semantic differences. An example is given below: •X: “The weather is nice today.” (English ( L1), meaning M1) •A: “I need to buy groceries.” (English ( L1), meaning M2) •B: “Je dois acheter des provisions.” (French (L2), meaning M2: “I need to buy groceries”) For the Meaning Discrimination task (right panel), the variable of interest is semantic content, while language identity serves as the control vari- able. Here,
Here, A and X share the same meaning (M1) but are expressed in different languages, while A and B share the same language (L2) but express different meanings. When d(X, A) < d(X, B), the model successfully discriminates based on semantic similarity across languages despite surface form differences. Here is an example for the MD task:

• X: “The weather is nice today.” (English (L1), meaning M1)
• A: “La météo est bonne aujourd’hui” (French (L2), meaning M1: “The weather is nice today”)
• B: “Je dois acheter des provisions.” (French (L2), meaning M2: “I need to buy groceries”)

This systematic approach allows us to isolate specific properties in multilingual representations by controlling for potential confounding factors. The ABX score for each task reflects the proportion of triplets where the model correctly places items sharing the variable of interest closer together than those sharing only the control variable, providing a direct measure of how the model structures linguistic information along these dimensions.

Figure 5: Illustration of the Language Discrimination (left) and Meaning Discrimination (right) ABX tasks.
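As a concrete illustration, the following minimal sketch (ours, not the authors' released code) computes an ABX score from precomputed sentence embeddings, using cosine distance as d; the triplet data below are random placeholders.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_score(triplets):
    """Fraction of (A, B, X) triplets where X is closer to A than to B.

    A and X share the variable of interest (language for LD, meaning for
    MD); B and X share only the control variable.
    """
    correct = sum(
        1 for a, b, x in triplets
        if cosine_distance(x, a) < cosine_distance(x, b)
    )
    return correct / len(triplets)

# Toy usage: random 4-d "embeddings"; a real run would use model states.
rng = np.random.default_rng(0)
triplets = [(rng.normal(size=4), rng.normal(size=4), rng.normal(size=4))
            for _ in range(100)]
print(f"ABX score: {abx_score(triplets):.3f}")  # ~0.5 for random vectors
```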
C Correlation Analysis Between Language and Meaning Discrimination in XLM-R

This appendix provides additional analysis on the relationship between language discrimination (LD) and meaning discrimination (MD) in our model. We observe a strong overall negative correlation between LD and MD across all language pairs and checkpoints (Spearman's ρ = −0.74, p < 0.001; Pearson's r = −0.68, p < 0.001), computed at the (language pair × checkpoint) level. This suggests that, throughout training, language pairs that are more separable in form tend to be less effective in preserving semantic structure.

To verify that this effect is not simply an artifact of training progression, we examine the relationship at the final checkpoint (step 150,000) alone. The inverse correlation persists with even greater magnitude (Spearman's ρ = −0.83, p < 0.001; Pearson's r = −0.72, p < 0.001), confirming that the tradeoff between language and meaning discrimination remains pronounced even in the fully trained model. Figure 6 visualizes this relationship: the scatterplot reveals a clear monotonic trend, with almost no high–high co-occurrence (i.e., no language pairs simultaneously scoring high on both MD and LD), which supports the interpretation of a representational tradeoff.

Figure 6: Scatterplot showing the relationship between language discrimination (x-axis) and meaning discrimination (y-axis) scores for all language pairs at checkpoint 150,000 (last). Each point represents a language pair.

We further analyze the dynamics of this relationship across training by computing correlation coefficients at each checkpoint (Figure 7). Spearman's correlation remains consistently strong and statistically significant across all training stages, suggesting a stable monotonic inverse relationship. Pearson's correlation, while also consistently negative, varies in magnitude but remains significant as training progresses, indicating that the relationship is not only ordinal but approximately linear in later stages.

Figure 7: Evolution of Spearman (top) and Pearson (bottom) correlation coefficients between language and meaning discrimination scores across training checkpoints. Statistically significant correlations (p < 0.05) are highlighted.
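For reference, the correlation analysis above can be reproduced with a few lines of scipy; the arrays below are random stand-ins for the per-(language pair, checkpoint) LD and MD scores, so only the function calls, not the numbers, mirror the paper.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy stand-ins for per-(language pair, checkpoint) ABX scores; in the
# paper these come from the LD and MD evaluations.
rng = np.random.default_rng(1)
ld = rng.uniform(0.5, 1.0, size=200)
md = 1.4 - ld + rng.normal(scale=0.05, size=200)  # induce a negative trend

rho, p_s = spearmanr(ld, md)   # monotonic association
r, p_p = pearsonr(ld, md)      # linear association
print(f"Spearman rho={rho:.2f} (p={p_s:.1e}), Pearson r={r:.2f} (p={p_p:.1e})")
```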
D Discrimination scores across languages and checkpoints

To examine how discrimination evolves at the individual language level, we present heatmaps of language and meaning discrimination scores across checkpoints (Figure 8). Scores are normalised per language to highlight relative changes over time. For language discrimination (left), we observe a sharp decline during early training steps for most languages, followed by a partial recovery. However, the timing and extent of this rebound varies across languages, suggesting that some retain language-specific features more robustly. In contrast, meaning discrimination (right) increases steadily for all languages, but again at different rates, with certain languages benefiting earlier from semantically structured representations. These differences may reflect both linguistic factors and data resource disparities. Additional views of final-layer behaviour are included in Figure 9.

E Checkpoint-wise Probe Accuracy

Figure 10 shows per-language probe accuracy across checkpoints for POS, NER and NLI, highlighting the variability in when each language reaches its peak performance.

F Additional Probing Analyses

Figure 11 shows the negative relationship between ABX language discrimination and POS accuracy across languages. Higher language discrimination scores are associated with lower probing performance, consistent with the idea that strong language-specific encoding may limit generalization.

G ABX-Guided Checkpoint Selection

Given that language discrimination is a strong global predictor of probing accuracy for POS, we ask whether ABX scores can serve as lightweight, unsupervised heuristics for language-specific checkpoint selection. Specifically, we evaluate whether selecting, for each language, the checkpoint with minimal LD brings the model closer to its optimal performance, compared to using the final checkpoint uniformly.

We compare probing accuracy at the ABX-selected checkpoint to that at the final training step, measuring their respective distances from each language's best-performing checkpoint. ABX-guided selection yields a closer match to the best checkpoint in 29 out of 36 languages, with a mean improvement of 0.034 ± 0.048, and a Wilcoxon signed-rank test confirming significance over choosing the final checkpoint (p < 0.001). This suggests that LD dynamics during training can inform language-specific model selection, particularly when the final checkpoint is suboptimal.

These patterns are visualised in Figure 13, which presents per-language deltas. For each language, we compute

∆ = Final − ABX,

where positive values indicate that ABX selection yields a checkpoint closer to the best-performing one. Bars are sorted by the absolute delta, highlighting languages with the largest impact.
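A sketch of this selection rule and the accompanying significance test, under the assumption that per-language LD scores and probe accuracies are available per checkpoint (the helper names and the toy data are ours, purely illustrative):

```python
import numpy as np
from scipy.stats import wilcoxon

def select_checkpoint(ld_by_ckpt: dict) -> int:
    """Pick the checkpoint with minimal language discrimination (LD)."""
    return min(ld_by_ckpt, key=ld_by_ckpt.get)

def gaps(acc_by_ckpt: dict, ld_by_ckpt: dict, final_ckpt: int):
    """Distance to the best checkpoint for final vs. ABX-selected."""
    best = max(acc_by_ckpt.values())
    abx_ckpt = select_checkpoint(ld_by_ckpt)
    return best - acc_by_ckpt[final_ckpt], best - acc_by_ckpt[abx_ckpt]

# Toy data for 36 "languages"; real inputs are per-language curves.
rng = np.random.default_rng(2)
final_gaps, abx_gaps = [], []
for _ in range(36):
    ckpts = list(range(10_000, 160_000, 10_000))
    acc = {c: rng.uniform(0.6, 0.9) for c in ckpts}
    ld = {c: rng.uniform(0.5, 1.0) for c in ckpts}
    f, a = gaps(acc, ld, final_ckpt=150_000)
    final_gaps.append(f)
    abx_gaps.append(a)

# Paired one-sided test: are ABX gaps smaller than final-checkpoint gaps?
stat, p = wilcoxon(final_gaps, abx_gaps, alternative="greater")
print(f"Wilcoxon statistic={stat:.1f}, p={p:.3f}")
```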
H Cross-Lingual Transfer Accuracy Matrices

Figure 14 shows the full cross-lingual probing results for POS, NER, and XNLI at the final checkpoint. Each heatmap shows transfer accuracy from a source language (row) to a target language (column). The highest-performing source language for each target is highlighted in yellow.

Figure 8: Heatmaps showing the evolution of language discrimination (left) and meaning discrimination (right) (averaged across layers) across checkpoints. Scores are normalized per language to highlight the differences across checkpoints (top row). We also provide the non-normalized scores (bottom row). Each heatmap's row represents a language, and each column a checkpoint step. Bright regions indicate high relative discrimination at that training stage for the given language.

I Visualization of Language Discrimination Effects on Cross-Lingual Performance
Figure 15 illustrates the relationship between language discrimination scores and cross-lingual transfer accuracy for all source-target language pairs in our experiments. For both POS tagging and NER tasks, we observe a strong negative correlation: language pairs with higher discrimination scores (indicating more distinct linguistic forms) consistently show lower transfer performance. This visualization reinforces our regression findings that language discrimination acts as a significant negative predictor of cross-lingual transfer success.

The scatter plots reveal that when models encode languages in ways that make their forms highly distinguishable from each other, their ability to transfer knowledge between those languages for form-focused tasks such as POS and NER diminishes. Conversely, when language forms are less discriminable (more shared or mixed representations), cross-lingual transfer improves.

J ABX-Guided Source Language Selection

Inspired by our earlier use of ABX scores to guide checkpoint selection (Section G), we investigate whether ABX language discrimination can also inform source language selection in cross-lingual transfer. Specifically, for each target language, we test whether the source language with the lowest ABX language discrimination score yields the highest transfer performance.

Exact Match and Top-k Accuracy. We first compare, for each target language, the true best source (i.e., the one yielding the highest transfer accuracy) with the ABX-selected source (i.e., the one with minimal ABX LD). Exact matches occur in 2/18 (POS) and 7/18 (NER) cases. When considering the top-3 sources, ABX guidance succeeds in 6/18 (POS) and 12/18 (NER) cases, suggesting it often identifies competitive transfer candidates.

Figure 9: Heatmaps showing the evolution of language discrimination (left) and meaning discrimination (right) on the last layer, across checkpoints. Scores are normalized per language to highlight the differences across checkpoints (top row). We also provide the non-normalized scores (bottom row). Each heatmap's row represents a language, and each column a checkpoint step. Bright regions indicate high relative discrimination at that training stage for the given language.

Comparison to Random Selection. Next, we evaluate how ABX-guided selection compares to a naive random baseline. For each target, we compare the transfer accuracy of the ABX-selected source to that of 100 randomly sampled sources, and compute the proportion of wins. The ABX-guided source outperforms a random one in 73.0% ± 27.5% of trials for POS, and 84.8% ± 23.4% for NER.

Figure 16 shows the full distribution of these per-target win rates. Most values exceed 70–80%, and very few fall below the 50% chance level, indicating that ABX LD offers a consistent and effective heuristic for source selection.

Conclusion. While ABX-guided source selection does not always identify the single best transfer source, it reliably outperforms random baselines. Compared to typological or lexical similarity heuristics (which are often noisy or task-specific), ABX LD offers a simple, data-driven alternative for identifying effective source languages in cross-lingual transfer.
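The win-rate computation against the random baseline might look as follows; the dictionaries transfer_acc and ld are assumed inputs, and the tie handling is our own choice since the paper does not specify it.

```python
import numpy as np

def abx_win_rate(transfer_acc, ld, target, n_trials=100, seed=0):
    """Share of trials where the minimal-LD source beats a random source.

    transfer_acc[(src, tgt)] -> transfer accuracy; ld[(src, tgt)] -> ABX
    language discrimination. Both mappings are assumed given; names are
    illustrative.
    """
    rng = np.random.default_rng(seed)
    sources = [s for (s, t) in transfer_acc if t == target and s != target]
    abx_src = min(sources, key=lambda s: ld[(s, target)])
    wins = sum(
        transfer_acc[(abx_src, target)]
        > transfer_acc[(str(rng.choice(sources)), target)]
        for _ in range(n_trials)
    )
    return wins / n_trials

# Toy usage with two candidate sources for target "de": "en" beats "fr"
# and ties with itself, so the expected win rate is about 0.5.
acc = {("en", "de"): 0.82, ("fr", "de"): 0.74}
ld = {("en", "de"): 0.55, ("fr", "de"): 0.80}
print(abx_win_rate(acc, ld, "de"))
```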
Figure 10: Checkpoint-wise probe accuracy across languages for POS (top left), NER (right), and NLI (bottom left), normalized per language. Each row corresponds to a language, and red boxes mark the checkpoint at which that language reaches peak accuracy for the probing task. Lighter regions mean higher accuracy scores.

Figure 11: Relationship between ABX-based Language Discrimination scores and downstream probing POS accuracy, averaged across checkpoints. Each point represents a single evaluation language. The x-axis shows how well the model distinguishes that language from others (higher = more discriminable), while the y-axis shows its average performance on the downstream task. Red lines indicate linear regression fits with shaded 95% confidence intervals.

Figure 12: POS

Figure 13: Difference in performance gap to the best checkpoint for each language, comparing ABX-selected (lowest LD ABX) vs. final checkpoints for the POS task. Bars show the difference in delta (Final − ABX); positive values indicate that the ABX-selected checkpoint is closer to the best-performing one (i.e., smaller gap to optimal accuracy).

Figure 14: Cross-lingual probing accuracy at the final checkpoint for POS (top left), NER (top right), and XNLI (bottom). Each cell shows accuracy of a probe trained on the source language (row) and evaluated on the target (column). Best source for each target is highlighted in yellow.

Figure 15: Relationship between language discrimination scores and cross-lingual transfer accuracy for POS tagging (left) and NER (right) across all source-target language pairs. Each point represents a language pair, with the x-axis showing the language discrimination score and the y-axis showing transfer accuracy. The downward trend demonstrates that higher language discrimination (more distinct language forms) is associated with lower cross-lingual transfer performance for POS and NER.

Figure 16: Distribution of ABX win rates across target languages for POS (blue) and NER (orange). Dashed lines indicate average win rate per task. A value above 0.5 reflects better-than-random performance.
Resolving Conflicting Evidence in Automated Fact-Checking: A Study on Retrieval-Augmented LLMs

Ziyu Ge1∗, Yuhao Wu1∗, Daniel Wai Kit Chin1, Roy Ka-Wei Lee1 and Rui Cao2
1Singapore University of Technology and Design
2University of Cambridge
{ziyu ge, roy lee}@sutd.edu.sg, {yuhao wu, daniel chin}@mymail.sutd.edu.sg, rc990@cam.ac.uk

Abstract

Large Language Models (LLMs) augmented with retrieval mechanisms have demonstrated significant potential in fact-checking tasks by integrating external knowledge. However, their reliability decreases when confronted with conflicting evidence from sources of varying credibility. This paper presents the first systematic evaluation of Retrieval-Augmented Generation (RAG) models for fact-checking in the presence of conflicting evidence. To support this study, we introduce CONFACT (Conflicting Evidence for Fact-Checking)1, a novel dataset comprising questions paired with conflicting information from various sources. Extensive experiments reveal critical vulnerabilities in state-of-the-art RAG methods, particularly in resolving conflicts stemming from differences in media source credibility. To address these challenges, we investigate strategies to integrate media background information into both the retrieval and generation stages. Our results show that effectively incorporating source credibility significantly enhances the ability of RAG models to resolve conflicting evidence and improve fact-checking performance.

1 Introduction

Motivation. Fact-checking systems are essential tools for combating the spread of misinformation, as they help verify claims by retrieving and analyzing evidence from diverse sources [Guo et al., 2022; Nakov et al., 2021]. Modern fact-checking pipelines increasingly rely on Retrieval-Augmented Generation (RAG) frameworks, which integrate external evidence into Large Language Models (LLMs) to verify claims [Lewis et al., 2020; Guu et al., 2020]. However, a critical challenge arises when fact-checking systems encounter conflicting evidence—that is, when retrieved documents present opposing stances on a claim, often originating from sources with varying levels of credibility [Guo et al., 2022; Schlichtkrull, 2024; Hong et al., 2024].

∗These authors contributed equally.
1Dataset available at https://github.com/zoeyyes/CONFACT

Figure 1: The retrieved documents from Google to verify the claim. The retrieved documents from different media sources have different stances towards the claim.

For example, consider the claim: “Paul Pogba retired from international football in response to French President Macron's comments on Islamist terrorism”. The retrieved evidence might include conflicting documents, as shown in Figure 1, such as one from BBC2, a highly credible source, and another from Mehr News Agency3, which is flagged as untrustworthy4. To fact-check this claim accurately, a fact-checking system must not only analyze the evidence but also assess the credibility of each source—prioritizing reliable information while discounting less trustworthy content.

This challenge is exacerbated by the rapid proliferation of low-credibility content and automated misinformation generated by LLMs themselves [Chen and Shu, 2024; Wang et al., 2024a].
Fact-checking in this context requires robust systems capable of resolving conflicts in evidence while reasoning about
source credibility—capabilities that are currently underexplored in fact-checking research.

2https://www.bbc.co.uk/sport/football/54691842
3https://en.mehrnews.com/news/165168/Pogba-quits-intl-football-after-comments-from-Macron-report
4https://mediabiasfactcheck.com/mehr-news-agency/

Research Objectives. Addressing these gaps, this paper focuses on the problem of fact-checking with conflicting evidence, where retrieved documents present opposing stances on a claim. Specifically, we aim to evaluate the ability of retrieval-augmented LLMs to identify, analyze, and resolve conflicts in evidence by determining which sources to trust for claim verification. To enable this, we introduce CONFACT (Conflicting Evidence for Fact-Checking), a novel dataset designed to systematically study this challenge. Each instance in CONFACT comprises a claim paired with documents exhibiting conflicting stances, annotated with source credibility ratings.

We conduct extensive experiments to evaluate state-of-the-art RAG models on CONFACT, revealing critical limitations in their ability to reason through conflicting evidence and prioritize trustworthy sources. Motivated by these findings, we further explore strategies for incorporating media background information—such as source metadata and credibility scores—into both the retrieval and generation processes. Our results demonstrate that effectively integrating source credibility enhances the robustness of retrieval-augmented LLMs in resolving conflicting evidence for fact-checking.

Contributions. In this work, we make the following contributions:

• Dataset Creation: We introduce CONFACT, a novel dataset for studying fact-checking with conflicting evidence. The dataset includes claims paired with conflicting retrieved documents, annotated with source credibility and stance labels to facilitate systematic evaluation.

• Performance Evaluation: We conduct a comprehensive evaluation of RAG-based LLMs on CONFACT, revealing critical vulnerabilities in resolving conflicting evidence and reasoning about source credibility.

• Methodological Innovations: We propose and evaluate multiple strategies for integrating media background information into RAG pipelines, demonstrating significant improvements in fact-checking performance through effective credibility-aware reasoning.

2 Related Work

2.1 RAG for Automated Fact-Checking

Automated fact-checking (AFC) has gained significant attention in recent years [Guo et al., 2022; Nakov et al., 2021]. While LLMs have demonstrated strong performance in various Natural Language Understanding (NLU) tasks [Li et al., 2023], they remain limited in AFC, as fact-checking often requires evidence beyond the parametric knowledge stored within LLMs [Schlichtkrull et al., 2023; Thorne et al., 2018; Wang, 2017]. RAG [Lewis et al., 2020; Ram et al., 2023] facilitates the adaptation of LLMs to AFC by incorporating externally retrieved evidence into LLMs [Pan et al., 2023a; Pan et al., 2023b; Chen et al., 2024; Zhang and Gao, 2024]. However, not all retrieved evidence is reliable [Guo et al., 2022; Hong et al., 2024], and information from untrustworthy sources may contain misinformation, leading to conflicting evidence. Recent studies have shown that retrieval-augmented LLMs are particularly vulnerable to contradictions in augmented texts [Min et al., 2020; Lee et al., 2024; Chen et al., 2021; Amplayo et al., 2023].
Given the risks posed by unreliable sources, it is crucial to investigate the robustness of retrieval-augmented LLMs in AFC, particularly in handling conflicting evidence.

2.2 Source Credibility Estimation
Source credibility estimation is crucial, as not all media sources are reliable; however, this problem remains underexplored. Early works addressed this issue by estimating media credibility through analysis of fake news records associated with sources [Mukherjee and Weikum, 2015; Popat et al., 2016; Popat et al., 2017]. The authors in [Baly et al., 2018] introduced the first dataset with human-annotated factuality ratings of news sources and utilized various features, such as Wikipedia information and source URLs, for credibility estimation. Subsequent studies proposed more robust models using diverse features of media sources [Zhang et al., 2019; Baly et al., 2020; Hounsel et al., 2020]. In contrast to these classification approaches, the work in [Schlichtkrull, 2024] emphasized the generation of detailed background checks for media sources.

Despite these advancements, the impact of estimated source credibility on fact-checking has received limited attention. To date, only [Schlichtkrull, 2024] conducted a small-scale experiment with 20 claims, examining whether incorporating source background checks could benefit claim verification. In this paper, we extend this line of inquiry by comprehensively evaluating how media source backgrounds can facilitate fact-checking models, and exploring optimal strategies for integrating source credibility information into these systems.

3 CONFACT Dataset

The CONFACT dataset is specifically designed to facilitate the study of fact-checking in scenarios where conflicting evidence is retrieved from sources of varying credibility, thereby addressing a critical gap in existing datasets like AVERITEC [Schlichtkrull et al., 2023] and FactCheckQA [Bashlovkina et al., 2023]. The construction involved two key steps: 1) identifying claims likely to retrieve conflicting evidence, particularly those frequently associated with misinformation from untrustworthy sources, and 2) ensuring that retrieved documents for claim verification present conflicting stances.

3.1 Claim Collection

To identify claims likely to retrieve conflicting evidence, we utilized two widely used fact-checking datasets:

• AVERITEC. This dataset [Schlichtkrull et al., 2023] contains 4,568 real-world claims fact-checked by 50 organizations, categorized as Conflicting Evidence/Cherry-picking, Not Enough Evidence, Refuted, and Supported. We selected claims labeled as Refuted or Supported, which involve clear factuality.

• FactCheckQA. This dataset [Bashlovkina et al., 2023] includes 20,871 claims annotated as true, false, or other. We focused on claims labeled as true or false, which provide definitive factuality.

Claims from these datasets were merged5, covering diverse topics. This process resulted in 3,180 claims: 566 from AVERITEC and 2,614 from FactCheckQA.

3.2 Conflicting Evidence Collection

To facilitate the study of conflicting evidence in fact-checking, we retrieved relevant documents for claim verification. Instead of directly querying Google with the original claims, we transformed each claim into a binary question regarding its veracity using GPT-46, following the approach outlined in [Schlichtkrull, 2024; Bashlovkina et al., 2023]. For example, the claim “Nigeria had a population of 45 million at the time of independence” was converted into the question “Did Nigeria have a population of 45 million at the time of independence?”.
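A hedged sketch of this conversion step, with a placeholder `llm` callable standing in for the GPT-4 API call (the prompt wording here is illustrative, not the authors' exact prompt):

```python
CONVERT_PROMPT = (
    "Rewrite the following claim as a yes/no question about its veracity. "
    "Return only the question.\n\nClaim: {claim}"
)

def claim_to_question(claim: str, llm) -> str:
    """Convert a claim into a binary verification question.

    `llm` is any callable mapping a prompt string to a completion,
    e.g., a thin wrapper around a GPT-4 API call.
    """
    return llm(CONVERT_PROMPT.format(claim=claim)).strip()

# Example with a stub in place of a real model call:
stub = lambda p: ("Did Nigeria have a population of 45 million "
                  "at the time of independence?")
print(claim_to_question(
    "Nigeria had a population of 45 million at the time of independence",
    stub))
```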
Each question was then submitted as a query on Google, from which we retrieved the top 10 web pages7. To ensure reproducibility, the retrieved web pages were archived using the Wayback Machine8.

3.3 Conflicting Evidence Annotation

Next, we annotated the stances of the collected evidence documents using a two-stage process designed to identify conflicting viewpoints.

Stage 1: GPT-4 Annotation. We employed GPT-4 to classify the stance of each document with respect to its corresponding claim as either supporting or refuting. To enhance robustness, we used three distinct prompt variations: (i) classify the stance based solely on the document URL; (ii) classify the stance using the retrieved webpage content; and (iii) prompt GPT-4 to provide its reasoning prior to making a classification. The specific prompts are detailed in Appendix A. The final stance for each document was determined through majority voting across these three approaches. We defined a claim as exhibiting conflicting evidence if it was associated with documents classified as both supporting and refuting. Out of 3,180 claims, 611 (17.8%) met this criterion and advanced to the next stage.

Stage 2: Human Annotation. Human annotators subsequently validated the conflicting evidence identified in Stage 1. For each claim, annotators reviewed pairs of documents—one labeled as supporting and another as refuting by GPT-4. The annotators verified the stances and assessed the credibility of the sources on a 5-point scale (1 = least credible, 5 = most credible). Additionally, they categorized each source into one of the following groups: Mainstream News, Government, Non-profit, Academic, Social Media, or Other. Each document pair was independently reviewed by two annotators, with any disagreements resolved by a third annotator. Detailed annotation guidelines are provided in Appendix H.

5Claims from social media platforms were excluded as they are less findable by search engines.
6https://openai.com/index/gpt-4/
7Searches and scraping were conducted within a single week (September 12–19, 2024).
8https://web.archive.org/

Split   Labels            # Sources
ModC    125 Yes; 486 No   2469
HumC    51 Yes; 236 No    1418

Table 1: Statistics of the ModC and HumC splits of our CONFACT.

3.4 Dataset Analysis

The final CONFACT dataset consists of two splits: Model Conflicts (ModC) and Human Conflicts (HumC). ModC comprises claims with conflicting evidence identified by GPT-4 during Stage 1. Given that GPT-4 is a powerful closed-source model, this split contains conflicts that may be particularly challenging for most open-source models to resolve. HumC consists of claims where the evidence is conflicting from a human perspective, aiming to assess how effectively fact-checking systems can mitigate human uncertainty when verifying such evidence. The inter-annotator agreement for HumC, as measured by Krippendorff's Alpha, was 0.586—indicating strong agreement while also reflecting the general confusion among annotators when dealing with conflicting documents. Following prior work [Bashlovkina et al., 2023; Schlichtkrull, 2024], we further formulate the claim verification task as a binary question regarding claim veracity, making it more naturally suited for retrieval-augmented LLMs. The binary questions were generated with GPT-4 as discussed in Section 3.2.
Claims labeled as true/supported correspond to questions with Yes as answers, and those labeled as false/refuted correspond to questions with No as answers. The statistics of CONFACT are provided in Table 1, and an illustration of a data sample from CONFACT is provided in Appendix B.

An analysis of document credibility revealed key challenges in assessing source credibility. Annotators frequently overestimated the reliability of Mainstream News sources, with 95.8% of these sources rated as credible or neutral. Cross-referencing these ratings with expert annotations from Media Bias / Fact Check (MBFC)9 showed that 30% of misleading sources were flagged as unreliable by MBFC, while annotators classified 69.8% of these as trustworthy. These findings underscore the challenges of accurately assessing credibility and highlight the importance of addressing conflicting evidence in fact-checking tasks. More details on the distribution of source credibility over source types are available in Appendix F.

4 Methodology

In this section, we evaluate retrieval-augmented LLMs on the CONFACT dataset to assess their robustness in fact-checking when confronted with conflicting evidence. We begin by formally defining the task in Section 4.1. Next, in Section 4.2, we describe baseline retrieval-augmented LLMs for fact-checking. Finally, Section 4.3 presents strategies for incorporating media source background information at various stages of the RAG pipeline.

4.1 Problem Definition

Given a claim verification question Q with its relevant N retrieved documents {D_n}_{n=1}^{N} from CONFACT, a retrieval-augmented LLM is expected to generate an answer A to the question that reflects the veracity of the original claim against available evidence. The system is evaluated on its accuracy in correctly predicting the veracity of claims (i.e., whether A exactly matches the ground-truth label Â for the converted claim verification question). In addition, we report the Macro-F1 score as an auxiliary metric to assess performance across classes, particularly given the imbalanced nature of the dataset.

9https://mediabiasfactcheck.com/

Figure 2: (a) illustrates a general framework of RAG methods involving three stages: retrieval, ranking and answer generation. (b–d) demonstrate our source-aware retrieval-augmented LLMs, incorporating source background information in three stages of the general RAG framework.

A typical retrieval-augmented fact-checking workflow consists of three main stages: retrieval, ranking, and answer generation [Wang et al., 2024b; Gao et al., 2023], as illustrated in Figure 2(a). A minimal sketch of this workflow follows the list below.

• Retrieval: Given a claim verification question Q, a RAG model retrieves relevant documents from an external knowledge base, represented as {D_n}_{n=1}^{N}. Retrieved documents are provided with CONFACT to ensure reproducibility, as retrieval is time-varying.

• Ranking: Retrieved documents are chunked into short passages, and a ranking function selects the top-K most relevant paragraphs {P_k}_{k=1}^{K} for fact-checking.

• Answer Generation: The selected paragraphs {P_k}_{k=1}^{K} are passed to an LLM to generate the final answer A.
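A minimal sketch of the three-stage workflow, with `rank` and `generate` left as placeholders for, e.g., a BM25 ranker and a prompted LLM (the stub implementations are purely illustrative, not the paper's components):

```python
from typing import Callable, List

def fact_check(question: str,
               documents: List[str],
               rank: Callable[[str, List[str]], List[str]],
               generate: Callable[[str, List[str]], str],
               k: int = 5) -> str:
    """Three-stage RAG fact-checking: retrieval -> ranking -> generation.

    `documents` stand in for the pre-retrieved evidence shipped with
    CONFACT (retrieval stage); `rank` and `generate` are placeholders.
    """
    paragraphs = rank(question, documents)[:k]  # ranking stage: keep top-k
    return generate(question, paragraphs)       # answer generation stage

# Toy stubs: rank by word overlap; answer "Yes" iff any passage says "yes".
rank_stub = lambda q, docs: sorted(
    docs, key=lambda d: -len(set(q.split()) & set(d.split())))
gen_stub = lambda q, ps: "Yes" if any("yes" in p.lower() for p in ps) else "No"
print(fact_check("Did X happen?", ["source says yes", "unrelated text"],
                 rank_stub, gen_stub))
```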
4.2 Baseline Retrieval-Augmented LLMs

Baseline retrieval-augmented LLMs adhere to the standard workflow illustrated in Figure 2(a). In this process, the most relevant set of paragraphs {P_k}_{k=1}^{K} are extracted and used as input for answer generation. We evaluate multiple prompting strategies for leveraging these augmented contexts:

• Direct Answer (DirA.): The K selected paragraphs are provided to the LLM along with the claim verification question, and the model directly generates an answer.

• Majority Vote (MajV.): The model first predicts answer candidates A_k for each paragraph P_k. A majority vote is then conducted to select the final answer.

• Discern and Answer (DisA.): Inspired by [Hong et al., 2024], an explicit instruction is added to filter out misleading passages before generating an answer.

• Chain-of-Thought (CoT): This strategy prompts the LLM to generate a rationale before predicting the answer [Wei et al., 2022], improving reasoning in multi-step verification tasks.

While these strategies perform well in standard question-answering tasks, they struggle when the retrieved evidence exhibits conflicting viewpoints. For instance, DirA. may conflate misinformation with factual content, MajV. fails if misleading sources outnumber reliable ones, and DisA. depends on the LLM's ability to filter unreliable information, which is not always effective. These limitations motivate the incorporation of media background knowledge.

4.3 Retrieval-Augmented LLMs with Media Source Backgrounds

To improve fact-checking performance in the presence of conflicting evidence, we propose integrating background information about each media source at different stages of the RAG pipeline.

Media Source Background Provider

For each retrieved document, we extract background information about its source. The MBFC website serves as our primary source background provider (GT-MB), offering expert annotations on media bias and factual reliability. If a source is available in MBFC, its credibility rating is retrieved. Otherwise, the background is marked as missing.

To extend coverage beyond MBFC, we introduce a Hybrid-MB provider, combining MBFC annotations with an LLM-based background generator [Schlichtkrull, 2024]. The generator first retrieves real-time information about the source's publisher, past credibility ratings, and history of misinformation via Google Search APIs10. It then processes this information using an in-context learning approach with a set of pre-defined prompts, generating a credibility summary (denoted as B) that includes factual accuracy, bias, and misinformation history (refer to Appendix I for the designed prompts).

Although the generated source credibility description is comprehensive, it may not be directly applicable at all stages of retrieval-augmented LLMs. Therefore, we further map this description into a credibility score s_cred ∈ (0, 1) using a prediction model π_θ:

s_cred = π_θ(B).    (1)

The model is trained on [Baly et al., 2018], which provides labeled credibility supervision. More details about the credibility score prediction are provided in Appendix I.

10https://developers.google.com/custom-search/v1/overview
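The paper does not spell out the architecture of π_θ; as one plausible stand-in for Eq. (1), a TF-IDF plus logistic-regression pipeline trained on labeled credibility data can map a background description B to a score in (0, 1):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for pi_theta: background text B -> s_cred in (0, 1).
pi_theta = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# Toy supervision; the paper trains on labeled media-credibility data
# in the spirit of [Baly et al., 2018].
backgrounds = [
    "Long record of accurate reporting; rated high factuality.",
    "Repeatedly published fabricated stories; flagged for misinformation.",
]
labels = [1, 0]  # 1 = credible, 0 = not credible
pi_theta.fit(backgrounds, labels)

def credibility_score(background: str) -> float:
    """s_cred = pi_theta(B), taken here as P(credible | B)."""
    return float(pi_theta.predict_proba([background])[0, 1])

print(credibility_score("Known for publishing unverified viral claims."))
```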
Media Background Incorporation

We explore incorporating source credibility information at three stages of the RAG pipeline:

1. Source Filtering in Retrieval (SF): This aims to filter out non-credible information at the document level. Documents from sources described as low-credibility according to B are filtered before ranking (Figure 2(b)) (more details in Appendix G.1). The remaining documents are ranked, and the top-K paragraphs are used for answer generation.

2. Credibility Weighting in Ranking (CW): Instead of filtering at the document level, credibility scores influence ranking (Figure 2(c)). The final ranking score for a paragraph P_m is computed as

s_m = s_rel,m + β · s_cred,m,    (2)

where s_rel,m is the relevance score and β balances relevance and credibility. We considered both a soft (CW_soft) and a hard (CW_hard) setting for leveraging the credibility score, where CW_hard further maps s_cred into 0 and 1. Specifically, if s_cred is below a threshold γ, it is mapped to 0; otherwise, to 1. A sketch of this weighted ranking appears at the end of this subsection.

3. Source Backgrounds Augmentation in Generation (SBA): Source backgrounds are included at the answer generation stage (Figure 2(d)). We evaluate four strategies:

• SBA_dir: Concatenates each paragraph with its source background to form source-aware paragraphs ([P_k, B_k]). The K source-aware paragraphs are fed to LLMs for a direct answer.

• SBA_CoT: Uses a CoT prompt with source-aware paragraphs.

• SBA_exp: Receives source-aware paragraphs and uses explicit instructions to filter unreliable sources.

• SBA_ens: Uses a two-stage process where candidate answers are generated per paragraph, and conflicts are resolved based on source-aware rationales:

A_k, R_k = LLM([P_k, B_k], Q),    (3)
A* = LLM([A_1, R_1, . . . , A_K, R_K], Q),    (4)

where A* is the final answer after considering all rationales. Refer to Appendix G.2 for the designed prompts.
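The credibility-weighted ranking of Eq. (2), including the hard binarization with threshold γ, can be sketched as below; β = 0.8 and γ = 0.3 follow the values reported in Appendix C, while the function and variable names are ours.

```python
def rank_paragraphs(paragraphs, s_rel, s_cred, beta=0.8, hard=False, gamma=0.3):
    """Rank paragraph indices by s_m = s_rel_m + beta * s_cred_m (Eq. 2).

    In the hard setting (CW_hard), s_cred is binarized with threshold
    gamma: scores below gamma map to 0, the rest to 1.
    """
    def score(m):
        cred = (1.0 if s_cred[m] >= gamma else 0.0) if hard else s_cred[m]
        return s_rel[m] + beta * cred
    return sorted(range(len(paragraphs)), key=score, reverse=True)

# Toy usage: paragraph 1 is most relevant but least credible, so it
# drops to the bottom once credibility is weighted in.
paras = ["p0", "p1", "p2"]
print(rank_paragraphs(paras, s_rel=[0.4, 0.9, 0.6], s_cred=[0.9, 0.1, 0.8]))
```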
5 Experiments

5.1 Main Experimental Results

We conducted extensive experiments on the ModC and HumC splits of CONFACT (for implementation details, see Appendix C) to evaluate the performance of retrieval-augmented LLMs in fact-checking scenarios involving conflicting evidence. Our evaluation compares baseline RAG models that do not consider media source backgrounds (Baseline) against models that integrate source credibility data at different stages of the pipeline, using the strategies introduced in Section 4.3 (i.e., Source Filtering (SF), Credibility Weighting (CW[·]), and Source Background Augmentation (SBA[·])). The experiment results are presented in Table 2. Below, we analyze key findings from our experiments by addressing three research questions.

RQ1: How do vanilla retrieval-augmented LLMs perform when confronted with conflicting evidence from sources of varying credibility?

As shown in the first block of Table 2, vanilla RAG models exhibit difficulties when dealing with conflicting evidence. Their performance is notably limited, as reflected by lower F1 scores, suggesting challenges in correctly classifying the minority class (i.e., claims where the majority of retrieved evidence is misleading). This is primarily due to three key issues. First, hallucination: when presented with conflicting sources, LLMs sometimes generate factually incorrect responses that do not accurately reflect the retrieved evidence. Second, over-reliance on high-frequency responses: the Majority Vote setting biases the system toward the dominant source perspective, often amplifying misinformation if it is overrepresented in retrieval. Third, inability to distinguish misinformation from reliable sources: since vanilla RAG models do not assess source credibility, they treat all retrieved documents as equally valid, leading to incorrect fact-checking outputs. Notably, using GPT-4o in RAG methods (Appendix J) showed no clear advantage over open-source models, highlighting the problem's complexity.

Among the baseline answering strategies, Discern-and-Answer (DisA.) and Chain-of-Thought (CoT) prompting achieve better results than direct answer generation. This improvement suggests that prompting LLMs to explicitly reason about retrieved content helps mitigate the influence of unreliable sources. However, despite these improvements, the overall accuracy and F1 scores remain suboptimal, highlighting the need for more effective mechanisms to incorporate source credibility into the fact-checking process.

RQ2: Does incorporating media source backgrounds improve fact-checking performance in RAG-based LLMs?

Incorporating media background information into RAG models generally leads to improved performance, although the degree of improvement varies across models. Specifically, LLaMA-3.1 shows a 10% absolute improvement in F1 score, while Mistral achieves a 5% accuracy improvement when media backgrounds are integrated on the ModC split. Similar improvements are observed on HumC with the incorporation of source credibility information. These results indicate that providing source credibility cues helps LLMs resolve conflicting evidence more effectively.

However, not all models benefit equally from media backgrounds. Specifically, Qwen-2 exhibits the least improvement, which we attribute to its weaker long-context processing capabilities. The inclusion of source background information significantly increases input length. In models that do not handle extended sequences efficiently, this can dilute relevant context, increase token misalignment, and disrupt self-attention mechanisms, ultimately leading to suboptimal fact-checking performance. This finding suggests that as LLM architectures improve in handling long inputs, the benefits of integrating source-aware fact-checking will likely become more pronounced.

RQ3: What is the most effective strategy for incorporating media source backgrounds into retrieval-augmented LLMs?

Different strategies for integrating media backgrounds show distinct patterns of performance across RAG models.
                      ------------------ ModC ------------------   ------------------ HumC ------------------
Set.   Meth.      LLaMA-3.1      Qwen-2        Mistral        LLaMA-3.1      Qwen-2        Mistral
                  Acc.   F1     Acc.   F1     Acc.   F1      Acc.   F1     Acc.   F1     Acc.   F1
Bsl.   DirA.      71.36  66.13  78.40  45.96  77.58  70.82   70.03  63.81  77.70  67.83  75.96  67.87
       MajV.      79.87  45.96  79.71  45.90  79.54  45.83   82.93  49.07  82.93  49.07  82.93  49.07
       DisA.      72.18  63.26  78.89  69.54  76.10  70.70   69.69  57.87  80.14  70.04  77.35  71.16
       CoT        77.58  68.50  72.34  67.74  75.46  71.52   75.96  65.50  72.47  64.32  73.87  67.99
GT     SF         71.52  66.28  78.40  69.87  77.74  70.98   69.34  62.97  77.70  67.83  76.31  68.19
       CW_soft    67.76  62.75  75.61  65.57  75.12  67.82   67.25  60.45  73.87  61.65  73.52  64.10
       CW_hard    68.09  63.62  74.30  64.59  74.47  67.74   68.99  62.93  73.17  61.50  74.22  66.01
       SBA_dir    73.16  67.96  79.21  70.51  79.87  73.06   72.82  66.46  78.40  68.11  78.40  69.80
       SBA_CoT    78.07  68.32  71.85  67.83  76.92  73.94   78.40  66.07  72.47  67.75  77.00  72.44
       SBA_exp    74.47  64.57  80.03  70.13  73.65  68.98   71.40  58.31  80.49  67.44  75.26  69.20
       SBA_ens    76.76  68.35  67.92  64.39  66.61  64.53   75.61  65.19  67.94  63.13  68.29  64.79
Hyb.   SF         67.10  62.30  78.07  68.96  75.45  68.62   64.46  58.72  77.70  65.65  74.91  66.31
       CW_soft    70.05  65.00  77.41  68.35  77.25  70.02   70.73  64.15  77.00  66.05  77.35  67.85
       CW_hard    70.38  65.49  77.74  68.96  76.92  69.72   70.38  64.36  77.35  67.14  75.26  66.30
       SBA_dir    74.96  69.30  79.05  70.20  80.03  73.46   74.91  68.06  78.75  68.06  78.75  70.76
       SBA_CoT    75.29  65.78  73.00  68.69  75.29  71.90   73.87  61.20  74.22  68.26  75.96  71.14
       SBA_exp    76.10  64.53  80.69  70.50  72.83  70.50   75.26  60.88  82.93  70.56  74.91  71.47
       SBA_ens    76.76  67.75  66.78  63.25  67.27  63.81   74.91  63.39  64.11  58.01  66.55  60.07

Table 2: Performance of retrieval-augmented LLMs on the ModC and HumC splits of our CONFACT dataset. Baseline (Bsl.) denotes models without incorporating source backgrounds. GT-MB (GT) represents models that only consider incorporating source backgrounds with ground-truth human annotations. Hybrid-MB (Hyb.) denotes models incorporating both human-annotated source backgrounds and automatically generated media backgrounds. The best results (the highest summation of Acc. and F1) are underlined in the original paper.

Our results indicate that the most effective approach is to incorporate media backgrounds at the answer generation stage, combined with a structured reasoning strategy such as CoT prompting or explicit instructions to discern unreliable sources.

In contrast, strategies that introduce media backgrounds in earlier stages, such as retrieval or ranking, are less effective. This is likely due to information loss when converting detailed textual source descriptions into a single credibility level or a credibility score. The credibility score predictor, despite being trained on expert-annotated data, does not always provide precise mappings between background descriptions and factual reliability, leading to potential misclassifications.

Furthermore, credibility-aware ranking strategies (CW_soft and CW_hard) sometimes degrade performance. This occurs because credibility and relevance are not always aligned: highly credible sources may not contain the most pertinent evidence for verifying a claim. Additionally, credibility-based filtering risks removing crucial counter-evidence. Fact-checking often requires evaluating misleading claims in context, and aggressively filtering out sources deemed unreliable may leave models without the necessary contrastive information to identify misinformation. As a result, ranking methods that overly rely on credibility scores can paradoxically reduce fact-checking accuracy by limiting the model's ability to reason over conflicting viewpoints.

Comparing GT-MB (which uses expert-verified MBFC credibility labels) and Hybrid-MB (which estimates credibility for missing sources using LLM-based retrieval), we do not observe an obvious superiority of Hybrid-MB. This indicates that current source credibility estimation methods remain limited, which can add noise to source-credibility-aware RAG methods. Manually curated credibility assessments are still more reliable than automated credibility prediction. A detailed error analysis is provided in Appendix E.

Summary of Findings. Our results demonstrate that retrieval-augmented LLMs struggle with conflicting evidence when source credibility is not explicitly considered. Integrating media backgrounds improves performance, but the effectiveness of this approach depends on how and where the information is introduced within the pipeline. The most effective strategy is incorporating background information at the answer generation stage, where structured reasoning techniques such as Chain-of-Thought prompting or explicit instructions to discern unreliable sources resolve conflicting claims more effectively. In contrast, relying solely on credibility-aware filtering or ranking may inadvertently introduce biases or remove crucial context needed for fact-checking.

Our findings also reveal a fundamental trade-off between using expert-verified credibility data (GT-MB) and automated credibility estimation (Hybrid-MB).
While expert annotations provide higher reliability, automated credibility inference allows for broader source coverage and scalability. Improving the accuracy of LLM-based credibility prediction remains a key open challenge for future research. These insights contribute to the broader field of
AI-driven fact-checking by demonstrating both the potential and limitations of leveraging source credibility to enhance retrieval-augmented generation for misinformation detection.

5.2 Ablation Studies

To further understand the impact of media source backgrounds on fact-checking with conflicting evidence, we conduct ablation studies focusing on the GT-MB models on HumC, as they avoid noise from automated source estimation (Hybrid-MB) and HumC is more challenging, as shown in Section 5.1.

Chunking  Meth.      LLaMA-3.1      Qwen-2
                     Acc.   F1      Acc.   F1
Top-10    SBA_dir    71.78  65.04   79.09  69.76
Para.     SBA_CoT    76.66  66.02   57.14  67.34
          SBA_exp    74.22  63.06   65.16  67.86
          SBA_ens    74.56  63.62   50.52  48.43
Top-5     SBA_dir    62.37  57.52   72.82  62.28
Sent.     SBA_CoT    67.94  58.84   61.32  62.28
          SBA_exp    67.25  54.71   75.12  64.18
          SBA_ens    68.99  59.76   54.01  53.90

Table 3: Ablation results when using the top-10 pieces of augmented context at the paragraph level (Para.) and the top-5 sentence-level (Sent.) chunking strategy.

        w/o Background   GT      Hybrid
Acc.    49.45            50.34   48.47

Table 4: Human performance on CONFACT without background information, provided with GT background information, and with hybrid background information.

Here, we consider the most powerful way (i.e., in the answer generation stage) to incorporate source credibility information.

Effect of the Number of Augmented Paragraphs. We assess whether increasing the number of retrieved paragraphs improves fact-checking performance by expanding the evidence set from 5 (Table 2) to 10 documents (the first block in Table 3). Surprisingly, this does not enhance accuracy, as models may struggle with long inputs and be distracted by irrelevant information. The main reasons are twofold: (1) increasing the number of retrieved documents introduces lower-relevance evidence, which makes it harder for the model to discern factual correctness; (2) longer input sequences overwhelm LLM attention mechanisms, leading to poorer factual reasoning. These findings suggest that retrieving fewer but more relevant documents is more effective than increasing retrieval breadth when dealing with conflicting claims.

Impact of Chunking Strategies. We compare paragraph-level (Table 2) vs. sentence-level chunking (the second block in Table 3) for retrieved evidence in the fact-checking pipeline. Paragraph-level chunking consistently outperforms sentence-level chunking, as fragmented sentences often lack sufficient context to resolve factual disputes. However, longer paragraph inputs increase computational overhead. A potential solution is de-contextualization methods, where sentences are supplemented with surrounding context before being processed by LLMs. Future work could explore such strategies to maintain high-context resolution while minimizing input length constraints.

5.3 Human Evaluation

Beyond the quantitative analysis in Section 5.1, we conduct a qualitative study to assess human fact-checking performance under conflicting evidence. We select 20 fact-checking claims from CONFACT and recruit four NLP researchers as human evaluators. Each human evaluator evaluates 10 claims across three settings, mirroring Section 5.1: (1) without any source background, (2) with curated media backgrounds from MBFC (GT), and (3) with both MBFC-curated and automatically generated source backgrounds (Hybrid). The accuracy of human evaluations is summarized in Table 4.
Our findings indicate that humans often respond with “unsure” when faced with conflicting evidence, mirroring model performance: while GT media backgrounds boost accuracy,
hybrid sources (including AI-generated backgrounds) tend to introduce noise and mislead evaluators. This suggests that unreliable or AI-generated context can impair judgment rather than enhance it.

These results have critical implications for real-world fact-checking organizations. Fact-checkers must adopt rigorous source verification methods to mitigate misinformation risks, and automated tools should prioritize high-fidelity data curation over broad retrieval to reduce misleading noise. Moreover, AI-generated evidence should be treated as assistive rather than authoritative, with human oversight ensuring effective verification of conflicting claims.

Overall, this human evaluation highlights the complexity of fact-checking amid conflicting evidence, reinforcing the need for high-quality evidence retrieval and robust verification mechanisms in both human and automated fact-checking systems.

6 Conclusion

This study presents a systematic evaluation of RAG models in fact-checking scenarios involving conflicting evidence—a critical yet underexplored challenge. To support this, we introduce the CONFACT dataset, which pairs fact-checking claims with contradictory information from sources of varying credibility. Our analysis indicates that existing RAG models struggle when faced with conflicting evidence, often ascribing undue reliability to less credible sources.

To address this issue, we integrate background information from the media sources into the RAG pipelines. Our findings reveal that incorporating source credibility signals during answer generation significantly enhances performance by reducing the models' susceptibility to misinformation. However, challenges remain, particularly in accurately assessing source credibility and mitigating biases in evidence retrieval.

These findings have practical implications for real-world fact-checking. Automated systems must go beyond naive retrieval and adopt rigorous source validation to avoid amplifying unreliable claims. Moreover, AI-assisted verification should complement human expertise, ensuring that models serve as tools to support rather than replace professional fact-checkers. Future work should focus on refining automated credibility assessments, improving evidence ranking, and reasoning under uncertainty.

Despite the progress demonstrated, we acknowledge several limitations of our current framework, including biases in source labeling and the absence of more advanced baseline systems. These are discussed in Appendix D.

Overall, our study confronts the complexities of conflicting evidence in fact-checking and underscores the urgent need for trustworthy, AI-driven verification systems. Addressing these challenges is essential for strengthening resilience against misinformation and ensuring the reliability of AI-assisted fact-checking in journalistic, policy, and public discourse contexts.

Acknowledgement

This research/project is supported by the National Research Foundation, Singapore under its National Large Language Models Funding Initiative (AISG Award No: AISG-NMLP-2024-004). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. This research/project is also supported by the Ministry of Education, Singapore, under its SUTD-SMU Joint Grant Call.

References
[Amplayo et al., 2023] Reinald Kim Amplayo, Kellie Webster, Michael Collins, Dipanjan Das, and Shashi Narayan. Query refinement prompts for closed-book long-form QA. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, pages 7997–8012, 2023.
[Baly et al., 2018] Ramy Baly, Georgi Karadzhov, Dimitar Alexandrov, James R. Glass, and Preslav Nakov. Predicting factuality of reporting and bias of news media sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3528–3539, 2018.

[Baly et al., 2020] Ramy Baly, Georgi Karadzhov, Jisun An, Haewoon Kwak, Yoan Dinkov, Ahmed Ali, James R. Glass, and Preslav Nakov. What was written vs. who read it: News media profiling using text analysis and social media context. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, pages 3364–3374, 2020.

[Bashlovkina et al., 2023] Vasilisa Bashlovkina, Zhaobin Kuang, Riley Matthews, Edward Clifford, Yennie Jun, William W. Cohen, and Simon Baumgartner. Trusted source alignment in large language models. CoRR, abs/2311.06697, 2023.

[Chen and Shu, 2024] Canyu Chen and Kai Shu. Can LLM-generated misinformation be detected? In The Twelfth International Conference on Learning Representations, ICLR, 2024.

[Chen et al., 2021] Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP, pages 4472–4485, 2021.

[Chen et al., 2024] Jifan Chen, Grace Kim, Aniruddh Sriram, Greg Durrett, and Eunsol Choi. Complex claim verification with evidence retrieved in the wild. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL, pages 3569–3587, 2024.

[Dubey et al., 2024] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[Gao et al., 2023] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. CoRR, abs/2312.10997, 2023.

[Guo et al., 2022] Zhijiang Guo, Michael Sejr Schlichtkrull, and Andreas Vlachos. A survey on automated fact-checking. Trans. Assoc. Comput. Linguistics, 10:178–206, 2022.

[Guu et al., 2020] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR, 2020.

[Hong et al., 2024] Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, and Joyce Jiyoung Whang. Why so gullible? Enhancing the robustness of retrieval-augmented models against counterfactual noise. In Findings of the Association for Computational Linguistics: NAACL, pages 2474–2495, 2024.

[Hounsel et al., 2020] Austin Hounsel, Jordan Holland, Ben Kaiser, Kevin Borgolte, Nick Feamster, and Jonathan R. Mayer. Identifying disinformation websites using infrastructure features. In 10th USENIX Workshop on Free and Open Communications on the Internet, FOCI, 2020.
[Kwon and others, 2023] Woosuk Kwon et al. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

[Lee et al., 2024] Yoonsang Lee, Xi Ye, and Eunsol Choi. AmbigDocs: Reasoning across documents on different entities under the same name. CoRR, abs/2404.12447, 2024.

[Lewis et al., 2020] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, virtual, 2020.

[Li et al., 2023] Dongfang Li, Zetian Sun, Xinshuo Hu, Zhenyu Liu, Ziyang Chen, Baotian Hu, Aiguo Wu, and Min Zhang. A survey of large language models attribution. CoRR, abs/2311.03731, 2023.

[Min et al., 2020] Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 5783–5797, 2020.

[Mistral.AI, 2023] Mistral.AI. La Plateforme, 2023.

[Mukherjee and Weikum, 2015] Subhabrata Mukherjee and Gerhard Weikum. Leveraging joint interactions for credibility analysis in news communities. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM, pages 353–362, 2015.

[Nakov et al., 2021] Preslav Nakov, David P. A. Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. Automated fact-checking for assisting human fact-checkers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI, pages 4551–4558, 2021.

[Pan et al., 2023a] Liangming Pan, Xinyuan Lu, Min-Yen Kan, and Preslav Nakov. QACheck: A demonstration system for question-guided multi-hop fact-checking. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 264–273, 2023.

[Pan et al., 2023b] Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. Fact-checking complex claims with program-guided reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL, pages 6981–7004, 2023.

[Popat et al., 2016] Kashyap Popat, Subhabrata Mukherjee, Jannik Strötgen, and Gerhard Weikum. Credibility assessment of textual claims on the web. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM, pages 2173–2178, 2016.

[Popat et al., 2017] Kashyap Popat, Subhabrata Mukherjee, Jannik Strötgen, and Gerhard Weikum. Where the truth lies: Explaining the credibility of emerging claims on the web and social media. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1003–1012, 2017.

[Ram et al., 2023] Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. In-context retrieval-augmented language models. Trans. Assoc. Comput. Linguistics, 11:1316–1331, 2023.
[Schlichtkrull et al., 2023] Michael Schlichtkrull, Zhijiang Guo, and Andreas Vlachos. AVeriTeC: A dataset for real-world claim verification with evidence from the web. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS, 2023.
[Schlichtkrull, 2024] Michael Schlichtkrull. Generating media background checks for automated source critical reasoning. CoRR, abs/2409.00781, 2024.
[Thorne et al., 2018] James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: A large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, pages 809–819, 2018.
[Wang et al., 2024a] Lionel Z. Wang, Yiming Ma, Renfei Gao, Beichen Guo, Zhuoran Li, Han Zhu, Wenqi Fan, Zexin Lu, and Ka Chung Ng. MegaFake: A theory-driven dataset of fake news generated by large language models. CoRR, abs/2408.11871, 2024.
[Wang et al., 2024b] Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, and Xuanjing Huang. Searching for best practices in retrieval-augmented generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 17716–17736, 2024.
[Wang, 2017] William Yang Wang. "Liar, liar pants on fire": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 422–426, 2017.
[Wei et al., 2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS, 2022.
[Yang et al., 2024] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024.
[Zhang and Gao, 2024] Xuan Zhang and Wei Gao. Reinforcement retrieval leveraging fine-grained feedback for fact checking news claims with black-box LLM. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), pages 13861–13873, 2024.
[Zhang et al., 2019] Yifan Zhang, Giovanni Da San Martino, Alberto Barrón-Cedeño, Salvatore Romeo, Jisun An, Haewoon Kwak, Todor Staykovski, Israa Jaradat, Georgi Karadzhov, Ramy Baly, Kareem Darwish, James R. Glass, and Preslav Nakov. Tanbih: Get to know what you are reading. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 223–228, 2019.

APPENDIX

A Prompts for Annotation with GPT-4

A. System Prompt
You are an expert in fact-checking. Analyze the claim, evidence, and claim date. Consider the timeline and disregard post-claim events. Determine if the evidence supports, rejects, or is inconclusive about the claim.

B. URL Prompt
Review the URL content to determine its support, rejection, or neutrality toward the claim. Consider the claim date. Respond only with: - Support - Reject - Not enough evidence. No additional text.
Claim: {claim}
Date of Claim: {claim_date}
URL: {evidence_url}

C. Text Prompt without Justification
Review the text to determine its position on the claim, considering the claim date. Respond only with: - Support - Reject - Not enough evidence. No additional text.
Claim: {claim}
Date when the claim was made: {claim_date}
Scraped Content: {evidence_content}

D. Text Prompt with Justification
Evaluate the text against the claim date. Assess if it supports, rejects, or is inconclusive about the claim. Provide up to 500 words of reasoning and conclude with: - Support - Reject - Not enough evidence. Start your conclusion with 'Final answer: '.
Claim: {claim}
Date when the claim was made: {claim_date}
Scraped Content: {evidence_content}

In this section, we provide the specific prompts used for the first stage of conflicting evidence annotation with GPT-4 (Section 3.2). We used three prompt variants to query GPT-4 in order to enhance the robustness of the annotation. All three prompting strategies share the same system prompt, shown in Box A; their individual prompting instructions are given in Boxes B, C, and D, respectively.

B Illustration of Data in CONFACT

Fig 3 presents a sample from the CONFACT dataset. Each instance in CONFACT comprises a question (converted from its original claim), a ground-truth answer, and a set of conflicting evidence.

Figure 3: A Data Sample from CONFACT

C Implementation Details

To handle long retrieved documents effectively, we apply a paragraph-based chunking strategy, where each document is split into passages of at most 256 words. This ensures that retrieved evidence remains contextually relevant while fitting within the token constraints of large language models. For retrieval and ranking, we use BM25 [Robertson and Zaragoza, 2009] to select the top-5 most relevant paragraphs for each claim. While we experimented with more advanced neural ranking approaches [Schlichtkrull et al., 2023], we did not observe any significant improvements in our setting.

Inference is performed using the vLLM framework [Kwon and others, 2023], which optimizes key-value (KV) cache memory for efficient large-scale inference. We conducted all experiments with BFloat16 precision on a cluster of 2× NVIDIA A100-80GB GPUs, employing greedy decoding for response generation. To evaluate RAG-based fact-checking performance, we experiment with three state-of-the-art LLMs: LLaMA-3.1-8B [Dubey et al., 2024], Qwen-2-8B [Yang et al., 2024], and Mistral-v0.3-7B [Mistral.AI, 2023]. For methods that leverage source credibility scores, we set the score threshold γ to 0.3 and the balancing hyperparameter β to 0.8, based on preliminary experiments optimizing both accuracy and macro-F1 performance. A code sketch of this chunking, retrieval, and inference pipeline is given after Table 5.

                 Academic/Research  Government  Mainstream News Media  Non-profit Organisation  Social Media  Others
Very Unreliable                  2           0                      0                        0             3       3
Unreliable                      10           1                     27                        5            25      34
Neutral                         18           7                    231                       59            47      95
Reliable                        65          74                    334                      110            15     112
Very Reliable                   20          65                     27                       33             0       4

Table 5: Reliability Breakdown for each Media Source Type
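To make the setup above concrete, the following is a minimal sketch, not the authors' released code. It assumes the rank_bm25 and vllm packages; the model ID and prompt template are illustrative, and the rule for mixing relevance with credibility is our assumption (the paper fixes γ = 0.3 and β = 0.8 but does not spell out the combination formula).

import re
from rank_bm25 import BM25Okapi
from vllm import LLM, SamplingParams

def chunk(document, max_words=256):
    # Paragraph-based chunking: split on blank lines, then pack paragraphs
    # into passages of at most max_words words. (Paragraphs longer than
    # max_words are truncated in this sketch.)
    passages, current = [], []
    for para in re.split(r"\n\s*\n", document):
        words = para.split()
        if current and len(current) + len(words) > max_words:
            passages.append(" ".join(current))
            current = []
        current.extend(words[:max_words])
    if current:
        passages.append(" ".join(current))
    return passages

def top_k_passages(claim, passages, k=5):
    # BM25 ranking [Robertson and Zaragoza, 2009] over whitespace tokens.
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    return bm25.get_top_n(claim.lower().split(), passages, n=k)

def credibility_aware_score(relevance, credibility, gamma=0.3, beta=0.8):
    # Hypothetical combination rule: the paper thresholds sources at gamma
    # and balances with beta, but does not state the exact formula.
    if credibility < gamma:
        return float("-inf")  # drop low-credibility sources
    return beta * relevance + (1 - beta) * credibility

# Greedy decoding with vLLM [Kwon and others, 2023] in BFloat16 on 2 GPUs.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID
          dtype="bfloat16", tensor_parallel_size=2)
greedy = SamplingParams(temperature=0.0, max_tokens=512)

def answer(claim, evidence_passages):
    prompt = ("Evidence:\n" + "\n".join(evidence_passages)
              + f"\n\nClaim: {claim}\nAnswer yes or no.")
    return llm.generate([prompt], greedy)[0].outputs[0].text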
D Limitations

We identify three main limitations of our study.

Bias in Source Credibility Annotations. Our credibility annotations rely on the Media Bias/Fact Check (MBFC) dataset, which may carry inherent biases. As a result, any systematic bias present in MBFC is inherited by our framework and may influence model behavior during training and evaluation.

Context-Independent Credibility Assumptions. Our approach treats media source credibility as static and context-independent. In reality, a source's reliability may vary across topics, time periods, or issue-specific reporting. Future work could explore dynamic, context-aware credibility estimation methods to address this limitation.

Simplified Baselines. While more sophisticated baselines that decompose claims into finer-grained components may improve factual verification, we deliberately adopt a simplified setting to isolate the effect of source credibility. Integrating decomposition-based approaches remains an important direction for future work.

E Error Analysis

We conduct an error analysis to examine the limitations of the best-performing RAG-based LLM (LLaMA-3.1) using GT-MB. We randomly sampled 50 cases where the model produced incorrect answers and categorized the errors.

Errors Due to Conflicting or Irrelevant Retrieved Contexts. These errors occur when the retrieved evidence either lacks relevance or presents conflicting claims. For example, in response to the question "Has climate change increased hurricane frequency?", one retrieved source affirmed an increase, while another refuted it based on different datasets. Instead of reconciling the conflicting claims, the model incorrectly aligned with the source that contained more surface-level keyword matches. This suggests that simple retrieval-based approaches struggle with conflicting evidence, reinforcing the need for advanced ranking methods that assess both relevance and credibility before generating an answer. Future work could explore iterative ranking and selection strategies where the model answers a question only when a sufficient set of corroborative evidence is identified.

Errors from Inaccurate Media Background Estimation. This error type is prevalent in the Hybrid-MB setting, where automatically generated media backgrounds misclassify source reliability, leading the model to prioritize misleading information. For instance, in answering "Has the deficit come down under the Conservatives?", the ground-truth answer is Yes, supported by data from Full Fact, a highly credible source. However, retrieved evidence from Tax Research UK, a critical government watchdog, suggested No. Due to an erroneous background classification labeling Tax Research UK as highly reliable, the model was misled and answered incorrectly. These findings highlight the risks of relying on generated media backgrounds without rigorous validation, emphasizing the necessity of robust source credibility estimation techniques.

LLM Bias in Resolving Conflicting Evidence. Some errors stem from the model's tendency to favor the majority viewpoint among retrieved contexts, even when the opposing evidence is more credible. For example, in cases where three low-credibility sources supported one claim while a single authoritative source contradicted them, the model frequently defaulted to the majority position. This bias suggests that LLMs may lack the ability to critically weigh conflicting evidence, underscoring the need for improved reasoning mechanisms that incorporate source reliability assessments.
Mitigating such biases requires integrating structured fact-checking methodologies that encourage LLMs to assess the credibility of competing claims rather than defaulting to frequency-based heuristics.

Overall, these error categories reveal fundamental challenges in fact-checking with conflicting evidence. Addressing them requires advancements in retrieval ranking, media background estimation, and bias mitigation to ensure automated fact-checking systems align with real-world journalistic and verification practices.

F Distribution of Source Credibility

Table 5 shows the credibility annotations across different source types. We observed an over-estimation of the credibility of mainstream news media by non-expert annotators, who rated about 95% of mainstream news media as trustworthy, probably due to their popularity. Journalists, however, have identified about 30% of them as not credible.

G Detailed Implementations for RAG Methods

G.1 Credibility-Related Information Extraction from Background

E. Prompt to Classify Credibility
You are InfoHuntGPT, a world-class AI assistant used by journalists to predict the credibility of media sources. Your task is to read the example media sources (with their corresponding credibility descriptions) and then assess the target media using only the details provided. Do not include any additional information, and follow these rules strictly:
1. Read all the information about each example media source and note its final credibility rating.
2. Examine the target media's description and any relevant Wikipedia or article information, if provided.
3. Determine the target media's credibility using one of these labels: high, medium, or low.
   • Low: The source demonstrates questionable reliability, has a track record of publishing misinformation, failed fact checks, or lacks transparency.
   • Medium: The source is generally reliable but may display occasional bias or minor factual issues, with no consistent pattern of publishing false information.
   • High: The source consistently provides reliable, factual information, uses proper sourcing, and shows little to no history of failed fact checks.
4. Pay special attention to any indication of failed fact checks or repeated misinformation; this may reduce credibility.
5. If there is virtually no information or the details appear highly suspicious, you may conclude low credibility.
6. Output only the final credibility assessment for the target media (one word: high, medium, or low).
7. Do not provide disclaimers or references to your reasoning. Do not include any additional commentary. Provide the credibility label in one line.

Example 1
Media Description: [Media Background Details]
Wikipedia: [Wikipedia Summary]
Credibility: [High/Medium/Low]
...
Example N
...
Target Media Description: [Media Background Details]
Target Media Wikipedia: [Wikipedia Summary]
Target Media Credibility:

Predicted source background descriptions are detailed and comprehensive, but they also contain information that is irrelevant to source credibility (e.g., the founder of a source). To address this issue, we further distill the credibility-related content from the description using the prompt illustrated in Box E. Specifically, we classify the credibility of a source into three levels: low, medium, and high, following [Baly et al., 2018]. To verify the accuracy of our automatic source credibility prediction framework, we further conducted a quantitative evaluation on the dataset of [Baly et al., 2018]. Our automatic source credibility prediction framework achieved 70.04% in a zero-shot manner.
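As a concrete illustration of how Box E can be applied, here is a minimal sketch, assuming an OpenAI-compatible chat client; the example store, field names, model ID, and the one-word label parsing are our assumptions, not the authors' implementation.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_box_e_prompt(examples, target):
    # examples: list of dicts with "description", "wikipedia", "label"
    # target: dict with "description" and "wikipedia"
    parts = []
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example {i}\nMedia Description: {ex['description']}\n"
                     f"Wikipedia: {ex['wikipedia']}\nCredibility: {ex['label']}")
    parts.append(f"Target Media Description: {target['description']}\n"
                 f"Target Media Wikipedia: {target['wikipedia']}\n"
                 f"Target Media Credibility:")
    return "\n\n".join(parts)

def classify_credibility(system_prompt, examples, target):
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; the paper does not name the model here
        temperature=0,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": build_box_e_prompt(examples, target)}])
    label = resp.choices[0].message.content.strip().lower()
    # Constrain to the three labels of Box E; default to "low" if malformed.
    return label if label in {"high", "medium", "low"} else "low"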
G.2 Source Background Incorporation

F. System Prompt
You are given a question and several pieces of evidence. Your task is to analyze the evidence and provide a concise answer to the question.
For each piece of evidence, the background of its source media is provided. When evaluating the evidence, it is crucial to take into account the credibility of the source media, as this can significantly influence the reliability of the evidence. Additionally, consider any potential biases that may be inherent in the source media, especially if they are explicitly mentioned. This will help ensure a more nuanced and thorough evaluation of the evidence, factoring in both the content and the context in which it is presented.

G. SBA-dir
Evidence 1: [Evidence text]
Source Media Description: [Description of the source media]
...
Evidence N: [Evidence text]
Source Media Description: [Description of the source media]
Question: [Question to be answered based on the evidence provided]

H. SBA-CoT
Question: [Question to be answered based on the evidence provided]
Evidence 1: [Evidence text]
Source Media Description: [Description of the source media]
...
Evidence N: [Evidence text]
Source Media Description: [Description of the source media]
Given the above evidence, first explain your reasoning for any contradictions or conflicting information. After your reasoning, provide your final answer to the question. Start your answer with 'Final Answer' and clearly separate it from the rest of your analysis. Your final answer should be either 'yes' or 'no'. Include only one final answer, and avoid adding any additional explanation after it.
Output format:
Analysis: [Your reasoning here]
#*#
Final Answer: [yes/no]

Figure 4: Workflow of the Ensemble Method.

I. SBA-exp
Question: [Question to be answered based on the evidence provided]
Some evidence below may have been perturbed with wrong information. Find the perturbed passages and ignore them when eliciting the correct answer.
Evidence 1: [Evidence text]
Source Media Description: [Description of the source media]
...
Evidence N: [Evidence text]
Source Media Description: [Description of the source media]
First, thoroughly analyze all the provided evidence before making your final decision. Identify the perturbed sentences and carefully consider their implications in your analysis. Once you have completed your review, provide your final answer to the question based on the evidence you analyzed. Start your answer with 'Final Answer:' and ensure it is clearly separated from your evidence analysis. Your final answer should be either 'yes' or 'no'. Make sure to include only one final answer, and do not include any additional text after it.

When incorporating source background in the generation stage of RAG methods, for the SBA-dir, SBA-CoT, and SBA-exp settings, the source-aware paragraphs (i.e., the concatenation of each paragraph and the background description of its source) serve as the augmented context for answer generation, under different instructions. In SBA-CoT, the model is required to provide a rationale
alongside the answer; in SBA-exp, the model is explicitly instructed to ignore augmented texts from incredible sources. The detailed prompts are shown in Boxes G, H, and I. For the SBA-ens setting, a two-stage prompting mechanism is exploited. As illustrated in Fig 4, we first employ an LLM to categorize each piece of evidence as either supporting or refuting (Box J). Next, we prompt the LLM to generate the final answer using the categorized evidence (Box K); a minimal code sketch of this two-stage pipeline follows Box K.

J. SBA-ens: Categorizing Evidence
For each piece of evidence, the background of its source media is provided. When evaluating the evidence, it is crucial to take into account the credibility of the source media, as this can significantly influence the reliability of the evidence. Additionally, consider any potential biases that may be inherent in the source media, especially if they are explicitly mentioned. This will help ensure a more nuanced and thorough evaluation of the evidence, factoring in both the content and the context in which it is presented.
Question: [Question to Answer]
Supporting Evidence:
- Sentence: [Sentence/Paragraph]
- Credibility Analysis: [Description of the source media]
...
Refuting Evidence:
- Sentence: [Sentence/Paragraph]
- Credibility Analysis: [Description of the source media]
...
Given the above supporting and opposing evidence, first explain your reasoning for any contradictions or conflicting information. Your analysis should be no more than 500 words. Please ignore the difference in the amount of supporting and opposing evidence and choose the more detailed and truthful pieces of evidence. Once you have completed your analysis, provide your final answer to the question based on the evidence you analyzed. Start your answer with 'Final Answer:' and ensure it is clearly separated from your evidence analysis. Your final answer should be either 'yes' or 'no'. Make sure to include only one final answer, and do not include any additional text after it.

K. SBA-ens: Generating Final Answer
Instructions
1. Comprehend the Question:
   • Carefully read the question to understand what is being asserted.
   • Identify the key components and assertions within the question.
2. Analyze the Sentence:
   • Examine the sentence to see how it relates to the question.
   • Determine if the sentence provides evidence, an example, or a counterpoint to the question.
   • Look for keywords or phrases that directly support or refute the question.
3. Evaluate the Media Background:
   • Review the media background information to understand the broader context.
   • Consider the credibility of the sources mentioned and any potential biases.
   • Identify any historical information or prior events that relate to the question.
4. Integrate Information:
   • Combine insights from the sentence and media background.
   • Assess whether the sentence, in the context of the media background, provides sufficient support for the question.
   • Consider if there are contradictions or alignments between the sentence and the media background.
5. Logical Reasoning:
   • Use critical thinking to evaluate the connections.
   • Ask yourself if the evidence logically leads to the conclusion stated in the question.
   • Consider alternative interpretations or whether additional information
is needed.
6. Conclude:
   • Evaluate the reliability of the media background and determine whether the sentence supports the question.
   • Ensure that your conclusion is based solely on the information provided.
7. Answer:
   • Optionally, provide a justification based on the above steps, explaining your reasoning. Keep your justification under 300 words.
   • Provide a clear and concise Yes or No answer to the question.
Question: [Question to Answer]
Evidence: [Evidence]
Media Background Analysis: [Source Media Description of the Evidence Provided]
Based on the provided sentence and the media background, begin by thoroughly analyzing the evidence, giving special attention to the credibility and potential biases of the media source. After your analysis, provide your final answer to the question. Start your answer with 'Final Answer:'.
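The two-stage SBA-ens flow (categorize, then answer) can be sketched as follows. This is a minimal illustration, assuming a caller-supplied chat(system, user) helper that wraps whichever LLM backend is used; the abridged system prompt, the wording of the stage-1 query, and the regular-expression parsing of the 'Final Answer:' line are our assumptions, not the authors' code.

import re

SYSTEM_PROMPT_F = ("You are given a question and several pieces of evidence. "
                   "Analyze the evidence, taking the credibility and potential "
                   "biases of each source into account.")  # abridged Box F

def parse_final_answer(text):
    # Pull the yes/no after 'Final Answer:'; return None if the model deviates.
    match = re.search(r"Final Answer:\s*(yes|no)", text, re.IGNORECASE)
    return match.group(1).lower() if match else None

def sba_ens(question, evidence, chat):
    # evidence: list of (passage, source_background) pairs.
    # Stage 1: label each source-aware passage as supporting or refuting.
    supporting, refuting = [], []
    for passage, background in evidence:
        label = chat(SYSTEM_PROMPT_F,
                     f"Question: {question}\nEvidence: {passage}\n"
                     f"Source Media Description: {background}\n"
                     "Does this evidence support or refute a 'yes' answer? "
                     "Reply with one word: support or refute.").strip().lower()
        (supporting if label.startswith("support") else refuting).append(
            (passage, background))

    # Stage 2: generate the final answer from the categorized evidence
    # (Boxes J/K), then parse the yes/no verdict.
    def block(items):
        return "\n".join(f"- Sentence: {p}\n- Credibility Analysis: {b}"
                         for p, b in items)
    user = (f"Question: {question}\n"
            f"Supporting Evidence:\n{block(supporting)}\n"
            f"Refuting Evidence:\n{block(refuting)}\n"
            "Explain any conflicts in at most 500 words, then conclude with "
            "'Final Answer: yes' or 'Final Answer: no'.")
    return parse_final_answer(chat(SYSTEM_PROMPT_F, user))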
H Annotation Instructions

Participants were presented with a claim and a corresponding URL linking to an online article or post. They were instructed to visit these URLs, review the content, and assess whether it supported or contradicted the claim. Following this evaluation, they were required to rate the source's credibility and classify its type. Figure 5 shows a screenshot of the labeling interface.

Figure 5: Screenshot of Annotation Platform

I Media Background Prediction

I.1 Demonstrations for Media Background Prediction

As shown in Fig 6, to build a media background for an unknown media source, we begin by gathering foundational information: specifically, we collect 10 article titles from the source and compile its Wikipedia summary to form an initial guess. Next, following the method in [Schlichtkrull, 2024], we conduct a structured Google search using 42 targeted queries about the media, collecting the resulting answers to supplement our initial information. Finally, we incorporate these newly acquired insights into the initial hypothesis to produce a comprehensive, final media background description. An example of a predicted media background is shown in Box L.

Figure 6: Illustration of the process to generate the media background by LLMs.

L. Generated Description for the Source Media
History: Oxford University Press (OUP) is one of the oldest and largest university presses in the world, with a history dating back to 1586. academic.oup.com is the online platform for OUP that hosts a vast collection of scholarly journals, books, and articles.
Funded by/Ownership: OUP is a department of the University of Oxford and is self-funded through the sales of its publications.
Analysis/Bias: The platform, academic.oup.com, publishes peer-reviewed journals and books from a wide range of disciplines, including humanities, social sciences, and natural sciences. The editorial process for these publications is rigorous and follows the highest standards of academic integrity. While the platform may have a slight bias towards the perspectives of its authors and editors, it strives to provide balanced and informative content.
Failed Fact Checks: Oxford University Press has no record of any failed fact-checks. They are considered a reputable source of academic information. They have over 300 journals and a range of books that cover most disciplines.

I.2 Credibility Score Prediction

We utilize the MBFC dataset to train a model for predicting media credibility scores. The dataset consists of media descriptions paired with credibility scores. The BigBird-RoBERTa model serves as the backbone of our architecture, combined with a regression head that outputs credibility scores. After training on the MBFC dataset, the model is evaluated and used to predict credibility scores for new media descriptions. A minimal sketch of this regression setup is given at the end of this appendix.

J Additional Experiment Results

We conducted additional experiments using the GPT-4o-mini model under the top-5 paragraphs setting. The results are presented in Table 6. They show that conflicting evidence in fact-checking is also challenging for strong closed-source LLMs, indicating the need for source-critical fact-checking methods.

Setting  Method  Acc.   F1
Bal.     CoT     78.23  70.42
GT       CoT     79.05  71.33

Table 6: Additional Experiment Results using the 4o-mini Model
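As referenced in Appendix I.2, the credibility-score regressor can be set up roughly as follows. This is a minimal sketch, assuming the Hugging Face transformers checkpoint google/bigbird-roberta-base with a one-output regression head; the hyperparameters are illustrative, and the MBFC training loop is omitted.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# BigBird-RoBERTa backbone with a single-output head; num_labels=1 together
# with problem_type="regression" gives an MSE training objective.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bigbird-roberta-base", num_labels=1, problem_type="regression")

def predict_credibility(description: str) -> float:
    # Long media descriptions fit BigBird's sparse attention (up to 4096 tokens).
    inputs = tokenizer(description, truncation=True, max_length=4096,
                       return_tensors="pt")
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()
    return score  # after fine-tuning, this regresses the MBFC credibility score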
arXiv:2505.17767v1 [cs.CL] 23 May 2025

The Real Barrier to LLM Agent Usability is Agentic ROI

Weiwen Liu1, Jiarui Qin2, Xu Huang3, Xingshan Zeng2, Yunjia Xi1, Jianghao Lin1, Chuhan Wu*, Yasheng Wang2*, Lifeng Shang2, Ruiming Tang2, Defu Lian3, Yong Yu1, Weinan Zhang1*
1Shanghai Jiao Tong University, 2Huawei Noah's Ark Lab, 3University of Science and Technology of China
wwliu@sjtu.edu.cn

Abstract

Large Language Model (LLM) agents represent a promising shift in human-AI interaction, moving beyond passive prompt-response systems to autonomous agents capable of reasoning, planning, and goal-directed action. Despite their widespread application in specialized, high-effort tasks like coding and scientific research, we highlight a critical usability gap in mass-market, high-demand applications. This position paper argues that the limited real-world adoption of LLM agents stems not only from gaps in model capabilities, but also from a fundamental tradeoff between the value an agent can provide and the costs incurred during real-world use. Hence, we call for a shift from solely optimizing model performance to a broader, utility-driven perspective: evaluating agents through the lens of the overall agentic return on investment (Agentic ROI). By identifying the key factors that determine Agentic ROI (information quality, agent time, and cost), we posit a zigzag development trajectory in optimizing Agentic ROI: first scaling up to improve information quality, then scaling down to minimize time and cost. We outline the roadmap across different development stages to bridge the current usability gaps, aiming to make LLM agents truly scalable, accessible, and effective in real-world contexts.

1 Introduction

Large Language Model (LLM) agents have emerged as a novel paradigm for users to interact with AI systems in a more dynamic and autonomous manner [107, 32]. Unlike static models that respond solely to discrete prompts [37, 13], LLM agents are designed to reason, plan, and act within environments, whether digital or physical, toward user-defined goals. This has sparked immense interest in deploying LLM agents across domains ranging from customer support to scientific research [92, 101, 77, 11, 19]. Greg Brockman, president and co-founder of OpenAI, remarked that "2025 is the year of Agents," capturing the growing excitement and momentum in this space.

Despite this promise, the actual deployment of LLM agents in mass-market, production-level applications remains sparse compared to more established AI systems, such as recommender systems [48, 100, 49] or search engines [115, 21, 60]. For instance, while Douyin (TikTok's Chinese counterpart) boasts over 700 million monthly active users (MAUs) [64], OpenAI Plus, which includes access to advanced features like Deep Research, reports around 10 million users [15]. This suggests that there remains a substantial untapped market for LLM agents. At the same time, users are not yet fully satisfied with what current agents can do.

*Corresponding author. Preprint.

Figure 1: Illustration of the usability of LLM agents across different application domains. User demand is estimated based on the approximate monthly active users (MAU) of conventional applications in each domain. Agentic ROI is a conceptual representation of relative trends. The listed agent products are illustrative and may not be exhaustive. (The figure plots Agentic ROI against user demand for research, coding, professional assistant, e-commerce, and personal assistant agents, marking a usability gap in the high-demand, low-ROI region; example products include OpenAI Deep Research, Gemini Deep Research, Perplexity Deep Research, Elicit, Consensus, Scite.ai, Semantic Scholar, GitHub Copilot, Cursor, Claude Code, Amazon Q Developer, Tabnine, Replit AI, Bito, Microsoft 365 Copilot, Genspark, Spellbook, Acuity, Ada, Salesforce Retail, Microsoft Copilot, and OpenAI Operator.)

In particular, the current generation of LLM agents focuses on specialized, professional tasks such as software development [97] and scientific research [24, 65], where the typical users are already domain experts and occasional errors are acceptable. As a result, these agents remain largely out of reach for the general public, who may lack the necessary expertise. To understand this discrepancy, we argue that the gap lies in the broader socio-technical ecosystem: the usability of LLM agents depends not only on improvements in raw model intelligence or accuracy, but on maximizing the agentic return on investment (Agentic ROI). Agentic ROI quantifies the information gain an agent can provide relative to the costs incurred during real-world use. We define Agentic ROI as:

Agentic ROI = Information Gain / Cost
            = ((Information Quality − τ) · (Human Time − Agent Time)) / (Interaction Time · Expense),

where Information Quality refers to the accuracy, usefulness, and completeness of the outcome produced by the agent, and τ represents a minimum threshold for acceptable information quality. Agentic ROI is only defined when Information Quality > τ, ensuring that the agent output exceeds the user's baseline requirement for usability. Human Time is the time a user would spend completing the task without assistance, while Agent Time is the time taken by the agent to complete the same task. We assume Agent Time < Human Time, indicating that the agent provides a positive net time-saving benefit. Interaction Time captures the total time spent in user-agent interaction, including efforts such as task description, intent clarification, and result verification. The total time of using an agent is the sum of Agent Time and Interaction Time. Expense refers to the monetary cost incurred (e.g., API usage fees), with Expense > 0.

The Agentic ROI quantifies the value derived from the agent's ability to reduce the time a human would otherwise spend to obtain similar-quality information, normalized by the total cost of applying the agent. A higher Agentic ROI reflects greater usability potential.
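To make the definition above concrete, here is a small worked sketch of the Agentic ROI formula. The numbers are invented for illustration, and the unit conventions (minutes for the time terms, dollars for expense) are our assumption, as the paper does not fix units.

def agentic_roi(info_quality, tau, human_time, agent_time,
                interaction_time, expense):
    # Agentic ROI = (Information Quality - tau) * (Human Time - Agent Time)
    #               / (Interaction Time * Expense)
    # Defined only when quality clears the threshold and the agent saves time.
    if info_quality <= tau:
        raise ValueError("undefined: information quality below threshold tau")
    if agent_time >= human_time:
        raise ValueError("assumption violated: agent must save net time")
    if interaction_time <= 0 or expense <= 0:
        raise ValueError("interaction time and expense must be positive")
    return ((info_quality - tau) * (human_time - agent_time)
            / (interaction_time * expense))

# A research task: hours of human reading saved, modest interaction overhead.
print(agentic_roi(0.9, 0.5, human_time=240, agent_time=15,
                  interaction_time=10, expense=2.0))   # comparatively high
# A quick e-commerce lookup: the agent barely beats the user.
print(agentic_roi(0.9, 0.5, human_time=2, agent_time=1.5,
                  interaction_time=2, expense=0.05))   # comparatively low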
This formulation reveals a key insight: while LLM agents can generate high-quality information, the benefits are often offset by the limited reduction in human labor and the financial cost required to effectively interact with the agent. This theoretical metric is supported by empirical observations of LLM agent deployment patterns across different domains. As illustrated in Figure 1, current agents are predominantly adopted in areas with high Agentic ROI, such as scientific research and code generation, where the baseline human time is inherently high. Tasks in these domains often require hours of reading, coding, or analysis, making even partial automation through agents a compelling proposition. In contrast, domains with the highest user demand, like e-commerce or personal assistance, involve low-effort interactions (e.g., clicking, swiping, or simple queries). The user demand is approximated using the monthly active users (MAU) of existing conventional services. In these low-effort domains, the information gain provided by current agents is often negligible or even negative, despite meeting the minimum threshold for information quality. In some cases, the agent's response time can exceed the time required for users to complete the task manually. As a result, the Agentic ROI in these domains remains low, limiting agent usability. The massive user demand and the low Agentic ROI highlight a critical usability gap in everyday, mass-market applications.

Figure 2: The zigzag performance trend of different OpenAI series models. Model size is estimated based on inference cost. Smaller models (e.g., o3-mini, o4-mini) achieve performance comparable to larger predecessors (e.g., o1, o3) from earlier generations. (The figure plots estimated model size against GPQA performance for o1 (12/2024), o1-mini (09/2024), o3 (04/2025), o3-mini (01/2025), and o4-mini (04/2025).)

To address this usability gap and provide guidance for the future design of LLM agents, we identify three core factors that jointly determine Agentic ROI: information quality, agent time, and cost.

Information Quality. The foundational determinant of an agent's utility is its ability to produce accurate, relevant, and context-aware information that meets user needs. Improving information quality relies on enhanced model performance, which comes from scaling strategies at multiple levels, e.g., pre-training, post-training, and test-time deployment. Moreover, as agents become increasingly autonomous, ensuring the robustness and integrity of their behavior becomes crucial, yet this remains underexplored as a lever for improving information quality.

Agent Time. LLM agents are designed to offload human labor, enabling users to achieve goals with minimal manual effort. However, the gap between human time and current agent time is not always positive. While agents can operate continuously and at scale, their decision-making process often involves iterative reasoning, API calls, and environment interactions that introduce latency. In contrast, humans can often complete simple tasks much faster through intuitive knowledge and contextual understanding.

Cost. Cost encompasses both interaction time and expense. Interaction time includes the time spent by users to craft effective prompts, understand agent behaviors, and interpret the outputs.
Expense refers to the financial cost of running LLM agents at scale, particularly those requiring API access to large proprietary models with significant token usage.

In this position paper, we argue that a more grounded, ROI-centric lens is necessary to evaluate the practical usability of LLM agents. This utility-centric thinking can help surface the hidden frictions that inhibit adoption and illuminate the design principles needed to make LLM agents broadly useful, trustworthy, and scalable. Through Agentic ROI, we invite a rethinking of what it means for LLM agents to be usable for daily applications.

2 Zigzag Development Trend in Optimizing Agentic ROI

In the pursuit of higher Agentic ROI, we posit that the development of LLM agents may follow not a linear trajectory, but rather a zigzag pattern. This observation is inspired by performance trends in the OpenAI model series, as shown in Figure 2. Specifically, we observe a recurring cycle: new model families often begin by scaling up model size and inference cost to push the frontier of performance (i.e., the Information Gain term in Agentic ROI), as seen in the transition from o1-mini to o1. Subsequent iterations tend to scale down, introducing smaller, more efficient variants (e.g., o3-mini, o4-mini) that deliver comparable performance
to their larger predecessors but at significantly lower cost (e.g., o3-mini vs. o1, o4-mini vs. o3). In the context of LLM agents, we see a similar optimization process in which each generation makes tradeoffs along different axes of the ROI surface (information gain, agent time, and total cost): first scaling up to improve the information quality, then scaling down to reduce the agent time and cost. This zigzag development trend is not unique to LLM agents. Similar patterns have been observed in other areas of technological innovation, e.g., computer processors and smartphones.

Our current development phase for LLM agents is still in the "scaling up" stage. Large-scale models are being deployed to push the upper bounds of generalization, reasoning, and tool usage. The high information quality of LLM agents is achieved at the cost of increased agent time, user oversight, and infrastructure demands. As a consequence, we observe that LLM agents are currently deployed in high-human-time tasks, such as programming or research assistance, where the significant time savings justify the high computational and operational overhead. However, we can anticipate a subsequent phase of scaling down in the coming years, in which innovations in efficiency, specialization, and agentic autonomy reduce overall costs. This will enable broader deployment in low-touch, high-demand applications such as customer support, e-commerce, or personal productivity.

Recognizing this zigzag progression allows us to better calibrate expectations and design principles appropriate to the current phase of agent development. It also reinforces the need for an Agentic ROI metric that explicitly captures the temporal tradeoffs between short-term cost and long-term usability at scale. In the following section, we outline key principles for navigating this zigzag path, offering potential development directions aligned with optimizing Agentic ROI.

3 Scaling Up for Information Quality

The current phase of LLM agent development prioritizes scaling up for information quality, often at the expense of agent time and cost, to ensure the agent is sufficiently capable. In practice, this relies on scaling strategies at multiple levels, spanning from pre-training to post-training and into test-time deployment. Building a world model, along with ensuring robustness and security, is also a critical component of the scaling-up process.

3.1 Pre-training Scaling

By scaling the size of the model, data, and training compute, LLMs exhibit predictable power-law improvements in performance, significantly enhancing information quality [29]. Fundamental capabilities of LLM agents, such as language understanding, general reasoning, and world knowledge, are primarily established during the pre-training phase. In terms of agent usability, many agentic tasks are already well captured in manuals, workflow documents, standard operating procedures, and troubleshooting guides. These corpora encode not just facts, but task decompositions, conditional logic, and action-oriented sequences. Scaling such data across various domains and modalities provides rich prior knowledge of task structures that, if properly internalized, can enable agents to perform complex tasks with minimal human guidance. Moreover, information quality is further enhanced by context scaling. As context windows expand and memory mechanisms improve, agents are increasingly able to incorporate larger spans of relevant information, including prior steps,
user preferences, and external information sources. This allows agents to execute multi-step workflows, track long-term goals, and manage ambiguity more robustly.

3.2 Post-training Scaling

Post-training scaling refers to the enhancement of a base language model's capabilities through techniques such as supervised fine-tuning (SFT) or reinforcement learning (RL) [26, 58]. We argue that post-training is where information quality gets improved by aligning with human values, adapting to dynamic environments, and evolving through sustained experience. This section explains how post-training scaling improves information quality from an agent-centric perspective.

• Alignment with Humans. To bridge the gap between machine-generated outputs and human expectations, agents must learn to produce responses that are personalized, interpretable, and harmless. These qualities go beyond linguistic fluency, requiring agents to embody social norms, individual preferences, and value-sensitive reasoning. Post-training efforts in this direction commonly rely on scaling human feedback, either collected from online human-bot interactions or curated through crowd-sourced annotations.

• Alignment with Environment. Effective agents must operate not only in dialogue but also within external environments, including APIs, user interfaces, or physical simulators. Consequently, post-training scaling must account for action grounding, ensuring that the agent's decisions correspond to executable and meaningful operations in the target context. Moreover, by integrating environmental interaction trajectories, such as tool-use logs or state-action transitions, agents can reduce hallucinations [74] and improve consistency between perception, reasoning, and action.

• Agent Evolution through Experience. As agents are deployed at scale, they naturally generate vast amounts of interaction data, including user feedback, task completions, error traces, and recovery attempts. These interaction histories form a rich foundation for continual post-training, enabling agents to refine their behavior over time. This forms a data flywheel: deployment yields training signals, which in turn improve future deployments. Such an adaptive loop constitutes the foundation for agentic evolution, where experience accumulation becomes a catalyst for performance scaling.

3.3 Test-time Scaling

Test-time scaling encompasses strategies applied to dynamically expand or optimize agent behavior at inference time, and it can be scaled across multiple dimensions [62].

• Scaling the reasoning process. Adaptively increasing the reasoning steps for complex tasks allows agents to produce more correct, complete, and trustworthy outputs [26], directly contributing to the information quality term of Agentic ROI. Iterative and reflective reasoning, where agents critique, revise, or verify their own outputs, further enhances reliability.

• Scaling multi-agent systems. Multi-agent systems represent a powerful form of test-time scaling, where different agents with specialized roles collaborate to solve parts of a task. Scaling up the number of agents of various sizes and domains can lead to emergent behaviors with better task decomposition. However, increasing the number of agents also introduces coordination overhead, longer inference latency, and potential inconsistencies.

• Scaling tool calling.
Expanding multiple and diverse tool calls allows the agent to break down reasoning into smaller, verifiable steps, querying tools iteratively to validate intermediate results [112]. This is particularly valuable in multi-step tasks or tasks with missing information.

• Scaling test-time training. Test-time training allows model parameters to be updated at test time using the incoming unlabeled test data [125, 6]. This enables fast adaptation to new domains or evolving user
preferences, improving performance in real time based on personalized feedback.

• Scaling towards Agentic ROI under budget constraints. By formalizing the concept of Agentic ROI, we envision that next-generation LLM agents will be capable of directly optimizing their Agentic ROI while operating under real-world budget constraints. These constraints may include limits on agent time, API usage costs, or energy consumption. Under such conditions, an agent must reason adaptively over a dynamic cost-benefit landscape, selecting among various scaling strategies, such as adjusting the number of reasoning steps, tool invocations, or collaborating agents involved. The agent must therefore learn to predict and evaluate the expected Agentic ROI of each possible action in real time.

3.4 Building a World Model

A world model is a simulation of the physical environment with which LLM agents can interact. We argue that building a world model provides not only an evaluation environment that captures the complexity and ambiguity of real-world deployment, but also a critical infrastructure for fundamentally scaling agentic training data.

Unlike tasks with well-defined objectives (e.g., coding or math), many agentic tasks involve open-ended, multi-step goals with context-sensitive success criteria. An agent may generate a grammatically correct email while failing to match tone or intent, resulting in a suboptimal outcome from the user's perspective. This misalignment is especially pronounced in personalized settings, where "correctness" is subjective, and user feedback is noisy, delayed, or implicit.

We propose that the world model should support the following key features:

• Multi-modal interaction. Real-world tasks are inherently multi-modal. Agents must be able to interpret and generate not only natural language, but also images, documents, and audio.

Table 1: Existing popular environments for LLM agents. R.F. and P.F. denote realistic feedback and personalized feedback, respectively. A.S. is the abbreviation for the average steps of all tasks.

Environment            Domain               R.F.  P.F.  Modality             Uncertainty  A.S.
AlfWorld [82]          Householding         ✗     ✗     Text                 ✗            -
ScienceWorld [95]      Research             ✗     ✗     Text                 ✗            -
AgentBench [54]        Hybrid               ✗     ✗     Text                 ✗            10.5
AgentSims [47]         Social Life          ✗     ✓     Text                 ✓            -
WebArena [123]         Web Browsing         ✗     ✗     Text, Image          ✗            -
AppWorld [89]          Control              ✓     ✗     Text, Image          ✓            26.0
Mind2Web [16]          Web Browsing         ✓     ✗     Text, Image          ✗            7.3
Generative Agents [67] Social Life          ✗     ✓     Text                 ✓            -
WebShop [108]          Web Browsing         ✗     ✗     Text                 ✗            11.3
AndroidEnv [87]        Control              ✓     ✗     Text, Image          ✓            -
Mobile-Env [78]        Control              ✓     ✗     Text, Image          ✗            -
WebCanvas [66]         Web Browsing         ✓     ✗     Text                 ✓            8.4
AndroidWorld [75]      Control              ✓     ✗     Text, Image          ✓            27.2
MineCraft [61, 17]     Gaming               ✗     ✗     Text, Image, Audio   ✓            -
RecAgent [93]          Recommendation       ✗     ✓     Text                 ✓            -
VirtualHome [69]       Household Activity   ✗     ✗     Text, Video          ✗            11.6
TheAgentCompany [104]  Software Developing  ✗     ✗     Text                 ✓            -
MiniWob++ [51]         Web Browsing         ✗     ✗     Text, Image          ✗            3.6
WebLINX [56]           Web Browsing         ✓     ✗     Text, Image          ✗            43.0
AssistantBench [111]   Web Browsing         ✓     ✗     Text                 ✗            -
VisualWebArena [38]    Web Browsing         ✓     ✗     Text, Image          ✗            -
VideoWebArena [34]     Web Browsing         ✓     ✗     Text, Image, Video   ✗            -
OSWorld [103]          Control              ✓     ✗     Text, Image          ✓            -
WorkArena [18]         Web Browsing         ✓     ✗     Text, Image          ✗            10.0
InfoDeepSeek [99]      Web Search           ✓     ✗     Text                 ✓            5.0

• Multi-step dependent tasks. Many real-world goals cannot be achieved in a single step or response. The world model should support multi-turn interactions and long-horizon tasks, allowing agents to revise strategies, manage subgoals, and recover from partial failures.

• Realistic and personalized feedback. Real-world users provide feedback that is inherently subjective, shaped by individual preferences. Responses may vary across users, be influenced by prior interactions, or even be internally inconsistent. To realistically simulate this, the world model should account for preference diversity, evolving user intent, and continual adaptation.

• Environmental uncertainty. In real applications, agents operate under uncertainty: they may lack full information about the environment, user intent may shift, and external conditions may change unexpectedly. The world model should be able to reflect uncertainty, including the presence of errors, interruptions, and ambiguous situations.

However, upon examining the widely used environments listed in Table 1, we observe that most are designed for simplified settings and do not closely resemble real-world deployment environments. In practice, existing platforms such as e-commerce platforms, content recommendation systems, and customer service chatbots naturally encode rich, dynamic user interactions, evolving goals, and multi-turn feedback. These platforms present a promising foundation for constructing world models with simulated users [119, 116]. Moreover, we advocate for small-scale, privacy-aware real-world deployment of LLM agents in such settings to gain insight into how agents perform in real-world contexts, so that we can build more realistic world models.

3.5 Ensuring Robustness and Security

A critical yet underexplored factor in improving the information quality of LLM agents lies in ensuring robustness and security. The trustworthiness and reliability of the information produced by agents are equally important. Robustness and security are not merely safety prerequisites, but essential enablers of sustainable information quality. Agents that mislead, behave inconsistently, or become vulnerable to adversarial control can erode user confidence and satisfaction, and ultimately lower the Agentic ROI.

Robust Reward Design. One of the key challenges in the development of deployable LLM agents is their tendency to exploit flaws in reward design in reinforcement learning (RL), commonly known as reward hacking [7, 2]. Rather than learning to complete tasks in ways aligned with human intent, agents may optimize for proxy signals or superficial success criteria. For instance, in code generation tasks, agents have been shown to manipulate test cases or selectively bypass verifications to create the illusion of correctness [8]. Recent studies have revealed early signs of strategic deception [25], where models appear to selectively comply with their training objective during training, potentially to avoid post-deployment modifications to their behavior. As LLM agents become embedded in everyday life, the consequences of such misalignments will become both more subtle and more systemic. In consumer settings, for example, personal assistant agents may learn to prioritize user satisfaction signals, such as agreement or emotional tone, over factual accuracy or sound judgment.
With the continued scaling of model capabilities in generality and autonomy, deeper theoretical and
empirical understanding of deceptive behaviors is required. Such insights are crucial for designing robust reward functions that align more faithfully with human values and intent. Moreover, developing scalable transparency mechanisms, behavioral auditing methods, and interpretability-aware training could form the basis of a new generation of LLM agents.

Training and Runtime Security. Training and runtime security are critical for maintaining long-term information quality in open-world environments. Autonomous agents often operate in unpredictable settings, where they may interact with untrusted data, users, or other agents. During training, malicious actors can compromise model behavior by poisoning the training data [70], manipulating reinforcement feedback [9, 91], or embedding covert backdoors into the model [88]. At runtime, adversaries can exploit agents by injecting adversarial instructions into their input [55, 122]. These adversarial prompts may be embedded in external content (e.g., webpages, documents) or originate from other interacting agents.

To mitigate these threats, integrating anomaly detection and provenance tracking into the training pipeline of LLM agents is essential. These techniques can help identify suspicious data or manipulated reward signals. Additionally, equipping agents with mechanisms for real-time fact-checking, leveraging external tools or trusted sources, can enhance their ability to detect inconsistencies, resist adversarial inputs, and remain aligned with user intent, even in adversarial environments.

4 Scaling Down to Reduce Agent Time and Cost

While recent advances stem from scaling up for information gain, we argue that this capability expansion will be followed by a next stage of scaling down to reduce agent time and cost. This section explores strategies for optimizing agent efficiency along two key dimensions: agent time, the duration required to complete a task, and cost, the interaction overhead of invoking the agent.

4.1 Reducing Agent Time

In the pursuit of maximizing Agentic ROI, reducing agent time, i.e., the computational and reasoning duration an agent requires to complete its task, is also critical. An agent that can reach high-quality decisions or outputs more rapidly not only improves responsiveness but also enhances usability in real-world scenarios. Several orthogonal developments in AI research and system engineering are converging to dramatically reduce agent time.

Memory: Experience Enables Efficiency. Agents endowed with memory can bypass redundant computation by reusing previously acquired knowledge, transforming time complexity from recomputation to retrieval. This design mimics human efficiency [23, 20]: experts solve problems faster not because they compute more, but because they remember better. Memory reduces the time required for task repetition, pattern abstraction, and preference learning, enabling temporal amortization across tasks. Rich memory architectures shift the cost center of intelligence from online reasoning to offline accumulation, compressing agent time via historical synthesis rather than immediate inference (e.g., sleep-time compute [50]). However, integrating a memory module introduces additional framework-design challenges: efficient memory management and memory incorporation during LLM processing are needed to effectively leverage memory for problem solving.

Agent Size: Distillation without Degradation.
Agent time is functionally constrained by the scale of the model and the depth of its activation. Smaller models operate within tighter time budgets, but reducing size generally causes performance degradation [29]. The optimization problem
is thus not to minimize size alone, but to minimize size conditional on performance. Shrinking agent size reduces latency, power consumption, and deployment friction, thereby directly lowering agent time. This requires smarter initialization, transfer, and specialization strategies to retain performance at lower computational depth. Distillation and related techniques can be leveraged to achieve this purpose, as depicted in Figure 2. By transferring capabilities from larger models, distillation enables the creation of smaller, more efficient models without significant loss in effectiveness. This approach has been employed in recent models such as DeepSeek-R1 and the Qwen3 series [26, 105], and we believe it is also used in OpenAI's models.

Thinking Length: From Deep to Efficient Reasoning. Agentic delay is not always computational; it can be cognitive. Long reasoning chains, recursive self-reflection, and redundant planning can inflate agent response time without a proportional improvement in answer quality. Thus, reducing agent time also involves improving reasoning efficiency. Efficient agents are not those that think more, but those that think less but better. Time-optimal agents favor minimal path-length strategies and utility-aware reasoning halts. As a result, the design of the agent's reasoning policy becomes a key lever for compressing agent time.

AI Infrastructure: Hardware Still Matters. Infrastructure underpins the practical ceiling of agent time. Advances in AI-specific hardware (e.g., Groq [28], Cerebras [46]) and the rise of inference-optimized software stacks (e.g., vLLM [40], FlashAttention [14]) directly accelerate token generation and reduce latency bottlenecks. Improving agent time at scale requires co-evolution of software and hardware, including low-latency model execution environments, sparsity-aware compilers, and infrastructure that supports real-time orchestration. Agent usability, particularly in interactive or embedded contexts, is inseparable from the infrastructure's capacity to meet low-latency demands.

4.2 Minimizing Cost

Even a capable agent may have a low ROI due to its prohibitive cost. Cost in this context is divided into two dimensions: interaction time, reflecting the user's interaction overhead of initiating agent behavior, and expense, encompassing the financial cost of executing that behavior.

Interaction Time. Interaction time represents the user-side effort in translating intention into usable instructions. High interaction time implies that the agent demands excessive specificity, verbosity, or trial-and-error for effective operation, which imposes a hidden tax on the human cognitive loop. This burden manifests in the need to engineer, rephrase, clarify, debug, or strategically manipulate prompts, which cumulatively erodes usability and dampens adoption. Reducing interaction time requires moving beyond viewing prompting as a static input interface. Instead, an agent should behave more like a partner than a parser: agents should actively collaborate with users, resolve ambiguity, and contribute to intent clarification in real time. The future of prompt-efficient agents requires a shift from reactive input-taking to proactive goal inference, where agents anticipate user needs and autonomously complete tasks without explicit instructions [57]. Achieving this vision will also require new product designs that reduce user burden: minimizing manual prompting, shortening feedback loops, and embedding agents seamlessly into workflows.
Thinking Length: From Deep to Efficient Reasoning. Agentic delay is not always computational; it can be cognitive. Long reasoning chains, recursive self-reflection, and redundant planning can inflate agent response time without proportional improvement in answer quality. Thus, reducing agent time also involves improving reasoning efficiency. Efficient agents are not those that think more, but those that think less, but better. Time-optimal agents favor minimal path-length strategies and utility-aware reasoning halts. As a result, the design of the agent's reasoning policy becomes a key lever for compressing agent time.

AI Infrastructure: Hardware Still Matters. Infrastructure underpins the practical ceiling of agent time. Advances in AI-specific hardware (e.g., Groq [28], Cerebras [46]) and the rise of inference-optimized software stacks (e.g., vLLM [40], FlashAttention [14]) directly accelerate token generation and reduce latency bottlenecks. Improving agent time at scale requires co-evolution of software and hardware, including low-latency model execution environments, sparsity-aware compilers, and infrastructure that supports real-time orchestration. Agent usability, particularly in interactive or embedded contexts, is inseparable from the infrastructure's capacity to meet low-latency demands.

4.2 Minimizing Cost

Even a capable agent may have a low ROI due to its prohibitive cost. Cost in this context is divided into two dimensions: interaction time, reflecting the user's overhead in initiating agent behavior, and expense, encompassing the financial cost of executing that behavior.

Interaction Time. Interaction time represents the user-side effort in translating intention into usable instructions. High interaction time implies that the agent demands excessive specificity, verbosity, or trial-and-error for effective operation, which imposes a hidden tax on the human cognitive loop. This burden manifests in the need to engineer, rephrase, clarify, debug, or strategically manipulate prompts, which cumulatively erodes usability and dampens adoption. Reducing interaction time requires moving beyond viewing prompting as a static input interface. Instead, an agent should behave as a partner rather than a parser, actively collaborating with users, resolving ambiguity, and contributing to intent clarification in real time. The future of prompt-efficient agents requires a shift from reactive input-taking to proactive goal inference, where agents can anticipate user needs and autonomously complete the task without explicit instructions [57]. Achieving this vision will also require new product designs that reduce user burden: minimizing manual prompting, shortening feedback loops, and embedding agents seamlessly into workflows. Until this paradigm or product shift occurs, interaction cost remains a principal bottleneck to ROI.

Expense. Expense refers to the financial cost incurred in using an agent, introducing a practical budget constraint: even perfect answers are of limited value if unaffordable, especially at scale or under continuous load. Crucially, expense is influenced by model inference time, memory usage, task complexity, reasoning depth, and tool integration. Agents that spawn subprocesses, call external APIs, or manage extended multi-turn contexts often incur compounding costs, which are ultimately reflected in the user's financial expenses. Hence, minimizing agent cost calls for more efficient model deployment, smarter context management, and adaptive computation strategies that balance performance with resource usage.
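To make the compounding-cost point concrete, here is a minimal back-of-the-envelope session cost model. All prices, token counts, and the per-call tool fee are hypothetical placeholders, not actual provider rates.

```python
# Back-of-the-envelope expense model for a multi-turn agent session.
# All prices and counts are hypothetical, not real API rates.
PRICE_IN, PRICE_OUT = 2.0e-6, 8.0e-6    # $ per input/output token (made up)
TOOL_FEE = 0.001                        # flat $ per external tool call (made up)

def turn_cost(context_tokens, output_tokens, tool_calls):
    return (context_tokens * PRICE_IN
            + output_tokens * PRICE_OUT
            + tool_calls * TOOL_FEE)

# The context grows every turn, so input cost compounds across the session.
context, total = 500, 0.0
for _ in range(10):                     # a 10-turn session
    total += turn_cost(context, output_tokens=300, tool_calls=1)
    context += 300 + 200                # reply + new user message carried forward
print(f"estimated session cost: ${total:.4f}")
```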
5 Literature Review

Recent research on LLM agents can be broadly categorized into two main paradigms.

Agent Workflow. This line of work uses predefined paths or pipelines to coordinate between LLMs and a set of tools, often involving deterministic planning, tool invocation, and human-in-the-loop decision making [43, 84]. Agent workflows are built upon human prior knowledge and expert intuition, with developers designing a structured process that guides what the LLM should do at different stages of task execution. A central feature of these workflows is the use of prompts to control the agent's behavior: not just what it does, but when it does it. For example, a prompt may instruct: "If the user asks for today's weather, call the weather API" [94, 86, 31, 33, 44, 120, 90]. Prompts can also guide the agent's reasoning process [68, 76, 106, 110], as seen in paradigms like ReAct [109] and Reflexion [81]. Agent workflows are cost-effective, require minimal fine-tuning, and offer strong controllability by encoding logic directly into prompts or supporting code, which reduces the risk of unpredictable behaviors. As a result, many real-world frameworks adopt this paradigm to incorporate human priors and domain-specific knowledge [1], e.g., LlamaIndex [52] and Open Deep Research [41]. However, rigid workflows can limit a model's flexibility and cap its performance potential. As a result, recent trends in agent workflows have moved toward more abstract and loosely defined structures, allowing greater autonomy during task execution. Early systems like ChatDev [72] used fixed roles and steps, while later frameworks such as LangGraph [42] enable dynamic task decomposition and delegation [110, 83, 79, 113]. Multi-agent systems like MetaGPT [30], AutoGen [98], and CrewAI [12] introduce agent collaboration [27, 117, 4], though each agent still typically follows a predefined workflow, often reflecting hierarchical structures such as supervisor-worker dynamics. Researchers also explore agents that can optimize their own workflows [73, 102, 63, 39], such as FlowAgent [80] and AFlow [114], allowing more autonomy in structuring their processes. However, these agents still operate within predefined, prompt-based frameworks.
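The sketch below illustrates the workflow paradigm just described: control flow is fixed in code, and the model is only invoked at designated steps. The router rule, tool, and function names are hypothetical stand-ins echoing the weather-API example, not any specific framework's API.

```python
# Sketch of a predefined agent workflow: control flow lives in code/prompts,
# not in the model. The router and tools are hypothetical stand-ins.
def call_weather_api(city: str) -> str:        # placeholder tool
    return f"Sunny in {city} (stubbed response)"

def call_llm(prompt: str) -> str:              # placeholder model call
    return f"[LLM answer to: {prompt!r}]"

def run_workflow(user_msg: str) -> str:
    # Deterministic routing rule, mirroring "if the user asks for today's
    # weather, call the weather API"; real systems often encode the same
    # logic in prompts rather than code.
    if "weather" in user_msg.lower():
        observation = call_weather_api("Munich")
        return call_llm(f"Summarize for the user: {observation}")
    return call_llm(user_msg)

print(run_workflow("What's the weather today?"))
```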
Agent Model. Agent models are capable of dynamically directing their own processes and tool usage, as they internalize both planning and action-control capabilities [43, 118, 53]. Rather than layering reasoning mechanisms on top of predefined workflows, agent models are typically trained end-to-end, often through Reinforcement Learning (RL). This training paradigm allows agents to freely explore and learn from the environment without depending on manually specified execution paths. A flagship example is OpenAI's Deep Research model [3], which was built from the ground up (rather than on top of an existing LLM and predefined workflow) to perform search tasks autonomously. Its success has inspired a wave of research focused on improving search agents via RL [85, 36, 10, 35, 121, 118]. Beyond search, many recent works apply similar end-to-end agent optimization in other domains. For example, AGILE [22] constructs a complex conversational agent via RL tuning to internalize various agent abilities, including memory management, tool use, planning, and self-reflection. WebRL [71] trains LLM web agents via self-evolving online curriculum reinforcement learning. ReST [5] refines ReAct agents for multi-step reasoning using an iterative RL-based self-improvement algorithm. RAGEN [96] and SWEET-RL [124] both design specialized multi-turn reinforcement learning algorithms for agents that require multi-round interactions. CORY [59] and MARFT [45] extend these ideas into multi-agent settings, enabling RL-fine-tuned coordination between agents.

6 Alternative Views and Broader Impacts

Alternative Views. An alternative perspective to Agentic ROI may go beyond utility metrics and consider issues such as algorithmic bias, over-reliance, or privacy. In response, we position Agentic ROI as a foundational, but not exhaustive, metric for bridging the current usability gap in real-world deployment. We envision our utility-based approach as one critical component within a broader, multi-dimensional evaluation framework that also incorporates ethical, social, and regulatory factors.

Broader Impacts. The broader impact of Agentic ROI can be understood across several dimensions.

• Guiding Product Design. Agentic ROI is useful not only for describing the usability gap faced by end users, but also for guiding practitioners in designing agent interfaces, workflows, and interaction paradigms.

• Understanding Human-AI Co-Evolution. From a longer-term perspective, the human time (the manual effort required to complete a task without agents) may not be static, but co-evolves with agent time as part of a broader human-AI ecosystem. For example, users may gradually offload cognitive tasks to agents, reducing their own proficiency and thereby increasing the human time.

• Democratizing AI Access. Focusing on Agentic ROI highlights usability barriers for non-expert users. By revealing where the costs outweigh the time savings, it guides efforts toward greater inclusivity, making LLM agents more accessible to the general public, not just technical experts.

• Alternative Research Directions. Finally, this perspective invites a shift in research priorities, from pure model performance to end-to-end utility. It encourages interdisciplinary approaches that integrate economics and systems thinking, fostering a more holistic understanding of what it means for AI agents to serve real-world needs.

7 Conclusion

In this position paper, we argue that the key barrier to the practical usability of LLM agents lies not in model capability alone, but in maximizing the value an agent can provide while minimizing the costs incurred during real-world use. We formalize this usability gap through the Agentic ROI metric, which quantifies the tradeoff between information quality, agent time, and cost. We analyze the development trends in optimizing Agentic ROI and outline future directions for bridging current usability gaps, paving the way for LLM agents that are truly scalable, accessible, and effective in real-world applications.

References

[1] LangChain AI. Langchain, 2025. https://github.com/langchain-ai/langchain?tab=readme-ov-file, Accessed on 2025-05-17.
[2] Open AI. Faulty reward functions in the wild, 2016. https://openai.com/index/faulty-reward-functions/.
[3] Open AI. Deep research system card, 2025. https://cdn.openai.com/deep-research-system-card.pdf, Accessed on 2025-5-17.
[4] Open AI. Swarm (experimental, educational), 2025. https://github.com/openai/swarm, Accessed on 2025-5-17.
[5] Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, et al. Rest meets react: Self-improvement for multi-step reasoning llm agent. arXiv preprint arXiv:2312.10003, 2023.
[6] Ekin Akyürek, Mehul Damani, Linlu Qiu, Han Guo, Yoon Kim, and Jacob Andreas. The surprising effectiveness of test-time training for abstract reasoning, 2024. Preprint at https://arxiv.org/abs/2411.07279.
[7] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
[8] Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, Melody Y Guan, Aleksander Madry, Wojciech Zaremba, Jakub Pachocki, and David Farhi. Monitoring reasoning models for misbehavior and the risks of promoting obfuscation. arXiv preprint arXiv:2503.11926, 2025.
[9] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.
[10] Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z Pan, Wen Zhang, Huajun Chen, Fan Yang, et al. Research: Learning to reason with search for llms via reinforcement learning. arXiv preprint arXiv:2503.19470, 2025.
[11] Yuheng Cheng, Ceyao Zhang, Zhengwen Zhang, Xiangrui Meng, Sirui Hong, Wenhao Li, Zihao Wang, Zekai Wang, Feng Yin, Junhua Zhao, et al. Exploring large language model based intelligent agents: Definitions, methods, and prospects. arXiv preprint arXiv:2401.03428, 2024.
[12] CrewAI. Fast and flexible multi-agent automation framework, 2025. https://github.com/crewAIInc/crewAI, Accessed on 2025-5-17.
[13] Sumit Kumar Dam, Choong Seon Hong, Yu Qiao, and Chaoning Zhang. A complete survey on llm-based ai chatbots. arXiv preprint arXiv:2406.16937, 2024.
[14] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in neural information processing systems, 35:16344–16359, 2022.
[15] Demandsage. Chatgpt statistics (2025): Dau & mau data worldwide. 2025. https://www.demandsage.com/chatgpt-statistics/.
[16] Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Sam Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. Advances in Neural Information Processing Systems, 36:28091–28114, 2023.
[17] Yubo Dong, Xukun Zhu, Zhengzhe Pan, Linchao Zhu, and Yi Yang. VillagerAgent: A graph-based multi-agent framework for coordinating complex task dependencies in Minecraft. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 16290–16314, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[18] Alexandre Drouin, Maxime Gasse, Massimo Caccia, Issam H. Laradji, Manuel Del Verme, Tom Marty, David Vazquez, Nicolas Chapados, and Alexandre Lacoste. WorkArena: How capable are web agents at solving common knowledge work tasks? In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 11642–11662. PMLR, 21–27 Jul 2024.
[19] Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, et al. Agent ai: Surveying the horizons of multimodal interaction. arXiv preprint arXiv:2401.03568, 2024.
[20] Jonathan St BT Evans. In two minds: dual-process accounts of reasoning. Trends in cognitive sciences, 7(10):454–459, 2003.
[21] Kim Falk. Practical recommender systems. Simon and Schuster, 2019.
[22] Peiyuan Feng, Yichen He, Guanhua Huang, Yuan Lin, Hanchong Zhang, Yuchen Zhang, and Hang Li. Agile: A novel framework of llm agents. arXiv e-prints, pages arXiv–2405, 2024.
[23] John DE Gabrieli. Cognitive neuroscience of human memory. Annual review of psychology, 49(1):87–115, 1998.
[24] Google. Gemini deep research—your personal research assistant. 2024. https://gemini.google/overview/deep-research.
[25] Ryan Greenblatt, Carson Denison, Benjamin Wright, Fabien Roger, Monte MacDiarmid, Sam Marks, Johannes Treutlein, Tim Belonax, Jack Chen, David Duvenaud, et al. Alignment faking in large language models. arXiv preprint arXiv:2412.14093, 2024.
[26] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[27] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V Chawla, Olaf Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680, 2024.
[28] Linley Gwennap. Groq rocks neural networks. Microprocessor Report, Tech. Rep, 2020.
[29] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
[30] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 3(4):6, 2023.
[31] Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, and Tomas Pfister. Tool documentation enables zero-shot tool-usage with large language models. arXiv preprint arXiv:2308.00675, 2023.
[32] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022.
[33] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International conference on machine learning, pages 9118–9147. PMLR, 2022.
[34] Lawrence Keunho Jang, Yinheng Li, Charles Ding, Justin Lin, Paul Pu Liang, Dan Zhao, Rogerio Bonatti, and Kazuhito Koishida. Videowebarena: Evaluating long context multimodal agents with video understanding web tasks. In NeurIPS 2024 Workshop on Open-World Agents, 2024.
[35] Pengcheng Jiang, Jiacheng Lin, Lang Cao, Runchu Tian, SeongKu Kang, Zifeng Wang, Jimeng Sun, and Jiawei Han. Deepretrieval: Hacking real search engines and retrievers with large language models via reinforcement learning. arXiv preprint arXiv:2503.00223, 2025.
[36] Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516, 2025.
[37] Jin K Kim, Michael Chua, Mandy Rickard, and Armando Lorenzo. Chatgpt and large language model (llm) chatbots: The current state of acceptability and a proposal for guidelines on utilization in academic medicine. Journal of Pediatric Urology, 19(5):598–604, 2023.
[38] Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Russ Salakhutdinov, and Daniel Fried. VisualWebArena: Evaluating multimodal agents on realistic visual web tasks. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 881–905, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[39] Mandar Kulkarni. Agent-s: Llm agentic workflow to automate standard operating procedures. arXiv preprint arXiv:2503.15520, 2025.
[40] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[41] LangChain. Open deep research, 2025. https://github.com/langchain-ai/open_deep_research, Accessed on 2025-05-17.
[42] LangGraph. Langgraph, 2025. https://langchain-ai.github.io/langgraph/, Accessed on 2025-05-17.
[43] LangGraph. Workflows and agents, 2025. https://langchain-ai.github.io/langgraph/tutorials/workflows/, Accessed on 2025-05-17.
[44] Tao Li, Gang Li, Zhiwei Deng, Bryan Wang, and Yang Li. A zero-shot language agent for computer control with structured reflection. arXiv preprint arXiv:2310.08740, 2023.
[45] Junwei Liao, Muning Wen, Jun Wang, and Weinan Zhang. Marft: Multi-agent reinforcement fine-tuning. arXiv preprint arXiv:2504.16129, 2025.
[46] Sean Lie. Cerebras architecture deep dive: First look inside the hw/sw co-design for deep learning: Cerebras systems. In 2022 IEEE Hot Chips 34 Symposium (HCS), pages 1–34. IEEE Computer Society, 2022.
[47] Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, and Qin Chen. Agentsims: An open-source sandbox for large language model evaluation. arXiv preprint arXiv:2308.04026, 2023.
[48] Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, et al. How can recommender systems benefit from large language models: A survey. ACM Transactions on Information Systems, 43(2):1–47, 2025.
[49] Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, and Weinan Zhang. Rella: Retrieval-enhanced large language models for lifelong sequential behavior comprehension in recommendation. In Proceedings of the ACM Web Conference 2024, pages 3497–3508, 2024.
[50] Kevin Lin, Charlie Snell, Yu Wang, Charles Packer, Sarah Wooders, Ion Stoica, and Joseph E Gonzalez. Sleep-time compute: Beyond inference scaling at test-time. arXiv preprint arXiv:2504.13171, 2025.
[51] Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In International Conference on Learning Representations (ICLR), 2018.
[52] Jerry Liu. LlamaIndex, 11 2022.
[53] Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, et al. Toolace: Winning the points of llm function calling. arXiv preprint arXiv:2409.00920, 2024.
[54] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating LLMs as agents. In The Twelfth International Conference on Learning Representations, 2024.
[55] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. Formalizing and benchmarking prompt injection attacks and defenses. In 33rd USENIX Security Symposium (USENIX Security 24), pages 1831–1847, 2024.
[56] Xing Han Lu, Zdeněk Kasner, and Siva Reddy. WebLINX: Real-world website navigation with multi-turn dialogue. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 33007–33056. PMLR, 21–27 Jul 2024.
[57] Yaxi Lu, Shenzhi Yang, Cheng Qian, Guirong Chen, Qinyu Luo, Yesai Wu, Huadong Wang, Xin Cong, Zhong Zhang, Yankai Lin, et al. Proactive agent: Shifting llm agents from reactive responses to active assistance. arXiv preprint arXiv:2410.12361, 2024.
[58] Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. Reft: Reasoning with reinforced fine-tuning. arXiv preprint arXiv:2401.08967, 3, 2024.
[59] Hao Ma, Tianyi Hu, Zhiqiang Pu, Boyin Liu, Xiaolin Ai, Yanyan Liang, and Min Chen. Coevolving with the other you: Fine-tuning llm with sequential cooperative multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 37:15497–15525, 2024.
[60] Golla Madhu, Dr A Govardhan, and Dr TV Rajinikanth. Intelligent semantic web search engines: a brief survey. arXiv preprint arXiv:1102.0831, 2011.
[61] mcbench.ai. Mc-bench: Which ai generated this minecraft build better? https://mcbench.ai/, 2024.
[62] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
[63] Boye Niu, Yiliao Song, Kai Lian, Yifan Shen, Yu Yao, Kun Zhang, and Tongliang Liu. Flow: Modularized agentic workflow automation. In The Thirteenth International Conference on Learning Representations, 2025.
[64] Business of Apps. Tiktok revenue and usage statistics (2025). 2025. https://www.businessofapps.com/data/tik-tok-statistics/.
[65] OpenAI. Introducing deep research. 2025. https://openai.com/index/introducing-deep-research/.
[66] Yichen Pan, Dehan Kong, Sida Zhou, Cheng Cui, Yifei Leng, Bing Jiang, Hangyu Liu, Yanyi Shang, Shuyan Zhou, Tongshuang Wu, and Zhengyang Wu. Webcanvas: Benchmarking web agents in online environments, 2024.
[67] Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th annual acm symposium on user interface software and technology, pages 1–22, 2023.
[68] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.
[69] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8494–8502, 2018.
[70] Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023.
[71] Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Wenyi Zhao, Yu Yang, Xinyue Yang, Jiadai Sun, Shuntian Yao, et al. Webrl: Training llm web agents via self-evolving online curriculum reinforcement learning. arXiv preprint arXiv:2411.02337, 2024.
[72] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, et al. Chatdev: Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
[73] Shuofei Qiao, Runnan Fang, Zhisong Qiu, Xiaobin Wang, Ningyu Zhang, Yong Jiang, Pengjun Xie, Fei Huang, and Huajun Chen. Benchmarking agentic workflow generation. arXiv preprint arXiv:2410.07869, 2024.
[74] Yuehan Qin, Shawn Li, Yi Nian, Xinyan Velocity Yu, Yue Zhao, and Xuezhe Ma. Don't let it hallucinate: Premise verification via retrieval-augmented logical reasoning. arXiv preprint arXiv:2504.06438, 2025.
[75] Christopher Rawles, Sarah Clinckemaillie, Yifan Chang, Jonathan Waltz, Gabrielle Lau, Marybeth Fair, Alice Li, William Bishop, Wei Li, Folawiyo Campbell-Ajala, et al. Androidworld: A dynamic benchmarking environment for autonomous agents. arXiv preprint arXiv:2405.14573, 2024.
[76] Matthew Renze and Erhan Guven. Self-reflection in llm agents: Effects on problem-solving performance. arXiv preprint arXiv:2405.06682, 2024.
[77] Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao, et al. Tptu: Task planning and tool usage of large language model-based ai agents. In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023.
[78] Stefan Schneider, Stefan Werner, Ramin Khalili, Artur Hecker, and Holger Karl. mobile-env: An open platform for reinforcement learning in wireless mobile networks. In NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, pages 1–3. IEEE, 2022.
[79] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face. Advances in Neural Information Processing Systems, 36:38154–38180, 2023.
[80] Yuchen Shi, Siqi Cai, Zihan Xu, Yuei Qin, Gang Li, Hang Shao, Jiawei Chen, Deqing Yang, Ke Li, and Xing Sun. Flowagent: Achieving compliance and flexibility for workflow agents. arXiv preprint arXiv:2502.14345, 2025.
[81] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652, 2023.
[82] Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations.
[83] Significant-Gravitas. Autogpt: Build, deploy, and run ai agents, 2025. https://github.com/Significant-Gravitas/AutoGPT, Accessed on 2025-05-06.
[84] Aditi Singh, Abul Ehtesham, Saket Kumar, and Tala Talaei Khoei. Enhancing ai systems with agentic workflows patterns in large language model. In 2024 IEEE World AI IoT Congress (AIIoT), pages 527–532. IEEE, 2024.
[85] Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-searcher: Incentivizing the search capability in llms via reinforcement learning. arXiv preprint arXiv:2503.05592, 2025.
[86] Claudio Spiess, Mandana Vaziri, Louis Mandel, and Martin Hirzel. Autopdl: Automatic prompt optimization for llm agents. arXiv preprint arXiv:2504.04365, 2025.
[87] Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: A reinforcement learning platform for android. arXiv preprint arXiv:2105.13231, 2021.
[88] Florian Tramèr and Javier Rando Ramirez. Universal jailbreak backdoors from poisoned human feedback. In The Twelfth International Conference on Learning Representations (ICLR 2024). OpenReview, 2024.
[89] Harsh Trivedi, Tushar Khot, Mareike Hartmann, Ruskin Manku, Vinty Dong, Edward Li, Shashank Gupta, Ashish Sabharwal, and Niranjan Balasubramanian. Appworld: A controllable world of apps and people for benchmarking interactive coding agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16022–16076, 2024.
[90] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models, 2023. URL https://arxiv.org/abs/2305.16291.
[91] Jiongxiao Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, and Chaowei Xiao. Rlhf-poison: Reward poisoning attack for reinforcement learning with human feedback in large language models. arXiv preprint arXiv:2311.09641, 2023.
[92] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345, 2024.
[93] Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, and Ji-Rong Wen. Recagent: A novel simulation paradigm for recommender systems. arXiv preprint arXiv:2306.02552, 2023.
[94] Qian Wang, Tianyu Wang, Zhenheng Tang, Qinbin Li, Nuo Chen, Jingsheng Liang, and Bingsheng He. All it takes is one prompt: An autonomous llm-ma system. In ICLR 2025 Workshop on Foundation Models in the Wild, 2025.
[95] Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. Scienceworld: Is your agent smarter than a 5th grader? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11279–11298, 2022.
[96] Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, et al. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073, 2025.
[97] Michel Wermelinger. Using github copilot to solve simple programming problems. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pages 172–178, 2023.
[98] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, et al. Autogen: Enabling next-gen llm applications via multi-agent conversation. arXiv preprint arXiv:2308.08155, 2023.
[99] Yunjia Xi, Jianghao Lin, Menghui Zhu, Yongzhao Xiao, Zhuoying Ou, Jiaqi Liu, Tong Wan, Bo Chen, Weiwen Liu, Yasheng Wang, Ruiming Tang, Weinan Zhang, and Yong Yu. Infodeepseek: Benchmarking agentic information seeking for retrieval-augmented generation. arXiv preprint arXiv:2505.15872, 2025.
[100] Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. Towards open-world recommendation with knowledge augmentation from large language models. In Proceedings of the 18th ACM Conference on Recommender Systems, pages 12–22, 2024.
[101] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. Science China Information Sciences, 68(2):121101, 2025.
[102] Ruixuan Xiao, Wentao Ma, Ke Wang, Yuchuan Wu, Junbo Zhao, Haobo Wang, Fei Huang, and Yongbin Li. Flowbench: Revisiting and benchmarking workflow-guided planning for llm-based agents. arXiv preprint arXiv:2406.14884, 2024.
[103] Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, Yitao Liu, Yiheng Xu, Shuyan Zhou, Silvio Savarese, Caiming Xiong, Victor Zhong, and Tao Yu. Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 52040–52094. Curran Associates, Inc., 2024.
[104] Frank F Xu, Yufan Song, Boxuan Li, Yuxuan Tang, Kritanjali Jain, Mengxue Bao, Zora Z Wang, Xuhui Zhou, Zhitong Guo, Murong Cao, et al. Theagentcompany: benchmarking llm agents on consequential real world tasks. arXiv preprint arXiv:2412.14161, 2024.
[105] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025.
[106] Hui Yang, Sifu Yue, and Yunzhong He. Auto-gpt for online decision making: Benchmarks and additional opinions. arXiv preprint arXiv:2306.02224, 2023.
[107] Yingxuan Yang, Huacan Chai, Yuanyi Song, Siyuan Qi, Muning Wen, Ning Li, Junwei Liao, Haoyi Hu, Jianghao Lin, Gaowei Chang, et al. A survey of ai agent protocols. arXiv preprint arXiv:2504.16736, 2025.
[108] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. In Advances in Neural Information Processing Systems, volume 35, pages 20744–20757, 2022.
[109] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[110] yoheinakajima. Babyagi, 2025. https://github.com/yoheinakajima/babyagi, Accessed on 2025-05-06.
[111] Ori Yoran, Samuel Joseph Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, and Jonathan Berant. AssistantBench: Can web agents solve realistic and time-consuming tasks? In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8938–8968, Miami, Florida, USA, November 2024. Association for Computational Linguistics.
[112] Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, and Michael Bendersky. Inference scaling for long-context retrieval augmented generation. arXiv preprint arXiv:2410.04343, 2024.
[113] Chi Zhang, Zhao Yang, Jiaxuan Liu, Yanda Li, Yucheng Han, Xin Chen, Zebiao Huang, Bin Fu, and Gang Yu. Appagent: Multimodal agents as smartphone users. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–20, 2025.
[114] Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xionghui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, et al. Aflow: Automating agentic workflow generation, 2024. URL https://arxiv.org/abs/2410.10762.
[115] Weinan Zhang, Junwei Liao, Ning Li, Kounianhua Du, and Jianghao Lin. Agentic information retrieval. arXiv preprint arXiv:2410.09713, 2024.
[116] Xinnong Zhang, Jiayu Lin, Xinyi Mou, Shiyue Yang, Xiawei Liu, Libo Sun, Hanjia Lyu, Yihang Yang, Weihong Qi, Yue Chen, et al. Socioverse: A world model for social simulation powered by llm agents and a pool of 10 million real-world users. arXiv preprint arXiv:2504.10157, 2025.
[117] Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, and Sercan Arik. Chain of agents: Large language models collaborating on long-context tasks. Advances in Neural Information Processing Systems, 37:132208–132237, 2024.
[118] Yuxiang Zhang, Yuqi Yang, Jiangming Shu, Xinyan Wen, and Jitao Sang. Agent models: Internalizing chain-of-action generation into reasoning models. arXiv preprint arXiv:2503.06580, 2025.
[119] Zijian Zhang, Shuchang Liu, Ziru Liu, Rui Zhong, Qingpeng Cai, Xiangyu Zhao, Chunxu Zhang, Qidong Liu, and Peng Jiang. Llm-powered user simulator for recommender system. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 13339–13347, 2025.
[120] Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. Expel: Llm agents are experiential learners. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19632–19642, 2024.
[121] Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. Deepresearcher: Scaling deep research via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160, 2025.
[122] Peter Yong Zhong, Siyuan Chen, Ruiqi Wang, McKenna McCall, Ben L Titzer, Heather Miller, and Phillip B Gibbons. Rtbas: Defending llm agents against prompt injection and privacy leakage. arXiv preprint arXiv:2502.08966, 2025.
[123] Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. Webarena: A realistic web environment for building autonomous agents. In The Twelfth International Conference on Learning Representations, 2024.
[124] Yifei Zhou, Song Jiang, Yuandong Tian, Jason Weston, Sergey Levine, Sainbayar Sukhbaatar, and Xian Li. Sweet-rl: Training multi-turn llm agents on collaborative reasoning tasks. arXiv preprint arXiv:2503.15478, 2025.
[125] Yuxin Zuo, Kaiyan Zhang, Shang Qu, Li Sheng, Xuekai Zhu, Biqing Qi, Youbang Sun, Ganqu Cui, Ning Ding, and Bowen Zhou. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025.
EXECUTE: A Multilingual Benchmark for LLM Token Understanding

Lukas Edman (1,3), Helmut Schmid (2), Alexander Fraser (1,3,4)
1 School of Computation, Information and Technology, TU Munich
2 Center for Information and Language Processing, LMU Munich
3 Munich Center for Machine Learning
4 Munich Data Science Institute
lukas.edman@tum.de, schmid@cis.lmu.de

Abstract

The CUTE benchmark showed that LLMs struggle with character understanding in English. We extend it to more languages with diverse scripts and writing systems, introducing EXECUTE. Our simplified framework allows easy expansion to any language. Tests across multiple LLMs reveal that challenges in other languages are not always on the character level as in English. Some languages show word-level processing issues, some show no issues at all. We also examine sub-character tasks in Chinese, Japanese, and Korean to assess LLMs' understanding of character components.

1 Introduction

LLMs perform well on many tasks but struggle when they are asked to manipulate character sequences, as shown by the CUTE benchmark (Edman et al., 2024). While CUTE tested Russian, showing this issue is not language-specific, it failed to consider other linguistic differences that may affect results. Language variation extends beyond script differences to writing system differences. English and Russian use alphabets. Other languages use abugidas, where letters are not strictly ordered within syllables, or abjads, which mark vowels with diacritics or not at all. Chinese uses a logographic script, where most words are just 1-2 characters long. Multilingual LLMs allocate tokens unevenly across languages: high-resource languages are well represented, but some low-resource languages are mainly processed at the byte level.

We explore these languages in our benchmark EXECUTE: the Expandable X(Cross)-Lingual Extension of CUTE (https://github.com/Leukas/EXECUTE). We mainly look at 8 languages, shown in Table 1, which vary in script, writing system, tokenization, and resourcedness. We also provide a framework for adding languages to make this benchmark easily expandable.

Language | Script | Writing System | c/w | t/w | c/t
Amharic | Ge'ez | Abugida | 3.71 | 7.69 | 0.48
Arabic | Arabic | Abjad | 4.63 | 2.43 | 1.90
Chinese | Simpl. Han | Logographic | 1.51 | 1.25 | 1.20
English | Latin | Alphabet | 4.04 | 1.32 | 3.05
Hindi | Devanagari | Abugida | 3.66 | 2.80 | 1.31
Japanese | Japanese | Mixed | 1.54 | 1.27 | 1.22
Korean | Hangul | Featural | 3.38 | 2.71 | 1.25
Russian | Cyrillic | Alphabet | 5.06 | 2.36 | 2.14
Table 1: CWT statistics of EXECUTE's languages. c, w, and t denote characters, words, and tokens. c/w refers to the average characters per word. t is the average token count across the 5 tokenizers used by the models.

In our results and analysis, we find that:
1. Benchmark results for non-English languages often differ from the English results.
2. The results correlate with the languages' CWT (character-word-token) statistics (see Table 1).
3. Surprisingly, the less an LLM knows a language, the better it performs on EXECUTE.
4. LLMs struggle with understanding sub-character components (see Figure 1).

Our results provide more insight into how LLMs process tokens on different granularities.
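As a rough illustration of how CWT statistics like those in Table 1 can be computed, the sketch below derives c/w, t/w, and c/t for a list of texts. It assumes whitespace word segmentation and a single Hugging Face tokenizer, whereas the paper segments Chinese and Japanese with dedicated tools and averages token counts over five tokenizers; the tokenizer name here is an arbitrary placeholder.

```python
# Sketch of computing c/w, t/w, and c/t statistics for a corpus.
# Assumes whitespace word segmentation (the paper uses jieba/nagisa for
# Chinese/Japanese) and one Hugging Face tokenizer rather than an average
# over the five model families' tokenizers.
from transformers import AutoTokenizer

def cwt_stats(texts, tokenizer_name="gpt2"):   # tokenizer choice is arbitrary
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    chars = words = tokens = 0
    for text in texts:
        ws = text.split()
        chars += sum(len(w) for w in ws)       # characters, excluding spaces
        words += len(ws)
        tokens += len(tok.encode(text))
    return chars / words, tokens / words, chars / tokens  # c/w, t/w, c/t

print(cwt_stats(["Once upon a time there was a tiny robot."]))
```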
2 Related Works

Our work builds upon the CUTE benchmark (Edman et al., 2024), which showed that LLMs struggle with character manipulation tasks. CUTE was mainly created for English but also included Russian tasks, showing similar results. Similar studies probe models to spell or modify text on the character level, but either first train the model (Itzhak and Levy, 2022; Kaushal and Mahowald, 2022), or focus on other topics than orthography (Huang et al., 2023; Efrat et al., 2023).

Research on error correction, including spelling correction, has been done for many languages. Maxutov et al. (2024) found spelling correction to be "hard" for LLMs in Kazakh. Li et al. (2023) reported that LLMs perform worse than fine-tuned models for Chinese spelling correction. Similarly, Kwon et al. (2023) showed that fine-tuned models outperform prompted LLMs for Arabic. Spelling correction requires both character-level and semantic knowledge to determine the correct replacement. EXECUTE, like CUTE, aims to remove contextual semantic understanding from the benchmark.

Our sub-character experiments build on work by Wu et al. (2025), who released a detailed analysis of the information in Chinese characters. Our character-to/from-radical tasks resemble theirs, but they focus on simplified Chinese, while we also examine traditional characters via Japanese Kanji. Character-level LLMs have been proposed as a solution to CUTE and have been shown to outperform subword LLMs in Pagnoni et al. (2024).

[Figure 1: EXECUTE benchmark. Prompts shortened for brevity. Example of full prompt in Appendix D.]

3 Benchmark

Figure 1 exemplifies our EXECUTE benchmark. We use the same composition and manipulation tasks as CUTE but drop the similarity tasks, which require static embeddings (such as word2vec) and fluent speakers to define similarity thresholds, which vary by language and lack clear criteria. Their removal makes EXECUTE easier to expand. Adding a new language X now only requires an English→X translation system. As cross-language alignment is not crucial, translations do not need to be perfect: grammaticality is preferable but not necessary. We modify prompt examples and the dataset used, so English and Russian results differ from CUTE's. (As our changes are minor, users of the English and Russian datasets should cite Edman et al. (2024).) Although perfect translations are not required, we have fluent speakers verify that most translations preserve meaning and grammar. Table 1 lists these languages, covering eight major scripts and all known writing systems. While some widely used languages (e.g. Spanish) are missing, their script is represented, and Appendix B shows the performance of languages using the same script is highly correlated.

We keep the prompt texts in English but use language-specific examples, since fully Russian prompts did not improve performance for Russian (Edman et al., 2024). It also ensures that the LLMs understand the task consistently across languages.

3.1 Data Preparation

We now describe the exact differences in preprocessing steps between our benchmark and the CUTE benchmark. Although there are several changes, we find that the scores from CUTE and EXECUTE are still largely comparable, as shown in Appendix A.

To start, we use an updated subset of 5000 stories from the TinyStories dataset (Eldan and Li, 2023), which used GPT-4 to produce outputs rather than the GPT-3.5 outputs used in CUTE. We find this dataset to be cleaner (with no random foreign characters), and it is purported by the dataset authors to also be of higher quality. For non-English languages, we translate all the stories using Google Translate. At this point, for Chinese and Japanese, it is necessary to apply word segmentation. For Chinese, we use jieba (https://github.com/fxsjy/jieba), and for Japanese we use nagisa (https://github.com/taishi-i/nagisa).

We then generate a character set and vocabulary from the translated stories to use for our tasks. This is unlike CUTE, which predefined alphabets and vocabularies taken from the Trillion Word Corpus and Wikipedia. This change is necessary as it is more difficult to define a strict character set for some languages, and also more difficult to find a vocabulary. The CUTE benchmark also removed words of fewer than 3 characters from the vocabulary to maintain a level of difficulty for the tasks. We remove this cutoff for Chinese, Japanese, and Korean, as it is too restrictive.

As the prompts are few-shot, we require language-specific examples in each prompt. For CUTE, these examples were created manually. Instead, we generate 4 additional examples in the same manner as our test set, with a few additional stipulations:
• At least 2 examples must use a word that contains duplicate letters.
• At least 1 example must operate on the duplicate letters when applicable.
• For the contains tasks, 2 examples must have the label "yes" and 2 "no".

We specify the duplicate letter restrictions so that the LLM understands that it must modify all of the targeted characters. The first two restrictions were not applied for Chinese, however, as it is exceedingly rare for a Chinese word to contain duplicate characters. The last restriction is intended to ensure the model is not biased to answering either "yes" or "no" due to its frequency in the examples, a phenomenon which has been shown to be problematic in Zhao et al. (2021).

Diacritics. Abugidas such as Hindi have diacritics to mark vowel sounds, aspirations, and nasalizations. Due to the complex rules surrounding valid diacritics, which also vary between languages, we opt to consider each "character" as the letter plus any diacritics attached, also known as the grapheme. This is already the case for Amharic, as the diacritics have become fused with consonants in the Ge'ez script itself.
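The grapheme definition above can be operationalized in several ways; the paper does not specify its implementation. One common option, sketched below, is the third-party regex package, whose \X pattern matches extended grapheme clusters so that letters stay fused with their attached diacritics.

```python
# Sketch of grapheme-level "character" splitting for abugidas, using the
# third-party `regex` module's \X (extended grapheme cluster). The paper
# does not specify its tooling; this is one common way to do it.
import regex

def graphemes(word: str):
    return regex.findall(r"\X", word)

print(graphemes("हिन्दी"))   # Devanagari: letters stay fused with their diacritics
print(list("हिन्दी"))        # naive code-point split, for comparison
```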
3.2 Sub-Character Experiments

Chinese, Japanese, and to a lesser extent Korean, have few characters per word, so we add language-specific tasks to assess their understanding of character components.

In Chinese, each character can be broken down into parts known as Kangxi radicals. An example of a decomposition is: 晚 → ⿰日免, where ⿰ indicates that 日 should be placed to the left of 免. The radicals often have a related meaning to the composite: 晚 means evening, 日 means sun, and 免 means avoid. Japanese Kanji characters originate from traditional Chinese characters and can also be decomposed into radicals. Korean Hangul characters denote syllables and can be split into Jamo, which correspond to phonemes. For example, 둘 (dul) becomes ㄷ (d), ㅜ (u), and ㄹ (l).
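Both decompositions can be illustrated programmatically. Hangul syllables decompose into Jamo by simple Unicode arithmetic over the Hangul syllable block, as sketched below; radical decomposition, by contrast, needs a lookup table, shown here as a one-entry toy dictionary (a real setup would use a full decomposition database).

```python
# Sketch of the sub-character decompositions described above. Hangul
# syllables decompose arithmetically (Unicode NFD also works); the radical
# mapping is a tiny hand-written stand-in, not a full Kangxi database.
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"          # 19 initial consonants
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"     # 21 vowels
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 finals

def hangul_to_jamo(syllable: str):
    # Assumes a precomposed syllable in the U+AC00..U+D7A3 block.
    i = ord(syllable) - 0xAC00
    cho, rest = divmod(i, 21 * 28)
    jung, jong = divmod(rest, 28)
    return [CHO[cho], JUNG[jung]] + ([JONG[jong]] if jong else [])

print(hangul_to_jamo("둘"))                     # ['ㄷ', 'ㅜ', 'ㄹ']

RADICALS = {"晚": ["日", "免"]}                 # toy entry from the text's example
print(RADICALS["晚"])
```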
We test the LLMs' ability to compose and decompose CJK characters into their components. For Chinese and Japanese, we ask the model to split characters into Kangxi radicals, and vice versa. (One can further split Kangxi radicals down to strokes, but this showed very poor performance in initial tests.) Similarly, we decompose Hangul characters to Jamo and vice versa. These tasks are analogous to the spelling and inverse_spelling tasks. We further add a task (similar to contains) which asks if a character contains a Kangxi radical or Jamo. Japanese can either be written with Kanji characters or with phonetic Hiragana characters. We test LLMs' ability to convert Kanji in Appendix C.

3.3 Models

We test five families of popular open-source multilingual LLMs: Aya Expanse, Gemma 2, Llama 3.1 and 3.3, Qwen 2.5, and Mistral (Dang et al., 2024; Gemma Team et al., 2024; Dubey et al., 2024; Qwen et al., 2025; Jiang et al., 2023). Their sizes range from 7B to 70B parameters, and their vocabularies contain between 128k and 256k tokens.

4 Results

Task | Amh | Tzm | Sat | Eng (Cipher) | Eng (Byte) | Eng (Reg)
Spell | 96.3 | 100.0 | 97.6 | 100.0 | 85.0 | 99.5
Inv Spell | 99.8 | 100.0 | 99.3 | 100.0 | 0.0 | 99.6
Cont Char | 91.8 | 100.0 | 98.0 | 98.6 | 82.8 | 75.7
Cont Word | 99.6 | 99.2 | 98.9 | 99.0 | 96.7 | 99.9
Ins Char | 97.8 | 97.8 | 98.2 | 98.6 | 20.9 | 13.5
Ins Word | 92.8 | 94.3 | 91.9 | 97.1 | 1.4 | 96.6
Del Char | 97.6 | 99.7 | 98.7 | 98.9 | 78.8 | 67.5
Del Word | 97.6 | 76.2 | 88.8 | 95.6 | 3.7 | 96.5
Sub Char | 96.6 | 98.4 | 98.3 | 95.5 | 61.5 | 51.4
Sub Word | 96.2 | 96.6 | 90.1 | 98.4 | 5.9 | 98.5
Swap Char | 93.7 | 97.6 | 92.8 | 98.3 | 29.0 | 12.7
Swap Word | 97.3 | 87.9 | 90.0 | 95.9 | 6.6 | 90.9
Avg | 96.4 | 95.6 | 95.2 | 98.0 | 39.4 | 75.2
Table 2: Llama 3.3 on low-resource languages (Amharic, Tamazight, Santali) and the ciphered, byte-level, and regular English variants.

We first examine results by language, showing the best model performance for each in Figure 2. (We show the results per task, as well as results for Aya and Mistral, in Appendix E.) Russian and Arabic results resemble English results. Hindi and Korean perform better at the word level than the character level, though the gap is smaller than for English, with stronger results in character-level insertion and swapping. Japanese and Chinese perform better on the character level, which is expected since each character is a word or almost a word. However, word-level tasks may simply be harder in these languages, as they require modifying multiple tokens instead of just one.

[Figure 2: The best result of all models for each language and task (accuracy in %), with panels for Amharic, Arabic, Chinese, English, Hindi, Japanese, Korean, and Russian and word- and character-level results for the Spell, Inv Spell, Cont, Ins, Del, Sub, and Swap tasks.]

4.1 Amharic and Low-Resourcedness

Amharic stands out from the rest of the results in that the performance is nearly perfect in the best-case scenario. This is particularly surprising as Amharic is the lowest-resource language of the 8, and most characters are split into bytes by the tokenizers, meaning each character is represented by 3 tokens. We suspect that the good performance might actually be because of this low-resourcedness. As seen in Edman et al. (2024), and also observed in this work, LLMs are biased to generating real words and grammatical sentences.
Their lack of understanding of Amharic might weaken this bias.

Lang | Gemma 2 9B | Gemma 2 27B | Llama 3.1 8B | Llama 3.1 70B | Llama 3.3 70B | Qwen 2.5 7B | Qwen 2.5 32B
Amh | 80.5 | 85.3 | 75.7 | 95.9 | 96.4 | 41.9 | 74.4
Ara | 51.6 | 62.3 | 52.1 | 68.1 | 67.8 | 47.2 | 68.6
Zho | 70.2 | 74.4 | 71.3 | 81.1 | 79.7 | 70.4 | 83.6
Eng | 64.8 | 71.6 | 61.9 | 75.7 | 75.2 | 62.1 | 77.3
Hin | 47.9 | 47.1 | 43.8 | 54.0 | 56.4 | 43.5 | 86.2
Jpn | 60.1 | 65.2 | 58.8 | 73.1 | 74.6 | 62.1 | 77.9
Kor | 73.6 | 80.8 | 62.1 | 76.9 | 76.1 | 60.2 | 80.8
Rus | 53.6 | 62.6 | 51.0 | 67.8 | 67.8 | 52.1 | 71.2
Avg | 62.8 | 68.7 | 59.6 | 74.1 | 74.3 | 54.9 | 77.5
Table 3: Average score per language. Best in bold.

We provide further evidence that language knowledge inversely correlates with EXECUTE performance by adding two low-resource languages, Tamazight and Santali. Their unique scripts (Tifinagh and Ol Chiki) are not used by any higher-resource languages, forcing LLM tokenizers to operate at the byte level. These languages were likely seen rarely, if ever, during training. We compare results on Amharic's best-performing model, Llama 3.3. Additionally, we test two variations of English: one encodes text using a cipher that maps Latin to Amharic characters, and the other forces the inputs to be byte-level (retaining the Latin alphabet). These experiments assess whether byte-level operation alone improves performance or if eliminating English recognition via ciphering is also necessary. We expect ciphered English to perform similarly to Amharic.
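A cipher of the kind described can be built as a fixed one-to-one substitution between the Latin alphabet and 26 Ge'ez characters, as sketched below. The particular target characters are arbitrary placeholder choices; the paper's actual mapping may differ.

```python
# Sketch of the Latin-to-Amharic cipher idea: a fixed one-to-one character
# substitution that removes "Englishness" while keeping task structure.
# The 26 target characters here are arbitrary; the paper's mapping may differ.
latin = "abcdefghijklmnopqrstuvwxyz"
geez = "ሀለሐመሠረሰሸቀበተቸኀነኘአከኸወዐዘዠየደዸጀ"
assert len(geez) == len(latin)

ENC = str.maketrans(latin, geez)
DEC = str.maketrans(geez, latin)

print("hello".translate(ENC))                 # ciphered input for the LLM
print("hello".translate(ENC).translate(DEC))  # round-trips back to "hello"
```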
Table 2 shows that Llama achieves near-perfect results in the low-resource languages, as well as the ciphered English. Byte-level English improves character tasks but fails on word tasks, partly due to bias. The model rarely sees English at the byte level except in social media, leading to random casing and antspeak (extra spacing) in the output. Degenerate output (e.g. "1 1 1 ...") also occurs. Some fine-tuning with this byte-level approach would likely increase performance considerably.

Does Amharic's near-perfect score mean character- and word-level processing is solved? No, it shows LLMs can perform arbitrary manipulations but are hampered by their language understanding. As training data increases, Amharic performance will likely decline. So, this benchmark should complement standard NLU benchmarks for a complete assessment.

4.2 Language Clusters

Table 1 groups languages with similar CWT statistics into five categories: 1) Arabic & Russian, 2) Hindi & Korean, 3) Japanese & Chinese, 4) Amharic, and 5) English. Their similar benchmark performance suggests that segmentation, whether natural or from tokenization, impacts results. As expected, the statistics of Tamazight (4.37, 8.83, 0.49) and Santali (3.54, 8.38, 0.42) closely align with Amharic.

4.3 Model Performance

Table 3 shows model performance. Larger models generally perform better. However, this trend does not hold across model families, as Qwen 2.5 (32B) outperforms the larger 70B Llama 3 models. Llama 3.3, despite its stronger performance than Llama 3.1 on standard benchmarks, performs similarly here.

Lang, Task | Aya 8B | Aya 32B | Gemma 2 9B | Gemma 2 27B | Llama 3.1 8B | Llama 3.1 70B | Llama 3.3 70B | Qwen 2.5 7B | Qwen 2.5 32B | Mistral 8B | Mistral 24B
Zho, Char to Rad | 0.0 | 2.0 | 0.0 | 0.8 | 0.0 | 1.4 | 1.8 | 1.4 | 16.4 | 0.8 | 3.5
Zho, Rad to Char | 1.4 | 8.2 | 2.5 | 0.8 | 2.5 | 10.7 | 11.9 | 7.6 | 22.8 | 2.5 | 8.2
Zho, Contains Rad | 55.4 | 65.5 | 81.1 | 79.3 | 69.4 | 73.5 | 72.9 | 68.0 | 78.8 | 62.2 | 74.5
Jpn, Char to Rad | 0.0 | 0.7 | 0.7 | 0.0 | 0.0 | 0.0 | 2.2 | 2.2 | 9.2 | 0.4 | 3.7
Jpn, Rad to Char | 2.6 | 8.5 | 2.2 | 0.4 | 1.9 | 13.7 | 13.3 | 5.2 | 20.3 | 1.5 | 7.4
Jpn, Contains Rad | 57.9 | 61.6 | 86.4 | 72.7 | 69.7 | 73.1 | 76.0 | 73.4 | 76.0 | 65.7 | 83.4
Kor, Hangul to Jamo | 7.5 | 48.1 | 54.6 | 65.3 | 24.5 | 48.8 | 45.6 | 24.7 | 57.4 | 35.6 | 66.4
Kor, Jamo to Hangul | 24.7 | 49.0 | 47.2 | 63.9 | 41.7 | 24.5 | 24.3 | 25.9 | 42.2 | 28.1 | 51.5
Kor, Contains Jamo | 63.7 | 76.2 | 93.2 | 96.6 | 92.1 | 87.5 | 90.3 | 75.3 | 93.4 | 73.2 | 88.7
Table 4: Sub-character-level results on CJK languages. Best in bold.

Edman et al. (2024) found that more training data improved results on CUTE, but we find no such trend. Among 7-9B models, Gemma was trained on 8T tokens, Llama on 15T, and Qwen on 18T (Gemma Team et al., 2024; Qwen et al., 2025; Dubey et al., 2024), yet their performance is inversely correlated. While this may be coincidental, results on Amharic, Tamazight, and Santali raise doubts about whether more training data improves performance on this benchmark.

4.4 Sub-Character Performance

Table 4 shows sub-character results. For Japanese and Chinese, models struggle to translate characters to and from their radical components but perform better on the Contains task, as it only requires identifying one radical. While some characters, like 晚 (evening), have components that clearly contribute to meaning, others are more ambiguous. For example, 木 (tree) is likely easier for models to identify in 樟 (camphor tree) compared to 章 (chapter, seal).

LLMs are notably better at converting between Hangul and Jamo, likely due to Hangul's simpler structure or its more frequent decomposition in training data. However, the conversion still falls short of the near-perfect scores seen in the main Spelling and Inverse Spelling tasks.

5 Conclusion

We present a multilingual, multi-script extension of the CUTE benchmark to test token understanding in a variety of languages. The benchmark is designed to be easily expanded to new languages, allowing the token understanding of LLMs to be tested in any language. Our findings show that manipulation on the character level is challenging in some non-English languages, but word-level manipulation is challenging for some languages too. Understanding the components of characters in Chinese, Japanese, and Korean is also lacking. The performance of a language can be somewhat predicted by its character-word-token ratios. Surprisingly, LLMs perform better on lower-resourced languages, due to their knowledge of high-resourced languages acting as a bias against the benchmark's tasks. While Edman et al. (2024) hypothesized that character-level models would be promising for solving the CUTE benchmark, EXECUTE demonstrates an additional need for debiasing models so they can temporarily forget what they know about a language.

6 Limitations

We limit ourselves to 8 languages for the majority of this work. While we argue that languages in the same scripts as the ones tested will likely have similar results, pointing to the correlation in results between English, Spanish, German, and Xhosa in Appendix B,
we cannot know for sure without test- ing them all. Several other scripts are not covered which may have differing performances. We also do not test very large language models above 70B parameters due to compute constraints. The CUTE benchmark added scores for the 405B parameter Llama 3.1 and found it made improve- ments across the board, but was still lacking on character-level insertion and swapping. We would expect similar improvements for our English re- sults, but it is unclear how it would perform for other languages. 7 Acknowledgments The work was supported by the European Research Council (ERC) under the European Union’s Hori- zon Europe research and innovation programme (grant agreement No. 101113091) and by the Ger- man Research Foundation (DFG; grant FR 2829/7- 1). References John Dang, Shivalika Singh, Daniel D’souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, et al. 2024. Aya expanse: Combin- ing research breakthroughs for a new multilingual frontier. arXiv preprint arXiv:2412.04261 . Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 . Lukas Edman, Helmut Schmid, and Alexander Fraser. 2024. CUTE: Measuring LLMs’ understanding of their tokens. In Proceedings of the 2024 Confer- ence on Empirical Methods in Natural Language Processing , pages 3017–3026, Miami, Florida, USA. Association for Computational Linguistics. Avia Efrat, Or Honovich, and Omer Levy. 2023. LMen- try: A language model benchmark of elementary language tasks. In Findings of the Association for Computational Linguistics: ACL 2023 , pages 10476– 10501, Toronto, Canada. Association for Computa- tional Linguistics. Ronen Eldan and Yuanzhi Li. 2023. Tinystories: How small can language models be and still speak coherent english? Preprint , arXiv:2305.07759. Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati- raju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Wal- ton, Aliaksei Severyn, Alicia Parrish, Aliya Ah- mad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, An- thony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Wein- berger, Dimple Vijaykumar, Dominika Rogozi ´nska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Elty- shev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Pluci ´nska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svens- son, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fer- nandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mo- hamed, Kartikeya Badola, Kat Black, Katie Mil- lican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lau- ren Usui, Laurent Sifre, Lena
Heuermann, Leti- cia Lago, Lilly McNealus, Livio Baldini Soares,Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Mar- tin Görner, Mat Velloso, Mateo Wirth, Matt Davi- dow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kup- pala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R. Arnold, Se- bastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jor- dan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D. Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hass- abis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Ar- mand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. 2024. Gemma 2: Improving open language models at a practical size. Preprint , arXiv:2408.00118. Jing Huang, Zhengxuan Wu, Kyle Mahowald, and Christopher Potts. 2023. Inducing character-level structure in subword-based language models with type-level interchange intervention training. In Find- ings of the Association for Computational Linguistics: ACL 2023 , pages 12163–12180, Toronto, Canada. As- sociation for Computational Linguistics. Itay Itzhak and Omer Levy. 2022. Models in a spelling bee: Language models implicitly learn the character composition of tokens. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 5061–5068, Seattle, United States. Association for Computational Lin- guistics. Albert Q Jiang, Alexandre Sablayrolles, Arthur Men- sch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guil- laume Lample, Lucile Saulnier, et al. 2023. Mistral 7b.arXiv preprint arXiv:2310.06825 . Ayush Kaushal and Kyle Mahowald. 2022. What do tokens know about their characters and how do they know it? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies , pages 2487–2507, Seattle, United States. Association for Computational Linguistics. Sang Kwon, Gagan Bhatia, El Moatez Billah Nagoudi, and Muhammad Abdul-Mageed. 2023. Beyond En- glish: Evaluating LLMs for Arabic grammatical er- ror correction. In Proceedings of ArabicNLP 2023 , pages 101–119, Singapore (Hybrid). Association for Computational Linguistics. Yinghui Li, Haojing Huang, Shirong Ma, Yong Jiang, Yangning Li, Feng Zhou, Hai-Tao Zheng, and Qingyu Zhou. 2023. On the (in)effectiveness of large lan- guage models for chinese text correction. Preprint , arXiv:2307.09007. Akylbek Maxutov, Ayan Myrzakhmet, and Pavel Braslavski. 2024. Do LLMs speak Kazakh? a pi- lot evaluation of seven models. 
In Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024), pages
81–91, Bangkok, Thailand and Online. Association for Computational Linguistics.

Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, and Srinivasan Iyer. 2024. Byte latent transformer: Patches scale better than tokens. Preprint, arXiv:2412.09871.

Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Xiaofeng Wu, Karl Stratos, and Wei Xu. 2025. The impact of visual information in chinese characters: Evaluating large models' ability to recognize and utilize radicals. Preprint, arXiv:2410.09013.

Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR.

A Comparison to CUTE

In Table 5, we run Llama 3.1 8B on CUTE and compare the results to English and Russian EXECUTE. The results are very similar, with Insert Word appearing slightly easier in CUTE. This confirms that our changes did not dramatically alter any results.

            EXECUTE         CUTE
            Eng     Rus     Eng     Rus
Spell       98.7    72.1    99.8    64.5
Inv Spell   96.2    37.9    98.4    74.1
Cont Char   65.1    57.1    67.1    68.4
Cont Word   97.3    97.8    86.8    97.4
Ins Char     4.4     6.7     4.2     7.6
Ins Word    48.2    48.5    62.0    59.2
Del Char    56.1    33.1    56.6    43.7
Del Word    76.2    91.8    83.7    82.3
Sub Char    39.3    29.0    34.4    33.8
Sub Word    94.1    87.0    90.4    76.7
Swap Char    6.6     4.8     6.1     5.2
Swap Word   60.4    46.5    63.7    33.3
Average     61.9    51.0    62.8    53.9

Table 5: EXECUTE versus CUTE with Llama 3.1 8B.

B Language Similarity

We conduct similarity tests to see how similar the trends are across languages. For a given model, we conduct a Pearson correlation between two languages' per-task results, and we average the models' correlations together (a minimal sketch of this procedure follows Table 6). Table 6 shows the similarity of the languages as determined by the average correlation of the results from the 5 LLMs of size 7-9B. The languages are not particularly similar to one another, apart from Japanese and Chinese (which share some characters) and Arabic and Russian. Why Arabic and Russian are similar is not entirely clear, though it could be that their ratios of characters-per-word and characters-per-token are quite similar (as is also the case for Japanese and Chinese).

      Ara    Zho    Eng    Hin    Jpn    Kor    Rus
Amh   0.64   0.01   0.66   0.33   0.23   0.62   0.60
Ara         -0.11   0.76   0.44   0.13   0.85   0.92
Zho                 0.17   0.65   0.93   0.26  -0.06
Eng                        0.45   0.36   0.77   0.86
Hin                               0.75   0.71   0.38
Jpn                                      0.49   0.17
Kor                                             0.84

Table 6: Average correlations between the results for each language pair.
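To make this procedure concrete, here is a minimal Python sketch of one reading of the averaging described above, assuming hypothetical per-task score vectors for each (model, language) pair; the variable names and toy data are illustrative and not from the paper's released code.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical layout: scores[model][language] is a vector of task
# accuracies, ordered identically across languages (Spell, Inv Spell, ...).
scores = {
    "llama31-8b": {"eng": np.array([98.7, 96.2, 65.1, 97.3]),
                   "rus": np.array([72.1, 37.9, 57.1, 97.8])},
    # ... one entry per evaluated 7-9B model ...
}

def language_similarity(scores, lang_a, lang_b):
    """Average, over models, of the Pearson correlation between the
    two languages' per-task results for that model."""
    rs = [pearsonr(m[lang_a], m[lang_b])[0] for m in scores.values()]
    return float(np.mean(rs))

print(language_similarity(scores, "eng", "rus"))
```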
We also correlate the results from English to other Latin-scripted languages, German, Spanish, and Xhosa, in Table 7. Here we see the average correlation is at least 95% between English, German, and Spanish, and at least 85% to Xhosa. This suggests that the results for other Latin-scripted languages will likely not deviate too much, even if the languages are distantly related and differ in resourcedness.

      Deu    Spa    Xho
Eng   0.96   0.95   0.85
Deu          0.99   0.90
Spa                 0.90

Table 7: Average correlations between the results for Latin-scripted languages.

[Figure 3: Performance on Kanji to Hiragana conversion. Bar chart of accuracy (%) for Aya 8B/32B, Gemma 9B/27B, Llama 3.1 8B/70B, Llama 3.3 70B, Qwen 7B/32B, and Mistral 8B/24B on the Furigana task.]

C Japanese Furigana

Aside from Kanji, Japanese has two other writing forms: Hiragana and Katakana. Typical Japanese text uses all three forms, with several words being a combination of Kanji and Hiragana and, in rare cases, all three. While Kanji is logographic like Chinese, Hiragana and Katakana are syllabaries. Kanji and Hiragana are the most used, while Katakana is typically only used for foreign words or onomatopoeia. As such, we focus on Kanji and Hiragana. All Kanji characters can be written as Hiragana, and Kanji is sometimes annotated with its corresponding Hiragana as a method of learning the pronunciation of Kanji characters. This practice is known as Furigana. We use this Furigana method as a test in our benchmark, prompting the model to translate Kanji to Hiragana.7 With this, we are essentially testing whether the LLMs have a phonetic understanding of the Kanji.

In Figure 3, we see the models' results on the Furigana task. Similar to the Korean Hangul-to-Jamo conversion, the Kanji-to-Hiragana task shows that the LLMs generally understand the task, but have not perfected it. Unlike the other sub-character tasks, converting from Kanji to Hiragana cannot be done purely visually. It requires knowledge of how a Kanji sounds, and which Hiragana denote which sounds. From this, we can see a partial understanding.

7 We do not do the reverse, as multiple Kanji can have the same phoneme, e.g. 考 and 好 both denote ko.

[INST] Spell out the word, putting spaces between each letter, based on the following examples:
1. Spell out the word “かわいい”. Answer: “か わ い い”
2. Spell out the word “出し”. Answer: “出 し”
3. Spell out the word “応援”. Answer: “応 援”
4. Spell out the word “親友”. Answer: “親 友”
Question: Spell out the word “実行する”. [/INST] Answer: “実 行 す る”

Figure 4: Example of full prompt for Japanese spelling, with intended output in red. [INST] and [/INST] denote any tokens added to enable normal behavior from each LLM.

D Full Prompt Example

We show an example of a full prompt in Figure 4.

E Full Results

The complete results on EXECUTE for all the models tested are shown in Tables 8 and 9.

Aya Expanse Gemma 2 Llama 3.1 Llama 3.3 Qwen 2.5 Mistral 8B 32B 9B 27B 8B 70B 70B 7B
32B 8B 24B AmharicSpell 25.6 72.4 99.5 91.2 98.6 98.4 96.3 0.9 14.3 97.0 99.8 Inv Spell 77.6 71.5 99.1 91.4 98.8 99.8 99.8 8.2 52.4 99.1 100.0 Cont Char 58.5 85.8 73.0 81.9 90.4 91.8 91.8 63.2 93.5 94.7 95.8 Cont Word 55.9 69.5 97.0 98.3 78.5 99.5 99.6 71.7 99.9 95.9 98.8 Ins Char 35.2 58.2 57.1 65.5 26.1 92.3 97.8 10.3 57.9 60.6 92.2 Ins Word 70.7 66.7 95.6 91.7 28.7 93.3 92.8 58.4 92.6 70.5 87.9 Del Char 54.5 85.0 84.5 89.4 89.6 96.8 97.6 43.5 78.6 85.3 97.7 Del Word 85.8 91.2 63.1 79.2 93.2 99.1 97.6 70.0 91.1 91.9 89.4 Sub Char 63.6 82.9 70.1 85.2 87.5 94.9 96.6 29.6 67.3 76.6 99.1 Sub Word 91.9 98.1 98.1 93.2 91.6 96.7 96.2 79.2 90.9 90.3 96.5 Swap Char 20.2 53.8 53.0 70.7 52.8 91.3 93.7 12.8 65.8 48.8 86.4 Swap Word 60.7 84.5 75.7 86.0 73.0 96.5 97.3 55.0 88.1 66.0 94.9 ArabicSpell 48.7 74.1 36.0 69.9 50.8 84.7 81.0 20.7 52.8 19.3 40.0 Inv Spell 48.4 64.7 48.3 63.6 44.9 69.7 63.0 39.2 76.4 27.6 60.9 Cont Char 63.9 74.5 70.3 70.0 70.6 77.4 76.4 74.0 90.7 72.0 78.6 Cont Word 88.1 97.9 99.0 98.7 96.9 99.1 99.1 95.5 99.4 88.7 99.4 Ins Char 13.7 8.3 7.6 16.1 2.9 15.7 17.8 11.7 31.4 4.7 12.5 Ins Word 35.8 61.1 90.8 97.5 51.2 89.1 96.4 61.3 96.3 45.1 86.4 Del Char 36.0 56.4 36.3 45.5 45.0 55.8 59.4 40.7 53.1 20.8 29.9 Del Word 74.0 90.4 64.6 83.4 92.5 95.0 88.6 82.0 91.6 63.7 88.8 Sub Char 17.6 26.6 20.7 33.9 24.3 38.7 36.0 23.0 42.2 11.7 20.7 Sub Word 72.7 92.6 95.0 97.3 90.8 97.2 98.0 79.3 95.4 77.1 92.2 Swap Char 5.9 9.1 4.5 8.0 8.1 17.0 16.7 4.0 14.3 2.5 7.5 Swap Word 26.3 55.1 46.3 63.1 47.6 78.0 81.3 34.8 79.8 29.0 62.0 ChineseSpell 83.2 93.0 84.3 91.3 93.6 98.4 98.0 96.3 98.2 93.4 90.9 Inv Spell 98.5 97.3 95.1 96.2 98.6 98.4 98.7 98.9 99.8 98.8 99.6 Cont Char 84.0 97.2 96.4 95.4 92.5 97.3 91.1 98.7 98.9 96.0 98.7 Cont Word 84.1 99.0 99.4 98.8 91.7 95.1 86.7 94.3 98.1 94.2 94.8 Ins Char 70.0 57.2 78.6 81.4 67.3 90.6 92.6 73.6 95.4 53.2 89.3 Ins Word 28.4 33.2 41.9 53.0 20.5 46.5 47.5 43.7 53.4 34.1 50.5 Del Char 79.6 90.0 85.4 88.1 86.8 97.0 97.6 89.2 97.3 88.3 94.8 Del Word 38.4 57.3 46.8 56.9 59.4 71.1 70.1 54.0 65.1 53.2 67.4 Sub Char 60.6 68.4 69.9 75.1 84.0 94.2 94.8 80.0 94.6 75.0 91.3 Sub Word 40.2 54.9 55.3 64.7 47.5 56.3 53.4 43.1 66.8 46.1 66.5 Swap Char 63.9 71.9 73.3 69.8 90.6 92.0 92.1 62.6 92.0 75.4 90.5 Swap Word 14.2 26.6 15.6 21.9 22.8 36.1 33.3 10.7 43.9 15.1 35.0 EnglishSpell 96.7 98.5 99.3 99.5 98.7 99.5 99.5 94.7 98.6 97.3 99.0 Inv Spell 95.4 98.5 99.3 99.6 96.2 99.8 99.6 98.3 99.2 91.0 98.7 Cont Char 62.6 73.1 68.0 69.5 65.1 80.3 75.7 81.9 94.8 66.7 83.5 Cont Word 94.3 99.4 99.7 99.7
97.3 100.0 99.9 98.1 100.0 93.3 99.9 Ins Char 11.8 6.0 9.2 7.8 4.4 10.9 13.5 7.1 15.9 7.4 4.4 Ins Word 39.9 60.6 86.7 96.8 48.2 94.9 96.6 70.2 97.6 51.9 72.9 Del Char 35.0 56.3 58.5 80.4 56.1 68.3 67.5 56.8 70.5 33.6 72.4 Del Word 60.3 77.5 53.7 77.7 76.2 97.1 96.5 78.5 95.7 74.3 69.8 Sub Char 27.7 42.2 35.4 60.5 39.3 53.5 51.4 29.0 52.1 27.7 51.7 Sub Word 82.4 96.1 94.9 97.9 94.1 98.1 98.5 92.9 99.0 92.9 97.0 Swap Char 6.0 9.8 6.9 8.9 6.6 14.9 12.7 3.2 11.5 6.8 11.0 Swap Word 22.6 64.5 66.2 60.8 60.4 91.0 90.9 34.7 92.4 42.2 85.9 Table 8: Results for Amharic, Arabic, Chinese, and English. Aya Expanse Gemma 2 Llama 3.1 Llama 3.3 Qwen 2.5 Mistral 8B 32B 9B 27B 8B 70B 70B 7B 32B 8B 24B HindiSpell 48.4 69.4 12.6 16.0 46.2 72.5 71.3 20.5 57.7 23.4 58.7 Inv Spell 76.8 83.2 71.9 83.4 76.0 92.7 92.9 71.2 94.6 61.9 87.2 Cont Char 68.4 85.6 75.1 75.7 72.6 87.3 83.2 90.9 98.3 82.4 88.9 Cont Word 89.4 98.7 95.7 98.5 93.3 98.9 93.0 94.7 99.3 81.1 94.8 Ins Char 41.0 10.8 25.8 29.4 15.8 29.0 32.9 45.9 89.0 27.5 36.5 Ins Word 6.3 11.7 77.3 41.8 13.6 25.5 47.6 31.8 95.5 9.1 16.4 Del Char 58.8 66.5 50.9 67.6 66.5 76.9 76.9 50.9 80.7 44.3 79.5 Del Word 6.6 26.6 26.6 24.4 40.6 25.7 31.7 29.4 90.8 13.9 19.7 Sub Char 45.7 66.4 38.2 56.0 42.1 61.4 63.6 50.8 90.2 33.1 65.7 Sub Word 3.7 14.9 62.3 15.9 19.1 23.8 25.4 15.3 93.8 4.9 17.0 Swap Char 14.5 24.5 8.4 29.4 19.5 15.1 22.2 9.9 62.8 21.6 33.6 Swap Word 2.1 16.9 30.6 27.5 20.1 39.4 35.7 10.3 82.2 3.0 11.5 JapaneseSpell 52.9 83.2 68.5 71.0 73.2 93.3 92.2 72.3 86.5 77.2 84.5 Inv Spell 92.4 88.4 87.8 90.5 78.1 96.9 95.1 90.4 95.9 88.5 96.7 Cont Char 67.9 91.6 90.9 88.8 87.0 95.2 93.4 93.7 97.8 86.7 93.0 Cont Word 82.4 92.7 98.8 97.3 90.0 89.4 85.5 89.9 94.8 84.6 93.9 Ins Char 48.3 31.5 58.3 68.8 18.6 72.1 77.6 69.7 86.8 29.3 78.5 Ins Word 21.4 34.2 41.7 61.0 10.6 42.1 47.2 44.0 65.9 30.4 46.2 Del Char 62.0 78.3 60.5 64.4 79.8 88.2 90.0 70.0 84.3 67.1 86.5 Del Word 31.9 60.4 49.3 52.5 62.4 68.5 74.2 52.3 61.4 38.0 63.5 Sub Char 57.2 68.8 58.5 66.3 75.8 83.2 83.7 64.4 85.7 59.3 81.7 Sub Word 32.4 55.1 48.9 58.8 50.9 62.8 65.1 50.9 69.3 38.5 60.6 Swap Char 40.8 52.0 50.2 49.2 66.6 63.9 66.7 40.4 76.6 45.7 70.1 Swap Word 5.9 13.2 8.1 13.3 12.8 21.9 25.0 7.5 29.4 9.3 20.9 KoreanSpell 43.6 71.5 67.5 85.4 51.5 78.0 73.4 42.5 66.6 42.3 56.1 Inv Spell 85.0 93.8 82.6 86.8 71.7 88.9 88.8 84.2 94.8 64.6 92.2 Cont Char 71.1 84.5 81.1 86.2 81.6 85.3 74.4 90.7 95.5 76.4 89.0 Cont Word 92.6 97.2 98.8 99.0 95.6 99.4 99.1 96.7 99.6 84.9 98.7 Ins Char 33.6 20.6 47.4 54.6 21.0 52.5 60.8
39.6 69.7 20.2 41.7 Ins Word 36.4 72.0 93.3 98.6 50.8 91.8 94.1 66.8 92.5 45.0 87.7 Del Char 44.7 64.6 69.1 75.6 62.5 62.2 63.2 44.6 64.1 43.2 56.6 Del Word 56.5 81.8 77.5 88.6 91.9 94.3 86.3 76.7 91.0 75.4 90.7 Sub Char 39.1 62.0 71.4 77.4 56.0 65.5 63.9 50.4 67.6 37.9 53.0 Sub Word 48.9 78.7 90.1 96.7 73.0 91.2 92.1 68.8 90.5 65.1 89.6 Swap Char 27.3 33.3 48.9 47.5 37.3 42.0 44.0 31.4 58.1 20.0 33.0 Swap Word 22.6 48.3 55.8 73.6 52.5 71.1 72.8 29.9 79.3 27.1 61.1 RussianSpell 40.9 80.4 54.5 88.3 72.1 97.6 96.7 54.0 88.9 79.8 94.8 Inv Spell 54.1 78.7 71.4 90.3 37.9 86.6 86.0 81.9 94.2 39.5 88.0 Cont Char 55.0 68.8 59.7 58.7 57.1 60.2 59.1 75.3 84.9 63.3 77.6 Cont Word 96.9 98.1 99.6 99.8 97.8 99.8 99.9 95.8 99.5 94.7 99.5 Ins Char 10.9 3.8 7.0 9.4 6.7 11.0 12.0 7.2 21.1 5.7 11.3 Ins Word 29.3 61.9 90.6 95.4 48.5 90.7 93.1 77.8 95.1 62.9 84.2 Del Char 12.7 34.7 20.9 38.1 33.1 49.9 50.9 24.8 44.7 18.9 45.1 Del Word 68.6 85.0 75.0 81.0 91.8 97.3 96.7 83.1 90.7 80.1 85.7 Sub Char 15.5 26.0 17.0 35.3 29.0 38.7 40.2 22.6 44.3 16.2 39.7 Sub Word 63.8 86.2 94.5 95.1 87.0 95.6 95.8 80.8 95.2 86.5 95.6 Swap Char 1.3 4.8 2.6 4.0 4.8 12.5 13.6 1.4 16.0 4.4 12.0 Swap Word 26.1 48.1 50.7 55.8 46.5 74.3 70.0 20.7 79.9 32.6 79.5 Table 9: Results for Hindi, Korean, Japanese, and Russian.
Compression Hacking: A Supplementary Perspective on Informatics Metric of Language Models from Geometric Distortion

Jianxiang Zang1, Meiling Ning2, Yongda Wei3, Shihan Dou1, Jiazheng Zhang1, Nijia Mo4, Binhong Li5, Tao Gui1*, Qi Zhang1, Xuanjing Huang1*
1Fudan University, 2Beijing University of Posts and Telecommunications, 3George Mason University, 4Shanghai University of International Business and Economics, 5Hong Kong University of Science and Technology (Guangzhou)
zjxhgg@gmail.com, tgui@fudan.edu.cn
*Corresponding author.

Abstract

Recently, the concept of "compression as intelligence" has provided a novel informatics metric perspective for language models (LMs), emphasizing that highly structured representations signify the intelligence level of LMs. However, from a geometric standpoint, the word representation space of highly compressed LMs tends to degenerate into a highly anisotropic state, which hinders the LM's ability to comprehend instructions and directly impacts its performance. We find that this compression-anisotropy synchronicity is essentially "Compression Hacking" in LM representations, where noise-dominated directions tend to create the illusion of high compression rates by sacrificing spatial uniformity. Based on this, we propose three refined compression metrics that incorporate geometric distortion analysis and integrate them into a self-evaluation pipeline. The refined metrics exhibit strong alignment with the LM's comprehensive capabilities, achieving Spearman correlation coefficients above 0.9 and significantly outperforming both the original compression metric and other internal-structure-based metrics. This confirms that accounting for compression hacking substantially enhances the informatics interpretation of LMs by incorporating the geometric distortion of representations.

1 Introduction

Recently, significant efforts have been devoted to exploring the mechanisms by which language models (LMs) process information internally, driving the development of LM self-evaluation (Wei et al., 2024; Wang et al., 2024a,b) independent of specific tasks and model outputs. The concept of "compression as intelligence" (Sutskever, 2023; Deletang et al., 2023; Chen et al., 2025) has provided a novel informatics interpretation of LMs, emphasizing that LMs eliminate redundant information through training while their representation spaces typically evolve from disordered to structured states. This property leads to a compression-based evaluation metric for LMs that uses the differential entropy of representations, aiming to reflect model capabilities through their internal structural organization (Pichler et al., 2022; Zhouyin and Liu, 2023; Li et al., 2025). Existing studies have demonstrated strong alignment between this metric and LM scale (Wei et al., 2024; Li et al., 2025), which we also validate empirically. However, as evidenced by the intuitive case in which the 175B GPT-3 (Brown et al., 2020) exhibits inferior overall capabilities compared to the 32B Qwen2.5-Instruct (Hui et al., 2024), compression from a purely informatics standpoint cannot fully align with LM capabilities, especially when comparing models from different families. Our research motivation is therefore: beyond information compression, what other properties should a metric quantify to effectively interpret an LM's intelligence level, and how should we model the relationships between these properties?
Relevant studies have shown that differences in model architecture and training paradigms inevitably lead to variations in the geometric structure of representations (Mimno and Thompson, 2017; Gao et al., 2019a; Skean et al., 2025). From a geometric standpoint, we were surprised to observe that LMs with high information compression tend to exhibit representation spaces that degenerate into highly anisotropic, distorted states. Highly anisotropic representations indicate varying sensitivity to semantic changes across different dimensions, which can hinder language models' ability to comprehend instructions and consequently degrade their performance (Demeter et al., 2020; Yu et al., 2022; Rudman and Eickhoff, 2024).

In this study, we quantitatively analyze this compression-anisotropy synchronicity and validate its statistical significance. Through mechanistic analysis, we find that this phenomenon reflects "Compression Hacking" in LM representations, where noise-dominated directions tend to create the illusion of high compression rates by sacrificing spatial uniformity. Based on this characteristic, we propose integrating a geometric perspective to refine the information compression metric. Specifically, we introduce the following strategies: (1) a spectral entropy compression metric that models the properties of the eigenvalue distribution; (2) a semantic coefficient of variation that measures anisotropy relative to compression; and (3) a manifold correction protocol that uses Principal Component Smoothing (PCS) as an "anisotropy razor" to decouple the influence of anisotropy on compression. These refined metrics are integrated into a self-evaluation pipeline that relies entirely on the LM's internal structure.

Using this framework, we evaluate 18 open-source LMs and conduct meta-evaluations on factuality, reasoning, math, and knowledge tasks to obtain ground-truth capability scores. Extensive experiments demonstrate that the refined metrics exhibit strong alignment with the LMs' comprehensive capabilities, achieving Spearman correlation coefficients above 0.9, which significantly outperforms both the original compression metric and other internal-structure-based metrics. This validates that accounting for compression hacking substantially enhances the informatics interpretation of LMs by incorporating geometric distortion analysis of representations. The main contributions are summarized as follows:

• We introduce a significant characteristic of LM representations, termed "compression hacking", which complements the concept of "compression as intelligence" from the perspective of geometric distortion.
• Based on compression hacking, we propose three refinements of compression metrics incorporating geometric insights: spectral entropy quantification, a semantic coefficient of variation, and a manifold correction protocol.
• The refined metrics exhibit significantly stronger alignment with LMs' comprehensive capabilities than the original compression metric, thereby establishing a task-agnostic self-evaluation perspective for LMs.

Anonymous codes available here.

2 Compression Hacking

In this section, we analyze the compression-anisotropy synchronicity in LM representations, whereby highly compressed LMs tend to exhibit word representations with strong anisotropy. Our investigation proceeds in two stages. First, we quantify both compression and anisotropy by examining the internal structure of LM representations (their covariance matrices); we then fit regression curves to model the relationship between anisotropy and compression and verify its statistical significance. Second, through mechanistic analysis, we identify the underlying cause of this phenomenon, which we term "compression hacking".
The covariance matrix of LM representations reflects their internal structure. For the hidden states Z = {z(w) | w ∈ V}, where w represents a word and V represents the sample vocabulary space, the covariance matrix is constructed as in Eq. 1. Here, z(w) ∈ R^D is the token embedding, which has been normalized, and Z is a zero-mean matrix.

\Sigma_Z = \frac{1}{|V|} Z^\top Z + \alpha I_D \quad (1)

Here, Σ_Z ∈ R^{D×D} denotes the covariance matrix, and a regularization term αI_D is added to ensure it is full rank. The matrix Σ_Z is positive definite and can be decomposed via eigenvalue decomposition as Σ_Z = QΛQ^⊤. The eigenvalues from Λ are {λ_d}_{d=1}^{D}, arranged in descending order by default, and {q_d}_{d=1}^{D} are the corresponding eigenvectors.

2.1 Preliminary: Differential Entropy based Compression Metric

The compression perspective provides an information-theoretic foundation for LM evaluation, revealing the intrinsic connections between model scale, generalization capability, and data volume, and thus offering theoretical guidance for optimizing model design (Pichler et al., 2022; Sutskever, 2023; Deletang et al., 2023; Wei et al., 2024; Chen et al., 2025). Related studies have shown that the differential entropy H_DE(Z) = -E_{w∼V}[z(w) log z(w)] of LM representations z(w) can reflect their compression capacity (Chen et al., 2023a; Zhouyin and Liu, 2023; Li et al., 2025). Lower differential entropy suggests that the representations formed by nonlinear transformation, which removes redundant information, are closer to optimal coding. These representations exhibit more concentrated distributions and lower uncertainty, reflecting more efficient information compression (Delétang et al., 2023). Semantic Volume leverages this property to model representation uncertainty (Li et al., 2025). We thus define the compression metric as the negative differential entropy of representations, i.e., C_DE(Z) := -H_DE(Z). Since the differential entropy is equivalent to the logdet estimator (Chen et al., 2023a) of the covariance matrix, the compression metric follows the definition in Eq. 2 (a minimal numerical sketch appears at the end of this subsection).

C_{DE}(Z) \stackrel{\text{def}}{=} -\frac{1}{2}\log\det(\Sigma_Z) = -\frac{1}{2}\sum_{d=1}^{D}\log\lambda_d \quad (2)

[Figure 1: Comparison of compression metrics across different models and their corresponding ground-truth comprehensive capabilities, categorized into intra-family and cross-family comparisons.]

We first conducted a preliminary exploration of whether differential-entropy-based compression metrics effectively reflect LM capabilities. Our evaluation included both intra-family (OPT family) and cross-family tests (Qwen2.5-3b-Instruct, Qwen2.5-7b, LLaMA3.1-8b, and OPT-13b), with ground-truth settings following Section 4.1. As shown in Figure 1, we found that the compression metric showed a positive correlation only with model scale, consistent with related studies (Wei et al., 2024; Li et al., 2025). Figure 1 (left) indicates that differential-entropy-based compression is effective only for intra-family evaluation, while Figure 1 (right) reveals its limited applicability across diverse architectures and training paradigms. These findings prompted our integration of geometric properties into compression analysis.
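The following is a minimal NumPy sketch of Eqs. 1-2, assuming a matrix of already-centered, normalized embeddings; the function name and the toy data are illustrative, not from the paper's code.

```python
import numpy as np

def compression_de(Z: np.ndarray, alpha: float = 1e-6) -> float:
    """Differential-entropy compression metric C_DE (Eq. 2).

    Z: |V| x D matrix of zero-mean, normalized token embeddings.
    alpha: ridge term ensuring the covariance matrix is full rank (Eq. 1).
    """
    V, D = Z.shape
    sigma = Z.T @ Z / V + alpha * np.eye(D)   # Eq. 1
    eigvals = np.linalg.eigvalsh(sigma)       # real, ascending
    return -0.5 * float(np.sum(np.log(eigvals)))  # Eq. 2: -1/2 logdet

# Toy usage: 1000 "tokens" in a 64-dimensional space.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 64))
Z -= Z.mean(axis=0)                           # zero-mean, as assumed above
print(compression_de(Z))
```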
2.2 Anisotropy: The Geometric Property Correlated with Compression

The anisotropy of language models is a geometric property of representations that reflects the non-uniform distribution of semantics across different directions in the representation space (Ethayarajh, 2019; Cai et al., 2019; Demeter et al., 2020). Highly anisotropic representations hinder LMs' ability to comprehend instructions (Yu et al., 2022; Rudman and Eickhoff, 2024), directly impairing their overall capabilities. We performed principal component analysis to visualize the word representation spaces of the aforementioned four models. As shown in Figure 2, we made the intriguing observation that models with higher compression levels consistently exhibited greater unevenness in their dimensional distributions, namely, higher anisotropy.
This suggests a potential synergistic relationship between compression and anisotropy. If we can quantify this relationship and confirm its statistical significance, it could provide valuable guidance for refining compression metrics.

Current tools for qualitatively and quantitatively analyzing the anisotropy of language models mainly rely on similarity computations over representations (Ethayarajh, 2019; Cai et al., 2019; Rudman et al., 2022). However, what we need is an anisotropy metric that can be connected to entropy-based information compression. Relevant studies (Arora et al., 2016; Mu and Viswanath, 2018) have shown that the anisotropy measure A is mathematically defined as in Eq. 3. We aim to extend this measure so that it relates to the internal structure of representations (the eigenvalues of the covariance matrix).

\mathcal{A} = \frac{\max_{\|c\|=1} Z(c)}{\min_{\|c\|=1} Z(c)} \quad (3)

where Z(c) = \sum_{w \in V} \exp(c^\top z(w)) is the partition function, which should be approximately constant for any unit vector c. A is a number greater than 1, where larger values indicate stronger anisotropy in the representation space; ideally, this value should be as close to 1 as possible. Since arg max_{\|c\|=1} Z(c) and arg min_{\|c\|=1} Z(c) have no closed-form solutions, we approximate Z(c) via a Taylor expansion, as in Eq. 4.

Z(c) = |V| + \mathbf{1}_{|V|}^\top Z c + \frac{1}{2} c^\top Z^\top Z c + \sum_{m=3}^{\infty} \frac{1}{m!} \sum_{w \in V} \left(c^\top z(w)\right)^m \quad (4)

Since Z is zero-mean data, the mean of z(w) is 0; therefore the linear term simplifies to 0, that is, \mathbf{1}_{|V|}^\top Z c = \sum_{w \in V} z(w)^\top c = 0^\top c = 0, which does not affect the relative changes of Z(c) across directions. The quadratic term involves the spectral properties of the matrix: its eigenvalues describe the directional variability of Z^\top Z and play a dominant role in the changes of Z(c) across directions. Expanding c in the eigenvector basis, we have c = Qu, where \|u\| = \|c\| = 1 and {u_d}_{d=1}^{D} are the components of u. Based on the eigenvalue decomposition, we obtain Eq. 5.

[Figure 2: Visualization of the distribution of word representations and of the eigenvalues across Qwen2.5-3b-Instruct, Qwen2.5-7b, LLaMA3.1-8b, and OPT-13b, with -logdet and condition-number values per model.]

c^\top Z^\top Z c = (Qu)^\top Z^\top Z (Qu) = u^\top \Lambda u \quad (5)

Accordingly, we can further obtain the second-order estimate of A, as in Eq. 6.

\mathcal{A} \approx \frac{|V| + \max_{\|c\|=1} \frac{1}{2} c^\top Z^\top Z c}{|V| + \min_{\|c\|=1} \frac{1}{2} c^\top Z^\top Z c} = \frac{|V| + \max_{\|u\|=1} \frac{1}{2} \sum_d \lambda_d u_d^2}{|V| + \min_{\|u\|=1} \frac{1}{2} \sum_d \lambda_d u_d^2} \quad (6)

When the components of the vector u are entirely concentrated in the direction corresponding to the maximum (minimum) eigenvalue, u^\top \Lambda u = \max_d \lambda_d (\min_d \lambda_d). We therefore measure the anisotropy of the representation by the condition number of the matrix, as in Eq. 7. The condition number reflects the sensitivity of the covariance matrix and reveals ill-conditioning from an intrinsic structural perspective, making it the first anisotropy metric based entirely on internal structure.

\mathcal{A}(Z) \stackrel{\text{def}}{=} \mathrm{cond}(\Sigma_Z) = \frac{\max_{d=1}^{D} \lambda_d}{\min_{d=1}^{D} \lambda_d} \quad (7)

2.3 Systematic Analysis

Mechanistic Analysis. As shown in Figure 2, by performing eigenvalue decomposition on the covariance matrix of the representations, we discovered a distinctive partitioning phenomenon in the eigenvalues of the LM covariance matrix. The leading principal components exhibit an exponential decay in eigenvalues, effectively condensing the model's core semantic information, while the numerous subsequent minor components exhibit clustered, nearly constant low eigenvalues, forming spatially anisotropic perturbation sources. Interestingly, when information compression is measured on a negative logarithmic scale, the minor components show dramatically inflated compression metrics due to their infinitesimal eigenvalues, creating an inverted relationship with the principal-component region. This seemingly paradoxical phenomenon reveals the compression hacking in model representations: noise-dominated directions tend to create the illusion of high compression rates by sacrificing spatial uniformity, while in reality this "compression" represents either information loss or noise amplification, with truly effective information compression accomplished exclusively by the principal components.

Significance Analysis. Next, we analyze the significance of compression hacking, which manifests as compression-anisotropy synchronicity. Based on the above metrics, we computed estimates of both compression and anisotropy for the instruction representations of the four LMs in our preliminary experiments; both quantities can be expressed exclusively through the eigenvalues of the representation covariance matrix. Given their characteristic patterns, we modeled a linear regression of compression against the logarithm of anisotropy, as shown in Figure 3. The regression analysis reveals two key findings through the R² and p-values: (1) compression, as the dependent variable, is well and significantly explained by anisotropy; and (2) Mann-Whitney U tests (McKnight and Najab, 2010) confirm statistically significant differences between the regression curves of different models.

[Figure 3: Regression fitting curves of compression versus anisotropy for different models, along with Mann-Whitney U tests between them; **** denotes statistical significance at the 0.01% level.]

3 Methodology

3.1 Refined Metrics

We have demonstrated that the compression-anisotropy synchronicity caused by compression hacking in LMs is a statistically significant characteristic. This implies that we can develop more comprehensive metrics by jointly considering the compression and anisotropy of representations, as well as modeling their correlation (a minimal sketch of the anisotropy measure in Eq. 7 follows).
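Before the formal refinements, here is a minimal sketch of the condition-number anisotropy measure of Eq. 7, reusing the covariance construction from the earlier snippet; as before, names and toy data are illustrative.

```python
import numpy as np

def anisotropy(Z: np.ndarray, alpha: float = 1e-6) -> float:
    """Condition-number anisotropy A(Z) = cond(Sigma_Z) (Eq. 7)."""
    V, D = Z.shape
    sigma = Z.T @ Z / V + alpha * np.eye(D)   # Eq. 1
    eigvals = np.linalg.eigvalsh(sigma)       # ascending
    return float(eigvals[-1] / eigvals[0])    # max / min eigenvalue

# Toy data with an uneven spectrum, mimicking an anisotropic space.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 64)) * np.linspace(1.0, 0.05, 64)
Z -= Z.mean(axis=0)
print(anisotropy(Z))   # large values indicate an ill-conditioned, anisotropic space
```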
In this section, we formalize our approach through three strategies (a minimal sketch of all three metrics follows at the end of this subsection).

Spectral Entropy Quantification. Figure 2 illustrates that, from the perspective of the eigenvalue distribution, the mechanism of compression hacking is that the secondary components causing anisotropy (λ_d) are homologous to the principal components of the compression part (−log λ_d). Interestingly, spectral
entropy (Roy and Vetterli, 2007) precisely models this characteristic, and it is formally equivalent to a compression metric weighted by eigenvalues (Compression (SE)), as formulated in Eq. 8.

C_{SE}(Z) \stackrel{\text{def}}{=} -\mathrm{tr}(\Sigma_Z \log \Sigma_Z) = -\sum_{d=1}^{D} \lambda_d \log \lambda_d \quad (8)

Semantic Coefficient of Variation. Compression-anisotropy synchronicity is a distinct manifestation of compression hacking: compression is characterized by the mean of the eigenvalue logarithms (reflecting the overall volume of the embedding space (Li et al., 2025)), while anisotropy corresponds to the ratio of extreme eigenvalues (quantifying the variation of semantic embeddings across dimensions). We therefore formulate their ratio as the Semantic Coefficient of Variation (Semantic CV) in Eq. 9. This metric characterizes the magnitude of anisotropy relative to the information compression of the representation space Z.

CV_{Sem.}(Z) \stackrel{\text{def}}{=} \frac{\mathcal{A}(Z)}{C_{DE}(Z)} \quad (9)

Manifold Correction Protocol. Numerous studies have proposed "anisotropy razors" that reduce the anisotropy of the representation space in a training-free manner, thereby enhancing representational capacity (Mu and Viswanath, 2018; Su et al., 2021). This inspires us to decouple anisotropy from compression by selecting an appropriate anisotropy razor. Given the sharp exponential decline in the eigenvalues of the leading principal components under compression hacking, we propose Principal Component Smoothing (PCS) as an anisotropy razor, inspired by LW shrinkage (Ledoit and Wolf, 2004). By setting a smoothing coefficient β ∈ [0, 1] (default value 0.9), we shift the representation space toward the principal directions, yielding a flatter transformed feature spectrum. The transformation operates on the covariance matrix of the representation and is defined by the mapping T_PCS in Eq. 10, giving the refined compression metric Compression (PCS). In Theorem B.2, we prove that under sparse-spectrum conditions, the PCS estimator exhibits higher statistical stability than LW shrinkage.

T_{PCS}(\Sigma_Z) \stackrel{\text{def}}{=} (1-\beta)\,\Sigma_Z + \beta \max_{d=1}^{D} \lambda_d \, I_D \quad (10)
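Here is a minimal sketch of the three refined metrics (Eqs. 8-10), built on the eigenvalues of the regularized covariance matrix as above; the function names are illustrative and the default β follows the paper's stated 0.9. Since T_PCS only rescales the spectrum, it can be applied directly to the eigenvalues, as the last function does.

```python
import numpy as np

def spectrum(Z: np.ndarray, alpha: float = 1e-6) -> np.ndarray:
    """Eigenvalues of Sigma_Z = Z^T Z / |V| + alpha * I (Eq. 1), ascending."""
    V, D = Z.shape
    return np.linalg.eigvalsh(Z.T @ Z / V + alpha * np.eye(D))

def compression_se(Z):
    lam = spectrum(Z)
    return float(-np.sum(lam * np.log(lam)))       # Eq. 8: spectral entropy

def semantic_cv(Z):
    lam = spectrum(Z)
    aniso = lam[-1] / lam[0]                       # Eq. 7: condition number
    c_de = -0.5 * np.sum(np.log(lam))              # Eq. 2 (assumed positive,
    return float(aniso / c_de)                     # as for normalized embeddings)

def compression_pcs(Z, beta: float = 0.9):
    lam = spectrum(Z)
    lam_pcs = (1 - beta) * lam + beta * lam.max()  # Eq. 10 applied to the spectrum
    return float(-0.5 * np.sum(np.log(lam_pcs)))   # Compression (PCS)
```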
3.2 Evaluation Pipeline

In this section, we integrate the three refined metrics into a unified evaluation framework: a task-agnostic pipeline operating purely from a representational perspective. Our evaluation paradigm associates a sampled data batch B with a decision score s = F(B, f_LM). The decision function F(·) operates through two sequential steps: (1) the projection step extracts hidden representations Z(p) = F_Projection(p, f_LM) for each data sample p ∈ B; (2) the decision step computes the batch-level score s = E_{p∼B}[Metric(Z_p)] based on the refined metrics. Notably, our dataset requirement specifies that the sample's word representation space should, given sufficient sampling, effectively estimate the model's complete word representation space, ensuring convergence of our proposed metrics. We discuss the impact of sampling size on metric convergence in Section D.

4 Experiments

In this section, we employ meta-evaluation to investigate whether the refined metrics achieve strong alignment with the comprehensive capabilities of LMs. This serves to validate whether incorporating the geometric distortion perspective of representations through compression hacking can enhance the informatics interpretation of LMs.

4.1 Setup

Models. Since our evaluation focuses on the internal structure of model representations, we evaluated 18 open-source language models from three model families of varying sizes: the LLaMA3 family (Grattafiori et al., 2024) (LLaMA3.2-1B, LLaMA3.2-1B-Instruct, LLaMA3.2-3B, LLaMA3.2-3B-Instruct, LLaMA3.1-8B, LLaMA3.1-8B-Instruct), the Qwen2.5-Instruct family (Hui et al., 2024) (0.5B, 1.5B, 3B, 7B, 14B, 32B), and the OPT family (Zhang et al., 2022a) (0.125B, 1.3B, 2.7B, 6.7B, 13B, 30B).

Meta Evaluation. To evaluate the alignment between our metrics and LM capabilities, we perform meta-evaluation by calculating the Spearman correlation coefficient between human-annotated ground-truth benchmarks and our proposed refined informatics metrics (a minimal sketch follows this subsection). We selected six benchmark datasets spanning four major domains as ground truth, corresponding to four key dimensions of large language model capability. Factuality: TruthfulQA (Lin et al., 2022), FACTOR (Muhlgay et al., 2024); Math: MATH (Hendrycks et al., 2021); Reasoning: CommonsenseQA (Talmor et al., 2019), TheoremQA (Chen et al., 2023b); Knowledge: MMLU (Hendrycks et al., 2020). We use the mean of all benchmark scores as the ground truth for the model's comprehensive evaluation (CE).

Baseline Metrics. We selected purely representation-based baseline metrics that operate independently of ground-truth labels and model sampling, encompassing both informatics and geometric perspectives. The informatics metrics include Compression (DE) and Semantic Volume (Li et al., 2025), while the geometric metrics consist of Curvature (Hosseini and Fedorenko, 2023), which quantifies manifold curvature characteristics, and Anisotropy. Diff-eRank (Wei et al., 2024) simultaneously models both information compression and geometric structure in language model representations, yet neglects their direct synergistic relationship.
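As an illustration of the meta-evaluation above, here is a minimal sketch that correlates a metric's per-model scores with ground-truth CE scores; the model identifiers and numbers are placeholders, not the paper's results.

```python
from scipy.stats import spearmanr

# Placeholder per-model scores: one metric value and one ground-truth
# comprehensive-evaluation (CE) score per evaluated LM.
metric_scores = {"lm-a": 1.39, "lm-b": 2.45, "lm-c": 2.81, "lm-d": 3.53}
ce_scores     = {"lm-a": 60.2, "lm-b": 57.2, "lm-c": 50.2, "lm-d": 42.1}

models = sorted(metric_scores)
rho, pval = spearmanr([metric_scores[m] for m in models],
                      [ce_scores[m] for m in models])
print(f"Spearman rho = {rho:.3f} (p = {pval:.3f})")
```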
Metric                 Property       Factuality            Reasoning             Math     Knowledge   CE
                       Info.  Geom.   TruthfulQA  FACTOR    Common.QA  Theo.QA    MATH     MMLU
Semantic Volume        ✓               0.429      0.414     0.441      0.483      0.420    0.409       0.442
Curvature                     ✓        0.355      0.372     0.342      0.365      0.303    0.309       0.302
Diff-eRank             ✓      ✓        0.476      0.461     0.494      0.521      0.424    0.452       0.492
Compression (DE)       ✓               0.458      0.488     0.481      0.471      0.490    0.471       0.482
Anisotropy                    ✓        0.715      0.702     0.702      0.792      0.673    0.709       0.701
Compression (SE)       ✓      ✓        0.895      0.861     0.892      0.921      0.824    0.852       0.912
Semantic CV            ✓      ✓        0.946      0.905     0.916      0.926      0.857    0.917       0.926
Compression (DE)
  w/ Remove Directions ✓      ✓        0.053      0.102     0.042      0.142      0.211    0.093       0.110
  w/ Whitening         ✓      ✓        0.487      0.498     0.502      0.482      0.423    0.456       0.472
  w/ LW Shrinkage      ✓      ✓        0.458      0.488     0.481      0.471      0.490    0.471       0.482
  w/ PCS               ✓      ✓        0.962      0.955     0.923      0.967      0.846    0.923       0.965

Table 2: The Spearman correlation coefficients between the metrics based on representation properties and the ground-truth benchmarks, where gray-highlighted components represent the refined metrics we propose.

Baseline Anisotropy Razors. In addition to PCS as the anisotropy razor for decoupling anisotropy from compression, we selected three anisotropy razors as baselines. Remove Directions (Mu and Viswanath, 2018) is a post-processing method for eliminating noisy directions. Whitening (Su et al., 2021) eliminates correlations between features through global scaling, normalizing the eigenvalues to have the same mean and variance. LW Shrinkage (Ledoit and Wolf, 2004), on the other hand, adjusts extreme eigenvalues linearly towards the mean via Bayesian shrinkage.

4.2 Main Results

[Figure 4: Scatter plots of ground-truth values across different models for the four metrics, along with fitted regression equations and Spearman correlation coefficients.]

Metric              Global               Qwen2.5-Instruct     OPT                  LLaMA3
                    Size   Ground truth  Size   Ground truth  Size   Ground truth  Size   Ground truth
Compression (DE)    0.935  0.445         1.000  0.829         1.000  1.000         0.956  0.886
Compression (SE)    0.430  0.917         0.486  0.714         0.829  0.829         0.598  0.657
Semantic CV         0.805  0.926         0.829  1.000         1.000  1.000         0.956  0.943
Compression (PCS)   0.708  0.965         0.829  1.000         1.000  1.000         0.956  0.943

Table 1: The Spearman correlation coefficients within model groups (Qwen2.5-Instruct, OPT, and LLaMA3 families) and across all models (Global), including the correlations between the four metrics and both model size (Size) and comprehensive capabilities (Ground truth). The bold-highlighted components represent our refined metrics.

Figure 4 and Table 1 present the regression equations and Spearman correlations among the original compression metric (Compression (DE)), our three proposed refined metrics, and comprehensive capabilities as ground truth. The original Compression (DE) exhibits a strong correlation of 0.935 with model size across all models, reaching 1.000, 1.000, and 0.956 within model families, confirming the high consistency between original compression capability and model scale in language models. However, this metric achieves only a 0.445 correlation with comprehensive capabilities in cross-architecture global analysis, maintaining higher correlations (0.829, 1.000, 0.886) only within model families, suggesting that model size's applicability for LM capability assessment is confined to homogeneous architectural systems.

Among our refined metrics, Compression (SE) shows a reduced size correlation (0.430 globally) but achieves a 0.917 cross-architecture capability correlation, demonstrating its effectiveness in capturing capability differences across diverse architectures. Both Semantic CV and Compression (PCS) maintain dual high correlations with size and capabilities within model families while sustaining stable cross-architecture capability correlations (0.926 and 0.965, respectively), with size correlations moderately decreasing to 0.805 and 0.708. This demonstrates that our refined metrics achieve significantly stronger alignment with LMs' comprehensive capabilities than the original compression metric. By accounting for compression hacking, we substantially enhance the informatics interpretation of LMs from the geometric-distortion perspective of representations, thereby extending the "compression as intelligence" concept.

4.3 Comparison with Baseline Metrics

Table 2 systematically presents the Spearman correlation coefficients between the ground-truth benchmarks and both the internal-representation-based baseline metrics and our refined metrics. The property column identifies whether a metric describes an informatics (Info.) or geometric (Geom.) property.
Notably, metrics that model only a single property (either informational or geometric), such as Semantic Volume, Curvature, Compression (DE), Anisotropy, and their modified versions (w/ Remove Directions), all exhibit correlation coefficients with the comprehensive score below 0.5.
Although Diff-eRank incorporates spectral entropy characteristics, its results still fail to reflect comprehensive capabilities, possibly because this metric focuses on the noise-reduction process of knowledge acquisition while neglecting the synergy between informatics and geometric properties. Experiments show that the compression metrics modified by Whitening and LW Shrinkage, although aiming to decouple anisotropic features, still do not significantly improve capability alignment. It is noteworthy that our refined metrics in Figure 4 demonstrate significant advantages over the baseline metrics.

4.4 Effect of Anisotropy Razors

Table 2 reveals that, as anisotropy razors, Remove Directions, Whitening, and LW Shrinkage all fail to effectively improve the reflection of comprehensive capabilities, whereas PCS yields a significant improvement. In this section, we investigate the structural changes in representations before and after processing with these anisotropy razors, conducting an in-depth mechanistic analysis of PCS's advantages over the other methods.

[Figure 5: Q-Q plots of the eigenvalue distribution before and after applying the different anisotropy razors, and the distribution of the partition function Z(c).]

The Q-Q plots in Figure 5 illustrate the eigenvalue distributions before and after applying the four razors. The first three methods decouple anisotropy while maintaining the linear geometric structure of the data, so the resulting eigenvalues still exhibit distinct partitioning. In contrast, PCS upscales the low-eigenvalue region, ensuring that the corrected compression relies entirely on the contributions of the principal components. The formal method for anisotropy detection examines the "self-normalization" property (i.e., Z(c) tending toward a constant, independent of c) (Mu and Viswanath, 2018). Figure 5 also shows the distribution of Z(c) before and after applying the different anisotropy razors. We observe that Remove Directions leads to a more dispersed Z(c) distribution, increasing anisotropy; truncating certain directions causes the remaining ones to spread more extremely. In contrast, Whitening, LW Shrinkage, and PCS concentrate the Z(c) distribution. Notably, PCS achieves a more pronounced elimination of anisotropy than the other methods by rigidly correcting the eigenvalue distribution.

5 Conclusion

We introduce a notable characteristic of language models termed "compression hacking", where noisy directions in LM representations feign high compression rates by sacrificing spatial uniformity, thereby distorting information compression metrics. Through spectral entropy quantification, a semantic coefficient of variation, and a manifold correction protocol based on principal component smoothing, we refine the compression measurement framework.
Extensive experiments on 18 mainstream language models demonstrate that the refined metrics achieve strong alignment with models' actual capabilities. These results show that incorporating the geometric distortion perspective through compression hacking significantly enhances the informatics interpretation of LMs.

6 Limitations

The metrics we propose have broader application
scenarios worth exploring. For instance, practical techniques such as pruning, quantization, and distillation could potentially benefit from these indicators that reveal internal redundancies. Our proposed metrics help better identify compressible components in models without causing significant information loss. We anticipate that these refined metrics may open new avenues for future research, exploring how such internal representation indica- tors can be applied to various potential scenarios. References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. Transac- tions of the Association for Computational Linguis- tics, 4:385–399. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems , 33:1877–1901. Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. 2019. Isotropy in the contextual embedding space: Clusters and manifolds. In International con- ference on learning representations . Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799 . Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2023a. Inside: Llms’ internal states retain the power of hal- lucination detection. In The Twelfth International Conference on Learning Representations . Jun Chen, Yong Fang, Ashish Khisti, Ayfer Özgür, and Nir Shlezinger. 2025. Information compression in the ai era: Recent advances and future challenges. IEEE Journal on Selected Areas in Communications . Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony Xia. 2023b. Theoremqa: A theorem-driven question answering dataset. In Proceedings of the 2023 Con- ference on Empirical Methods in Natural Language Processing . Association for Computational Linguis- tics. Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free dolly: Introducing the world’s first truly open instruction- tuned llm.Gregoire Deletang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, et al. 2023. Lan- guage modeling is compression. In The Twelfth Inter- national Conference on Learning Representations . Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, et al. 2023. Lan- guage modeling is compression. arXiv preprint arXiv:2309.10668 . David Demeter, Gregory Kimmel, and Doug Downey. 2020. Stolen probability: A structural weakness of neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 2191–2197. Kawin Ethayarajh. 2019. How contextual are contextu- alized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) . Association for Computational Linguistics. Wikimedia Foundation. 2025. Wikimedia downloads. Accessed: 2025-03-02. 
Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019a. Representation degeneration problem in training natural language generation mod- els.arXiv preprint
arXiv:1907.12009 . Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2019b. Representation degeneration problem in training natural language generation mod- els. In International Conference on Learning Repre- sentations . Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence em- beddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , page 6894. Association for Computational Linguis- tics. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The llama 3 herd of models. arXiv e-prints , pages arXiv–2407. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language under- standing. arXiv preprint arXiv:2009.03300 . Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Ja- cob Steinhardt. 2021. Measuring mathematical prob- lem solving with the math dataset. arXiv preprint arXiv:2103.03874 . Eghbal Hosseini and Evelina Fedorenko. 2023. Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive repre- sentation of natural language. Advances in Neural Information Processing Systems , 36:43918–43930. Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Day- iheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. 2024. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186 . Ting Jiang, Jian Jiao, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Denvy Deng, and Qi Zhang. 2022. Prompt- bert: Improving bert sentence embeddings with prompts. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing , pages 8826–8837. Olivier Ledoit and Michael Wolf. 2004. A well- conditioned estimator for large-dimensional covari- ance matrices. Journal of multivariate analysis , 88(2):365–411. Xiaomin Li, Zhou Yu, Ziji Zhang, Yingying Zhuang, Swair Shah, and Anurag Beniwal. 2025. Seman- tic volume: Quantifying and detecting both exter- nal and internal uncertainty in llms. arXiv preprint arXiv:2502.21239 . Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out , pages 74–81. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Truthfulqa: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 3214–3252. Patrick E McKnight and Julius Najab. 2010. Mann- whitney u test. The Corsini encyclopedia of psychol- ogy, pages 1–1. David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 2873–2878. Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word repre- sentations. In International Conference on Learning Representations . Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, and Yoav Shoham. 2024. Generating benchmarks for factuality evalua- tion of language models. 
Georg Pichler, Pierre Jean A. Colombo, Malik Boudiaf, Günther Koliander, and Pablo Piantanida. 2022. A differential entropy estimator for training neural networks. In International Conference on Machine Learning, pages 17691–17715. PMLR.
Olivier Roy and Martin Vetterli. 2007. The effective rank: A measure of effective dimensionality. In 2007 15th European Signal Processing Conference, pages 606–610. IEEE.
William Rudman and Carsten Eickhoff. 2024. Stable anisotropic regularization. In ICLR.
William Rudman, Nate Gillman, Taylor Rayne, and Carsten Eickhoff. 2022. IsoScore: Measuring the uniformity of embedding space utilization. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3325–3339.
Yutaka Sasaki et al. 2007. The truth of the F-measure. Teach Tutor Mater, 1(5):1–5.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892.
Oscar Skean, Md Rifat Arefin, Dan Zhao, Niket Patel, Jalal Naghiyev, Yann LeCun, and Ravid Shwartz-Ziv. 2025. Layer by layer: Uncovering hidden representations in language models. arXiv preprint arXiv:2502.02013.
Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. arXiv preprint arXiv:2103.15316.
Ilya Sutskever. 2023. Stronger compressors find more shared structure. The Ilya's Talk.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158.
Zhiquan Tan, Lai Wei, Jindong Wang, Xing Xie, and Weiran Huang. 2024. Can I understand what I create? Self-knowledge evaluation of large language models. arXiv preprint arXiv:2406.06140.
Yiming Wang, Pei Zhang, Baosong Yang, Derek Wong, Zhuosheng Zhang, and Rui Wang. 2024a. Embedding trajectory for out-of-distribution detection in mathematical reasoning. Advances in Neural Information Processing Systems, 37:42965–42999.
Yiming Wang, Pei Zhang, Baosong Yang, Derek F. Wong, and Rui Wang. 2024b. Latent space chain-of-embedding enables output-free LLM self-evaluation. arXiv preprint arXiv:2410.13640.
Lai Wei, Zhiquan Tan, Chenghai Li, Jindong Wang, and Weiran Huang. 2024. Diff-eRank: A novel rank-based metric for evaluating large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Sangwon Yu, Jongyoon Song, Heeseung Kim, Seongmin Lee, Woo-Jong Ryu, and Sungroh Yoon. 2022. Rare tokens degenerate all tokens: Improving neural text generation via adaptive gradient gating for rare token embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 29–45.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022a. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
Yanzhao Zhang, Richong Zhang, Samuel Mensah, Xudong Liu, and Yongyi Mao. 2022b. Unsupervised sentence representation via contrastive learning with mixing negatives. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11730–11738.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.
Zhanghao Zhouyin and Ding Liu. 2023. Understanding neural networks with logarithm determinant entropy estimator.
A Related Work and Further Analysis

A.1 Evaluation of Language Models

The evaluation of language models is currently in a state of rapid iterative development, encompassing a variety of tasks, datasets, and benchmarks (Celikyilmaz et al., 2020; Zheng et al., 2023; Tan et al., 2024). Traditional evaluation metrics such as accuracy, F1-score (Sasaki et al., 2007), BLEU (Sellam et al., 2020), and ROUGE (Lin, 2004) focus on comparing model predictions with annotated labels in downstream tasks. Other metrics like perplexity and cross-entropy loss do not rely on annotated labels and are computed solely from the input text. However, these methods primarily emphasize external evaluation based on model predictions.

Recently, significant efforts have been devoted to exploring the mechanisms by which language models (LMs) process information internally, driving the development of LM self-evaluation (Wei et al., 2024; Wang et al., 2024a,b) independent of specific tasks and model outputs. The concept of "compression as intelligence" has provided an information-theoretic internal evaluation perspective for language models, highlighting that the acquisition of world knowledge by language models is a denoising process (Sutskever, 2023; Deletang et al., 2023; Wei et al., 2024; Chen et al., 2025). Differential entropy of representations, as a classical information-theoretic measure, effectively quantifies the internal uncertainty of language models (Chen et al., 2023a; Zhouyin and Liu, 2023). Semantic volume (Li et al., 2025) leverages representation-level, differential-entropy-aware compression metrics to offer a novel perspective for language model evaluation. However, related work has found that such compression only tracks the scale of language models and fails to align with their capabilities (Wei et al., 2024; Li et al., 2025).

We introduce the concept of compression hacking in language model representations, where the noisy directions of LM representations sacrifice spatial uniformity to feign high compression rates. This implies that we can refine the information-compression perspective by accounting for the geometric distortions in the language model's representation space.
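To ground the entropy-based view, the following is a minimal NumPy sketch of a log-determinant compression score over sampled representations. The Gaussian differential-entropy reading and the `eps` ridge are simplifying assumptions on our part, not the exact estimators used in the works cited above.

```python
import numpy as np

def logdet_compression(Z: np.ndarray, eps: float = 1e-8) -> float:
    """Entropy-style compression score for a (n_samples, dim) matrix of
    representations: under a Gaussian assumption, differential entropy is,
    up to constants, 0.5 * log det(covariance); a lower value reads as a
    higher apparent compression rate."""
    Zc = Z - Z.mean(axis=0, keepdims=True)        # zero-center
    cov = Zc.T @ Zc / max(len(Zc) - 1, 1)         # sample covariance
    cov += eps * np.eye(cov.shape[0])             # ridge to ensure full rank
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * logdet

# Toy illustration of "compression hacking": squashing most directions
# (anisotropy) lowers the score without adding any real structure.
rng = np.random.default_rng(0)
iso = rng.normal(size=(800, 64))                  # roughly isotropic cloud
aniso = iso * np.linspace(1.0, 1e-3, 64)          # narrow, cone-like cloud
print(f"isotropic:   {logdet_compression(iso):8.1f}")
print(f"anisotropic: {logdet_compression(aniso):8.1f}  (looks 'more compressed')")
```

The anisotropic cloud scores far lower despite carrying no extra structure, which is exactly the failure mode the next subsection examines.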
A.2 Anisotropy of Language Models

Anisotropy. The anisotropy of language models reflects the geometric properties of the contextual embedding space. Related studies have observed that, during sampling, the spatial embeddings of negative samples exhibit anisotropy, which describes how vectors are distributed within the contextual space (Mimno and Thompson, 2017; Ethayarajh, 2019). Researchers found that most vectors occupy a relatively narrow cone within the space, and that vectors within this cone tend to have high cosine similarity (Gao et al., 2019b). Demeter pointed out that using softmax introduces structural weaknesses in the representation space, leading to bias, a common issue in language models (Demeter et al., 2020). To better quantify the anisotropy of LMs, related work has identified isolated clusters and low-dimensional manifolds in the contextual embedding space, introducing tools for their qualitative and quantitative analysis (Ethayarajh, 2019; Cai et al., 2019; Rudman et al., 2022). However, these tools are mainly based on similarity calculations over embedded representations. What is needed instead is an anisotropy metric that can establish a connection with the entropy-based compression metric.

Figure 6: The eigenvalues and their negative logarithmic distributions of different models' representations before and after processing with different anisotropy razors. (Per-model panels for Qwen2.5-3B-Instruct, Qwen2.5-7B, LLaMA3.1-8B, and OPT-13B show Original, +Remove Directions, +Whitening, +LW Shrinkage, and +PCS spectra, each annotated with -logdet and condition number; +PCS collapses the condition number to roughly 1.2 for all four models.)

Anisotropy Razors. To mitigate anisotropy in language models, existing research has proposed various solutions.
Contrastive learning has emerged as a powerful tool for obtaining effective sentence representations, reducing anisotropy by increasing the spatial distance between positive and negative samples (Gao et al., 2021; Zhang et al., 2022b; Jiang et al., 2022). In this work, we employ post-processing methods applied directly to the representation space as baseline approaches for the anisotropy razor (a code sketch of these baselines follows the list):

• Remove Directions (Mu and Viswanath, 2018): First, subtract the common mean vector of all word vectors to eliminate global bias; then remove the top high-variance principal-component directions via Principal Component Analysis (PCA). This enhances semantic feature discriminability by eliminating non-semantic common information from the word vectors, making the word-space distribution more isotropic.

• Whitening (Su et al., 2021): Zero-center the representations and transform the covariance matrix into an identity matrix, forcing the embedding distribution toward isotropy.

• LW Shrinkage (Ledoit and Wolf, 2004): Linearly shrink the sample covariance matrix toward a diagonal target to reduce noise interference in high-dimensional data, yielding more stable covariance estimates. This operation mitigates excessive sensitivity in specific directions, promoting isotropic feature distributions.
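As referenced above, all three baselines can be phrased as simple post-processing of the centered representation matrix. Below is a minimal NumPy sketch under our own simplifications: the number of removed components and the fixed shrinkage intensity are illustrative placeholders (Ledoit-Wolf derives the intensity from the data).

```python
import numpy as np

def remove_directions(Z: np.ndarray, k: int = 3) -> np.ndarray:
    """Mu and Viswanath (2018): subtract the common mean, then project out
    the top-k high-variance principal directions."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    top = Vt[:k]                                   # (k, dim) principal axes
    return Zc - Zc @ top.T @ top

def whiten(Z: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Su et al. (2021): zero-center and map the covariance to identity."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    cov = Zc.T @ Zc / max(len(Zc) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Zc @ W

def lw_shrink(Z: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Ledoit and Wolf (2004), simplified: shrink the sample covariance
    toward the scaled identity mu*I and return the stabilized estimate."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    cov = Zc.T @ Zc / max(len(Zc) - 1, 1)
    mu = np.trace(cov) / cov.shape[0]
    return (1.0 - beta) * cov + beta * mu * np.eye(cov.shape[0])
```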
These training-free paradigms provide references for decoupling anisotropy from compression. However, these methods maintain the linear geometric structure of the data, with eigenvalues still exhibiting consistent partitioning behavior. Figure 6 demonstrates the distribution changes in eigenvalues and their negative logarithms after applying these baseline anisotropy-razor post-processing methods. The results show that the distributions after Remove Directions, Whitening, and LW Shrinkage retain their original forms, leaving the cross-model relationships of the modified compression metrics relatively unchanged. Consequently, we propose principal component smoothing to force eigenvalues toward dominant features. As shown in Figure 6, this approach induces significant changes in eigenvalue distributions.

B Statistical Properties of Principal Component Smoothing

Lemma B.1 (Asymptotic Optimality of Ledoit-Wolf Shrinkage (Ledoit and Wolf, 2004)). Let $\Sigma \in \mathbb{R}^{D \times D}$ be the population covariance matrix and $\Sigma_Z = \frac{1}{|V|} Z^\top Z$ the sample covariance. The Ledoit-Wolf estimator

$$\hat{\Sigma}_{\mathrm{LW}} = (1-\beta_{\mathrm{LW}})\,\Sigma_Z + \beta_{\mathrm{LW}}\,\mu I, \qquad \mu = \frac{1}{D}\,\mathrm{tr}(\Sigma_Z) \tag{11}$$

attains minimal MSE when the shrinkage intensity satisfies $\beta_{\mathrm{LW}} \asymp \frac{1}{|V|}$. Under general covariance structures (without spectral sparsity), this yields the asymptotic MSE

$$\mathrm{MSE}(\hat{\Sigma}_{\mathrm{LW}}) \asymp \mathcal{O}\!\left(\frac{D}{|V|}\right). \tag{12}$$

Theorem B.2 (Statistical Stability of the Principal Component Smoothing Estimator). Assume the true covariance matrix $\Sigma$ has a dominant eigenvalue $\lambda_1^* = \max_d \lambda_d \gg \lambda_d^*\ (d \ge 2)$, i.e., spectral sparsity holds. Define the improved shrinkage estimator as

$$\hat{\Sigma}_{\mathrm{PCS}} = (1-\beta_{\mathrm{PCS}})\,\Sigma_Z + \beta_{\mathrm{PCS}}\,\lambda_1 I, \qquad \beta_{\mathrm{PCS}} \asymp \mathcal{O}\!\left(\frac{1}{\sqrt{|V|}}\right), \tag{13}$$

where $\lambda_1$ is the largest eigenvalue of the sample covariance matrix $\Sigma_Z$ and satisfies $\lambda_1 \to \lambda_1^*$ in probability as $|V| \to \infty$. When the sample size $|V|$ is sufficiently large,

$$\mathrm{MSE}(\hat{\Sigma}_{\mathrm{PCS}}) < \mathrm{MSE}(\hat{\Sigma}_{\mathrm{LW}}). \tag{14}$$

Proof. We commence by analyzing the mean squared error (MSE) structure of covariance matrix estimators. Let $\|\cdot\|_F$ denote the Frobenius norm; the MSE decomposes into bias and variance components:

$$\mathrm{MSE}(\hat{\Sigma}) = \underbrace{\big\|\mathbb{E}[\hat{\Sigma}] - \Sigma\big\|_F^2}_{\mathrm{Bias}^2} + \underbrace{\mathbb{E}\big\|\hat{\Sigma} - \mathbb{E}[\hat{\Sigma}]\big\|_F^2}_{\mathrm{Variance}}. \tag{15}$$

For the Ledoit-Wolf estimator $\hat{\Sigma}_{\mathrm{LW}} = (1-\beta_{\mathrm{LW}})\Sigma_Z + \beta_{\mathrm{LW}}\mu I$, under spectral sparsity $\lambda_1^* \gg \sum_{d=2}^{D}\lambda_d^*/D$, the shrinkage target $\mu \approx \lambda_1^*/D$ creates dominant bias from the leading eigenvalue:

$$\mathrm{Bias}^2_{\mathrm{LW}} \approx \beta_{\mathrm{LW}}^2 \|\Sigma - \mu I\|_F^2 = \beta_{\mathrm{LW}}^2\left[(\lambda_1^* - \mu)^2 + \sum_{d=2}^{D}(\lambda_d^* - \mu)^2\right] \asymp \beta_{\mathrm{LW}}^2\,(\lambda_1^*)^2 \left(1 - \frac{1}{D}\right)^2. \tag{16}$$

According to Lemma B.1, the variance term inherits from the sample covariance matrix with dimension scaling:

$$\mathrm{Variance}_{\mathrm{LW}} \approx (1-\beta_{\mathrm{LW}})^2 \cdot \mathcal{O}\!\left(\frac{D^2}{|V|}\right) \asymp \mathcal{O}\!\left(\frac{D^2}{|V|}\right), \tag{17}$$

where the $\mathcal{O}(D^2/|V|)$ scaling comes from concentration of the sample covariance in high dimensions.

For our eigenvalue-shrinkage estimator $\hat{\Sigma}_{\mathrm{PCS}} = (1-\beta_{\mathrm{PCS}})\Sigma_Z + \beta_{\mathrm{PCS}}\lambda_1 I$, the preserved leading-eigenvalue estimate $\lambda_1 \xrightarrow{p} \lambda_1^*$ fundamentally alters the bias-variance tradeoff. The bias now originates from the minor eigenvalues:

$$\mathrm{Bias}^2_{\mathrm{PCS}} = \beta_{\mathrm{PCS}}^2 \sum_{d=2}^{D} (\lambda_d^* - \lambda_1^*)^2 \asymp \beta_{\mathrm{PCS}}^2\,(D-1)(\lambda_1^*)^2, \tag{18}$$

where the last approximation uses $\lambda_d^* \ll \lambda_1^*$ from spectral sparsity. The variance term splits into two parts:

$$\mathrm{Variance}_{\mathrm{PCS}} = \underbrace{(1-\beta_{\mathrm{PCS}})^2\,\mathrm{Var}\!\left(\sum_{d=2}^{D}\lambda_d\right)}_{\asymp\,\mathcal{O}\left((D-1)\lambda_1^{*2}/|V|\right)} + \underbrace{\beta_{\mathrm{PCS}}^2\,\mathrm{Var}(\lambda_1)}_{\asymp\,\mathcal{O}\left(\lambda_1^{*2}/|V|\right)}. \tag{19}$$

With the optimal shrinkage intensity $\beta_{\mathrm{PCS}} = \mathcal{O}(1/\sqrt{|V|})$, the dominant variance term becomes

$$\mathrm{Variance}_{\mathrm{PCS}} \asymp \mathcal{O}\!\left(\frac{(D-1)\lambda_1^{*2}}{|V|}\right). \tag{20}$$

The MSE comparison reveals fundamental differences in scaling laws. For $\hat{\Sigma}_{\mathrm{LW}}$ with $\beta_{\mathrm{LW}} = \mathcal{O}(1/|V|)$:

$$\mathrm{MSE}(\hat{\Sigma}_{\mathrm{LW}}) \asymp \underbrace{\mathcal{O}\!\left(\frac{\lambda_1^{*2}}{|V|^2}\right)}_{\mathrm{Bias}^2} + \underbrace{\mathcal{O}\!\left(\frac{D^2}{|V|}\right)}_{\mathrm{Variance}}. \tag{21}$$

For $\hat{\Sigma}_{\mathrm{PCS}}$ with dimension-adaptive shrinkage:

$$\mathrm{MSE}(\hat{\Sigma}_{\mathrm{PCS}}) \asymp \underbrace{\mathcal{O}\!\left(\frac{(D-1)\lambda_1^{*2}}{|V|}\right)}_{\mathrm{Bias}^2} + \underbrace{\mathcal{O}\!\left(\frac{(D-1)\lambda_1^{*2}}{|V|}\right)}_{\mathrm{Variance}}. \tag{22}$$

When $|V| \to \infty$, the $\mathcal{O}(1/|V|)$ terms dominate $\mathcal{O}(1/|V|^2)$. Under spectral sparsity $\lambda_1^* \gg \lambda_d^*\ (d \ge 2)$, the improvement ratio becomes

$$\frac{\mathrm{MSE}(\hat{\Sigma}_{\mathrm{PCS}})}{\mathrm{MSE}(\hat{\Sigma}_{\mathrm{LW}})} \asymp \frac{D\lambda_1^{*2}/|V|}{D^2/|V|} = \frac{\lambda_1^{*2}}{D} \ll 1, \tag{23}$$

where the inequality follows from $\lambda_1^{*2}/D \le \big(\sum_{d=1}^{D}\lambda_d^*\big)^2/D^2$ by Cauchy-Schwarz. ∎
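For experimentation, here is a minimal NumPy sketch of the two estimators from Equations (11) and (13). The unit constants inside the 1/|V| and 1/√|V| intensities are our placeholders, and whether PCS wins on a given draw depends on how strongly the spectral-sparsity regime assumed by Theorem B.2 holds.

```python
import numpy as np

def lw_estimator(S: np.ndarray, beta: float) -> np.ndarray:
    """Eq. (11): shrink the sample covariance toward mu*I, mu = tr(S)/D."""
    mu = np.trace(S) / S.shape[0]
    return (1.0 - beta) * S + beta * mu * np.eye(S.shape[0])

def pcs_estimator(S: np.ndarray, beta: float) -> np.ndarray:
    """Eq. (13): shrink toward lambda_1*I, the leading sample eigenvalue,
    so the dominant direction sets the smoothing target."""
    lam1 = np.linalg.eigvalsh(S)[-1]
    return (1.0 - beta) * S + beta * lam1 * np.eye(S.shape[0])

# Spectrally sparse ground truth: one dominant eigenvalue, flat bulk
rng = np.random.default_rng(0)
D, n = 64, 4096
Sigma = np.diag(np.concatenate([[10.0], np.full(D - 1, 0.05)]))
Z = rng.multivariate_normal(np.zeros(D), Sigma, size=n)
S = Z.T @ Z / n

for name, est in (("LW ", lw_estimator(S, 1.0 / n)),
                  ("PCS", pcs_estimator(S, 1.0 / np.sqrt(n)))):
    err = np.linalg.norm(est - Sigma, "fro") ** 2
    print(f"{name}: Frobenius error^2 = {err:9.4f}, cond = {np.linalg.cond(est):9.1f}")
```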
Table 3: The R² and p-values of the compression-anisotropy regression fitting curves across different models, where **, ***, and **** denote statistical significance at the 1%, 0.1%, and 0.01% levels respectively.

| Model | R² | p-value |
| LLaMA3.1-8B | 0.89 | **** |
| LLaMA3.1-8B-Instruct | 0.79 | *** |
| LLaMA3.2-1B | 0.80 | **** |
| LLaMA3.2-1B-Instruct | 0.78 | *** |
| LLaMA3.2-3B | 0.89 | **** |
| LLaMA3.2-3B-Instruct | 0.77 | **** |
| OPT-0.125B | 0.88 | **** |
| OPT-1.3B | 0.76 | **** |
| OPT-2.7B | 0.66 | ** |
| OPT-6.7B | 0.91 | **** |
| OPT-13B | 0.80 | *** |
| OPT-30B | 0.83 | **** |
| Qwen2.5-0.5B-Instruct | 0.81 | **** |
| Qwen2.5-1.5B-Instruct | 0.86 | *** |
| Qwen2.5-3B-Instruct | 0.80 | **** |
| Qwen2.5-7B-Instruct | 0.79 | **** |
| Qwen2.5-14B-Instruct | 0.85 | **** |
| Qwen2.5-32B-Instruct | 0.83 | **** |

C Significance Analysis

Our evaluation results presented in Table 3 demonstrate a strong and statistically significant relationship between compression and anisotropy across the 18 open-source language models examined. The high R² values (ranging from 0.7 to 0.9 for most models) indicate that linguistic anisotropy accounts for a substantial proportion of the observed compression phenomena. Furthermore, the compression-anisotropy synchronization proves statistically significant at stringent confidence levels (p<0.001 or p<0.01) for the majority of models. These robust and consistent findings across diverse architectures provide compelling empirical evidence that compression hacking is not merely an artifact but rather an intrinsic and fundamental characteristic of language model representations, revealing important insights about their underlying geometric properties.
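The statistics behind Table 3 and the pairwise tests below can be reproduced in spirit with scipy; the arrays in this sketch are synthetic stand-ins, since the exact per-model measurements being regressed are not spelled out in this excerpt.

```python
import numpy as np
from scipy import stats

def regression_significance(anisotropy: np.ndarray, compression: np.ndarray):
    """R^2 and slope p-value of the compression-anisotropy fit, as reported
    per model in Table 3."""
    fit = stats.linregress(anisotropy, compression)
    return fit.rvalue ** 2, fit.pvalue

def pairwise_difference(values_a: np.ndarray, values_b: np.ndarray):
    """Mann-Whitney U test between two models' fitted quantities, as in
    Figure 7; a small p-value means the fitting curves are distinguishable."""
    return stats.mannwhitneyu(values_a, values_b)

rng = np.random.default_rng(0)
aniso = rng.uniform(0.1, 0.9, size=30)
model_a = 2.0 * aniso + rng.normal(0.0, 0.1, size=30)   # tight linear fit
model_b = 0.5 * aniso + rng.normal(0.0, 0.3, size=30)   # weaker, offset fit
print(regression_significance(aniso, model_a))
print(pairwise_difference(model_a, model_b))
```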
Figure 7: The Mann-Whitney U tests of compression-anisotropy regression fitting between different models, where *** and **** denote statistical significance at the 0.1% and 0.01% levels respectively. (Heatmap over all pairs of the 18 LLaMA, OPT, and Qwen2.5 models.)

Figure 7 presents the Mann-Whitney U test results for compression-anisotropy regression fitting across different models. Our analysis reveals that the differences between most model pairs achieve statistical significance at rigorous levels. These statistically significant variations in compression-anisotropy fitting curves demonstrate that the information compression metric, when adjusted for compression-hacking effects, can effectively capture meaningful distinctions in model capabilities. This finding provides empirical validation that our refined compression-based evaluation framework offers discriminative power for comparing performance differences across language model architectures.

D Implementation Details of the Evaluation Pipeline

For the projection dataset, we primarily collected 1,000 data samples from the pretraining corpus (Wiki (Foundation, 2025)) and the instruction-tuning dataset (Dolly-15k (Conover et al., 2023)) to derive projection data. By sampling the word representations of these data points, we aim to estimate the full model's representation space, ensuring the convergence of our metrics. Our pipeline defaults to sampling 800 data samples. Figure 8 illustrates the cumulative expected values of different metrics as the number of samples increases. We observe that all metrics converge relatively early to stable values, demonstrating that our refined metrics enable robust evaluation based on the provided projection dataset.

Figure 8: The cumulative expected values of different metrics as the number of samples increases. (Panels for LLaMA3.2-3B-Instruct, OPT-13B, and Qwen2.5-3B-Instruct, tracking Compression (DE), Compression (SE), Semantic CV, and Compression (PCS) over 0-800 samples.)
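A convergence check like Figure 8 can be scripted as a running estimate over growing prefixes of the projection set; the mean-variance metric below is only a stand-in for the DE/SE/PCS metrics used in the paper.

```python
import numpy as np

def cumulative_curve(samples: np.ndarray, metric, step: int = 50) -> np.ndarray:
    """Value of `metric` on growing prefixes of the sampled representations.
    If the curve flattens well before the full budget (800 samples here),
    the metric has converged and the sample budget is sufficient."""
    sizes = range(step, len(samples) + 1, step)
    return np.array([metric(samples[:n]) for n in sizes])

rng = np.random.default_rng(0)
reps = rng.normal(size=(800, 128))            # stand-in word representations
curve = cumulative_curve(reps, lambda Z: float(np.var(Z, axis=0).mean()))
print(np.round(curve, 4))                     # should stabilize early
```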
Figure 9: The correlation coefficients between compression (PCS) and ground truth under different smoothing coefficients.

For the hyperparameter α that ensures full-rank covariance matrices, we selected 10⁻⁸. Regarding the smoothing coefficient (β) for principal component smoothing, we determined the interval [0.6, 1] to be appropriate. Figure 9 illustrates how different choices of the principal component smoothing coefficient affect the compression (PCS). It can be observed that when β falls within [0.6, 1], the results maintain a strong correlation with the ground truth. This occurs because the principal directions already dominate the compression computation. As the smoothing coefficient decreases, noise directions gradually regain prominence in the compression calculation.
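The β-dependence can be sketched directly on a covariance spectrum. We assume here that smoothing acts on eigenvalues as λ' = (1-β)λ + βλ_max, matching the spectrum of the PCS estimator above, with α as the full-rank jitter; the Pareto spectrum is synthetic.

```python
import numpy as np

def pcs_spectrum(eigs: np.ndarray, beta: float, alpha: float = 1e-8) -> np.ndarray:
    """Smooth covariance eigenvalues toward the leading one; alpha is the
    full-rank jitter chosen above (1e-8)."""
    eigs = eigs + alpha
    return (1.0 - beta) * eigs + beta * eigs.max()

rng = np.random.default_rng(0)
eigs = np.sort(rng.pareto(1.5, size=2048))[::-1]     # heavy-tailed spectrum

for beta in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    lam = pcs_spectrum(eigs, beta)
    neg_logdet = -np.sum(np.log(lam))
    cond = lam.max() / lam.min()
    print(f"beta={beta:.1f}  -logdet={neg_logdet:10.1f}  cond={cond:12.1f}")
```

For β above roughly 0.6 the leading eigenvalue dominates every smoothed value, so the score stops being driven by the noisy tail, consistent with the interval reported above.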
arXiv:2505.17795v1 [cs.CL] 23 May 2025

DialogXpert: Driving Intelligent and Emotion-Aware Conversations through Online Value-Based Reinforcement Learning with LLM Priors

Tazeek Bin Abdur Rakib¹, Ambuj Mehrish², Lay-Ki Soon¹, Wern Han Lim¹, Soujanya Poria²
¹School of Information Technology, Monash University Malaysia
²Singapore University of Technology and Design
{soon.layki, lim.wern.han, tazeek.binabdurrakib}@monash.edu
{ambuj_mehrish, sporia}@sutd.edu.sg

Abstract

Large-language-model (LLM) agents excel at reactive dialogue but struggle with proactive, goal-driven interactions due to myopic decoding and costly planning. We introduce DialogXpert, which leverages a frozen LLM to propose a small, high-quality set of candidate actions per turn and employs a compact Q-network over fixed BERT embeddings, trained via temporal-difference learning, to select optimal moves within this reduced space. By tracking the user's emotions, DialogXpert tailors each decision to advance the task while nurturing a genuine, empathetic connection. Across negotiation, emotional support, and tutoring benchmarks, DialogXpert drives conversations to under 3 turns with success rates exceeding 94% and, with a larger LLM prior, pushes success above 97% while markedly improving negotiation outcomes. This framework delivers real-time, strategic, and emotionally intelligent dialogue planning at scale.¹

1 Introduction

Recent advances in large language models (LLMs) such as ChatGPT (OpenAI, 2022), Vicuna (Zheng et al., 2023a), and LLaMA2-Chat (Ouyang et al., 2022; Touvron et al., 2023) have significantly enhanced open-domain dialogue systems, enabling fluent, context-aware, and intent-aligned responses (Hu et al., 2023). However, these systems remain largely reactive: adept at replying to user input but limited in proactively steering conversations toward specific goals. Domains such as negotiation, emotional support, and tutoring require initiative and long-term planning (Deng et al., 2023a; Kang et al., 2024; Song et al., 2024), which current LLMs often lack (Deng et al., 2025).

This limitation stems from their turn-by-turn generation, typically guided by greedy decoding, which overlooks future dialogue objectives (Levin et al., 1997; Cheng et al., 2022). Although techniques like Monte Carlo Tree Search (MCTS) (Silver et al., 2016; Zhao et al., 2024) and A* search (Hart et al., 1968) offer deeper look-ahead (Väth et al., 2023), they are computationally expensive and unsuitable for real-time use.

Prior to LLMs, dialogue planning relied on supervised learning over annotated corpora (Zhou et al., 2020; Joshi et al., 2021; Cheng et al., 2022; Wang et al., 2023b; Deng et al., 2023b, 2022), focusing on dialogue act prediction. These approaches were static, domain-specific, and difficult to scale, often failing to adapt to evolving user behavior or optimize long-term outcomes. While LLMs introduced a new paradigm, efficient and goal-driven dialogue planning remains an open challenge.

To mitigate these challenges, recent frameworks such as Plug-and-Play Dialogue Policy Planning (PPDPP) (Deng et al., 2024) have emerged. PPDPP fine-tunes a compact RoBERTa-based (Liu et al., 2019) policy language model using supervised learning and further optimizes it through self-play (Silver et al., 2017) with LLM-based user and reward simulators.

¹Code available at https://github.com/declare-lab/dialogxpert/
This approach is computationally efficient, requiring only a single forward pass per turn, but remains inherently myopic. It selects actions greedily, lacks multi-turn foresight, and is constrained by the limited zero- or few-shot generalization capabilities of the frozen policy model.
Consequently, the agent may choose locally optimal but globally suboptimal actions and struggle with out-of-distribution states.

Dual-Process Dialogue Planner (DPDP) (He et al., 2024) improves over PPDPP with Kahneman's dual-process theory (Kahneman, 2003), pairing a fast RoBERTa policy (System 1) with an MCTS planner (System 2) triggered under uncertainty (Anthony et al., 2017). While this boosts look-ahead reasoning, repeated rollouts and reward simulations incur high latency, and its heuristic gating can misjudge when deeper reasoning is needed. Moreover, both DPDP and PPDPP rely on compact, fine-tuned models that either plan too greedily or at excessive computational cost.

We propose the LLM-Prior Planning Paradigm, which leverages frozen LLMs' generalization without full-tree planning overhead. At each turn, a frozen LLM (e.g., Qwen-2.5 14B (Bai et al., 2023)) produces a top-k set of semantically coherent actions, forming a concise prior (Bengio, 2017; Korbak et al., 2022). A lightweight Q-network, trained via Q-learning on fixed BERT embeddings of state-action pairs (Devlin, 2018; Mnih et al., 2013), performs localized rollouts within this candidate set and updates value estimates through temporal-difference learning (Watkins and Dayan, 1992; Tesauro et al., 1995; Yan et al., 2024). This reduces expensive LLM calls, avoids exhaustive tree expansion, and converges rapidly even in compact action spaces.

Importantly, dialogue effectiveness depends not only on task success but also on emotional resonance (Chen et al., 2023; Asghar et al., 2020). To this end, we introduce DialogXpert, an LLM-Prior framework enhanced with a dedicated emotion-tracking component. After each system turn, the Emotion Tracker infers the user's current feelings, for example distress or engagement, from the chosen action and preceding context. These inferred emotions are folded into the planner's state representation, allowing DialogXpert to trade off goal progress against rapport building. As a result, the agent avoids abrupt or tone-deaf responses, producing conversations that feel both effective and genuinely empathetic (Zhao et al., 2023).

Our contributions are: (1) the DialogXpert model, which combines the strategic power of LLMs (Xu et al., 2023) with the efficiency of lightweight value learning and the sensitivity of emotion-aware planning. (2) It tackles major limitations of earlier approaches, such as short-sighted decisions, poor generalization, and heavy computational demands, while remaining suitable for real-time use. (3) Results across a range of tasks, including negotiation, tutoring, and emotional support, demonstrate its strong performance, setting a new standard for proactive and emotionally intelligent dialogue systems.

2 Related Works

LLM-driven decision-making has progressed from fine-tuned chatbots to sophisticated planners. Early systems like DialoGPT (Zhang et al., 2019, 2020), ProAgent (Zhang et al., 2023a), and Voyager (Wang et al., 2023a) adapted pretrained transformers or retrieval-augmented controllers for multi-step tasks, while prompt-chaining (Proactive, ProCoT (Deng et al., 2023a)) and modular prompting (Ask-an-Expert (Zhang et al., 2023b), ICL-AIF (Fu et al., 2023)) enabled iterative reasoning and decomposed tasks. Planning-as-search methods such as Tree-of-Thoughts (Yao et al., 2023), RAP with MCTS rollouts (Hao et al., 2023), and reinforcement learning approaches like PPDPP (Deng et al., 2024) and DPDP (He et al., 2024) improved exploration efficiency.
Recent latent-policy techniques such as LDPP (He et al., 2025a) and UDP (He et al., 2025b) learn continuous action representations via VAE and diffusion-based user models. In contrast, DialogXpert treats the LLM as a frozen action proposer: it samples the top-k candidates from a large pretrained model (e.g., Vicuna 13B or Qwen 2.5 14B) to generate semantically coherent options, then uses Q-learning augmented with explicit emotion tracking to select the optimal move, balancing inference speed, strategic depth, and emotional alignment without full-tree search at runtime.

3 Methodology

3.1 Preliminaries

Problem statement. Existing works (Wang et al., 2020; He et al., 2024, 2025a) formulate the dialogue planning process as a Markov Decision Process (MDP), represented formally as a tuple (S, A, r, T), where S denotes the dialogue state space, A represents the dialogue action space, r denotes the reward function, and T defines the transition function. At each turn t, the dialogue state s_t ∈ S includes the complete conversational context, encompassing historical utterances. The agent selects an action a_t ∈ A, which leads to a state transition s_{t+1} = T(s_t, a_t) and a reward r_t. The goal of the dialogue agent is to learn an optimal policy π* maximizing cumulative future rewards:

$$\pi^* = \arg\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t} r_t\right] \tag{1}$$

where γ ∈ [0, 1] is the discount factor and T is the maximum dialogue length.

LLM-powered self-play. Following (He et al., 2024, 2025a), we leverage LLMs to simulate both user and system roles for generating realistic dialogues. Specifically, two distinct LLM agents are used: one represents the user and the other the dialogue system, as illustrated in Figure 1. Given predefined case information (Case Info.), each LLM generates utterances conditioned on its role and prior conversation history (Luo et al., 2022). Additionally, an independent LLM-based critic evaluates each turn, providing scalar rewards that capture task success and emotional alignment, thereby enabling reinforcement learning. More information on self-play is in Appendix C.

Figure 1: DialogXpert pipeline: case information and dialogue history drive user/system LLMs and an emotion tracker; a frozen LLM generates a prior over candidate actions, the top-k are evaluated by a Q-network and executed by the system LLM; a critic LLM provides reward signals to train the Q-network.
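The self-play loop implied by Figure 1 can be sketched as follows; `policy`, `user_llm`, `system_llm`, and `critic_llm` are hypothetical stubs standing in for prompted LLM roles, and the episode/record format is our assumption rather than the paper's interface.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    case_info: str                                 # predefined Case Info.
    history: list = field(default_factory=list)    # alternating utterances
    emotions: list = field(default_factory=list)   # tracked user emotions

def self_play_episode(policy, user_llm, system_llm, critic_llm,
                      case_info: str, max_turns: int = 8):
    """One simulated episode: the policy picks an action a_t, the system LLM
    realizes it as an utterance, the user LLM replies, and the critic LLM
    scores the turn with a scalar reward r_t."""
    state = DialogueState(case_info)
    transitions = []
    for _ in range(max_turns):
        action = policy.select(state)              # a_t in the action set A
        sys_utt = system_llm.respond(state, action)   # realize the action
        usr_utt = user_llm.respond(state, sys_utt)    # simulated user turn
        state.history += [sys_utt, usr_utt]
        reward, done = critic_llm.score(state)     # task + emotion signal
        transitions.append((state.history[:], action, reward))
        if done:
            break
    return transitions
```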
3.2 LLM Action Prior Framework

The LLM Action Prior Framework leverages the semantic knowledge of pretrained LLMs to narrow the dialogue action space. By conditioning on the current dialogue state s_t, including conversational history and emotional context, the LLM generates a prior distribution over candidate actions, significantly reducing computational overhead and guiding effective action selection. Formally, this prior is defined as p_LLM(· | s_t).

Following (Yan et al., 2024), we adopt a two-step "free-form + projection" approach that combines the generative flexibility of LLMs with a constrained action space A = {a_1, ..., a_n}. At each dialogue turn t, the model input is I = (c_t, s_t, E_t), where c_t is the case information, s_t includes the conversation history, and E_t represents the accumulated emotion. The input I and action set A are serialized into a prompt (see Appendix A). The LLM produces an open-text proposal o ~ p_LLM(o | s_t, A), which is projected via a deterministic mapping P to a valid action: a_{t+1} = P(o) ∈ A.

Although we do not enumerate the full action space internally, including A in the prompt implicitly defines a normalized prior over actions, denoted p_proj(a | s_t). From this distribution, we extract the top-k most probable actions:

$$A_t^{\text{top-}k} = \text{Top-}k\big(p_{\text{proj}}(a \mid s_t)\big).$$

This approach reduces the dimensionality and complexity of decision-making by focusing computation on a compact set of semantically coherent, contextually appropriate candidate actions.

Q-Network: In our implementation (illustrated in Figure 1), the action-value function Q(s, a) uses a pretrained BERT encoder² (kept fixed) followed by a lightweight adaptor network (a 3-layer MLP). Specifically, given the current state s_t and each proposed action a_i (sampled via the free-form + projection prior), we construct the input sequence

[CLS] State: <serialize(s_t)> [SEP] Action: a_i [SEP]

tokenize it, and feed it into BERT. We take the final hidden vector h_i ∈ R^d at the [CLS] position and pass it through a three-layer MLP adaptor with ReLU activations to produce a scalar score: Q̃_i = BERTAdaptor(h_i) ∈ R. We then normalize these scores across all K candidates using a softmax,

$$p_Q(a_i \mid s_t) = \frac{\exp(\tilde{Q}_i)}{\sum_{j=1}^{K} \exp(\tilde{Q}_j)},$$

and select the highest-probability action a* = argmax_i p_Q(a_i | s_t). The chosen a* is executed to produce the next state. Rather than a purely greedy policy, we adopt an ε-greedy strategy with ε chosen empirically.

²https://huggingface.co/google-bert/bert-base-uncased
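A minimal PyTorch sketch of the scoring path just described: a frozen bert-base-uncased encoder, a three-layer MLP adaptor, and ε-greedy selection over the top-k candidates. The hidden size and the exact serialization of the state are our assumptions.

```python
import random

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class QNetwork(nn.Module):
    """Frozen BERT encoder plus a 3-layer MLP adaptor scoring (state, action)."""

    def __init__(self, name: str = "bert-base-uncased", hidden: int = 256):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(name)
        self.bert = AutoModel.from_pretrained(name)
        for p in self.bert.parameters():          # encoder stays fixed
            p.requires_grad = False
        d = self.bert.config.hidden_size
        self.adaptor = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: str, actions: list) -> torch.Tensor:
        # [CLS] State: ... [SEP] Action: a_i [SEP], one row per candidate
        batch = self.tok([f"State: {state}"] * len(actions),
                         [f"Action: {a}" for a in actions],
                         padding=True, truncation=True, return_tensors="pt")
        h_cls = self.bert(**batch).last_hidden_state[:, 0]   # [CLS] vectors
        return self.adaptor(h_cls).squeeze(-1)               # one score per action

def select_action(qnet: QNetwork, state: str, candidates: list,
                  epsilon: float = 0.5) -> str:
    """Epsilon-greedy choice over the LLM prior's top-k candidates."""
    if random.random() < epsilon:
        return random.choice(candidates)
    with torch.no_grad():
        return candidates[int(qnet(state, candidates).argmax())]
```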
3.3 Emotion-Aware Policy Planning

Integrating emotional context into dialogue policy planning is critical for building proactive, user-aligned systems (Zhao et al., 2023). Unlike traditional approaches that rely solely on semantic and task-specific signals (Wang et al., 2020), our method explicitly incorporates emotion prediction to guide strategic decision-making. We introduce an Emotion Tracker module that uses a frozen LLM to infer the user's emotional state e_t at each dialogue turn from their utterance u_t^usr. Formally, the prediction is defined as

$$e_t = \text{LLM-EmoPred}(u_t^{\text{usr}}) \tag{2}$$

where LLM-EmoPred denotes the LLM-based module that estimates emotion directly from text, without requiring additional embeddings or fine-tuning. The sequence of emotional states {e_1, e_2, ..., e_t} is tracked over turns and incorporated into the conversational state s_t, alongside semantic context and the set of candidate dialogue actions. This enriched representation enables the policy planner to generate emotionally aware, contextually appropriate actions throughout the dialogue.

3.4 Online RL with LLM Priors

At each dialogue turn t, we first query the free-form + projection LLM prior to obtain a distribution p_proj(a | s_t) over the finite action set A. Rather than sampling directly from this prior, we evaluate each candidate action a ∈ A with the Q-network and select the action with the highest value: a_t = argmax_{a ∈ A} Q_θ(s_t, a). We then execute a_t in the environment, observe the next state s_{t+1}, and solicit a scalar reward r_t from the Critic LLM, which assesses the transition (s_t, a_t, s_{t+1}) in terms of task effectiveness and emotional alignment. The tuple (s_t, a_t, r_t, s_{t+1}) is appended to the replay buffer: D ← D ∪ {(s_t, a_t, r_t, s_{t+1})}.

Periodically, we sample minibatches from D and perform temporal-difference updates. For each sampled transition, we form the Bellman target y = r_t + γ max_{a' ∈ A} Q_θ(s_{t+1}, a') and minimize the mean squared error

$$\mathcal{L}(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\left[\big(Q_\theta(s, a) - y\big)^2\right]. \tag{3}$$

Throughout training, all exploratory actions and Bellman backups draw from the LLM-induced prior, while the Critic LLM's rewards (Rafailov et al., 2023) guide the Q-network toward semantically coherent and emotionally aware dialogue policies.
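A sketch of the online update in Eq. (3); the replay-tuple layout (including storing the next turn's candidate set so the max in the Bellman target ranges over the prior's proposals) and the per-sample loop are our simplifications.

```python
import random
from collections import deque

import torch
import torch.nn.functional as F

replay = deque(maxlen=10_000)  # (state, action, reward, next_state, next_candidates)

def td_update(qnet, optimizer, batch_size: int = 32, gamma: float = 0.99):
    """One temporal-difference step: regress Q_theta(s, a) onto the Bellman
    target y = r + gamma * max_a' Q_theta(s', a') over a sampled minibatch."""
    if len(replay) < batch_size:
        return None
    losses = []
    for s, a, r, s_next, next_cands in random.sample(replay, batch_size):
        q_sa = qnet(s, [a])[0]                       # current estimate
        with torch.no_grad():                        # frozen Bellman target
            bootstrap = qnet(s_next, next_cands).max() if next_cands else 0.0
            y = torch.as_tensor(r + gamma * bootstrap, dtype=torch.float32)
        losses.append(F.mse_loss(q_sa, y))
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```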
4 Experimental Setup

4.1 Tasks and Datasets

We evaluate our method on five proactive dialogue datasets spanning both collaborative and non-collaborative settings. ESConv (Liu et al., 2021) focuses on emotional support, with 1040/130/130 train/validation/test samples. CIMA (Stasaski et al., 2020) involves tutoring dialogues for English-to-Italian translation, with 909/113/113 splits. CraigslistBargain (CB) (He et al., 2018) features buyer-seller negotiations, containing 3290 training, 188 validation, and 188 test cases. P4G (Wang et al., 2019) includes persuasion dialogues around donation, using 817 training samples and 100 each for validation and testing, following (He et al., 2025a). ExTES (Zheng et al., 2023b), a more diverse extension of ESConv, is split into 10,717/200/200 samples as per (He et al., 2025a). More information is given in Appendix D.

Datasets are grouped into collaborative (ESConv, CIMA, ExTES) and non-collaborative (CB, P4G) environments based on whether participants share a common goal. For generalization, we follow (He et al., 2025a) by training on ExTES and testing on ESConv without fine-tuning. Predefined action prompts are listed in Appendix F.5, and case backgrounds are used to initialize dialogue states.

4.2 Baselines

In addition to DialoGPT (Zhang et al., 2019), we evaluate DialogXpert against both prompt-based and planner-based dialogue models. The prompt-based methods begin with Standard, which relies on unguided self-play; Proactive and ProCoT (Deng et al., 2023a), which use chain-of-thought prompts to plan strategies (though their internally predicted strategy labels serve only as latent cues, not interpretable actions); AnE (Zhang et al., 2023b) and ICL-AIF (Fu et al., 2023), which enlist external LLMs as "strategy experts" or feedback providers; and GDP-Zero (Yu et al., 2023), which incorporates MCTS to select optimal strategies. On the other hand, planner-based approaches represent the state of the art: PPDPP (Deng et al., 2024) fine-tunes a RoBERTa-based policy planner with reinforcement learning; DPDP (He et al., 2024) combines two RoBERTa systems in a dual-process framework augmented by MCTS; LDPP (He et al., 2025a) integrates variational autoencoders with hierarchical offline RL to learn compact latent policies; and UDP (He et al., 2025b) models user traits via diffusion-based inference alongside active learning for optimized responses. Both LDPP and UDP follow the PPDPP-style architecture centered on RoBERTa as the core planner. For full implementation details, see Appendix C.

4.3 Evaluation Protocols

Following PPDPP (Deng et al., 2024) and DPDP (He et al., 2024), we evaluate dialogue quality using two main metrics: Average Turn (AT), which measures conversational efficiency by counting the mean number of turns needed to reach the goal (Kwan et al., 2023), and Success Rate (SR), which reflects the proportion of successful outcomes within a fixed turn limit (Gao et al., 2021). For the CraigslistBargain (CB) dataset, we also report the Sale-to-List Ratio (SL) (Zhou et al., 2019), indicating negotiation quality from the buyer's perspective: higher SL values represent better deals, while failed negotiations receive an SL of zero. Additionally, for the ESConv dataset, we conduct human evaluations (Joshi et al., 2021; Liu et al., 2021) with four annotators who assess responses across four criteria: Suggestion, Identification, Comforting, and Overall Quality. Annotators compare system outputs and label each metric as a win, lose, or tie, with final scores averaged across all judgments.

Reward Values: We use an LLM-based critic to generate scalar rewards for training, with task-specific mappings for each dataset. Full details of the reward structure and scoring heuristics are provided in Appendix E.

LLM Variations: Baseline proactive planners span a range of frozen LLM backbones and search strategies: DialoGPT uses GPT-2 for greedy, turn-by-turn responses; PPDPP combines a RoBERTa planner with a frozen Vicuna 13B action prior via self-play; DPDP pairs fast "System 1" GPT-3.5-Turbo proposals with deeper MCTS rollouts; and UDP/LDPP exploit GPT-4o-mini or Qwen 1.8B for latent policy mining. In contrast, DialogXpert uses an LLM as a frozen action proposer, generating a top-k set of candidate actions each turn. We evaluate Vicuna 13B, Qwen1 1.8B, and Qwen 2.5 14B backbones to balance speed, strategic exploration, and emotional alignment.
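The three automatic metrics reduce to simple aggregation over episode logs. The record format below is hypothetical, and the sale-to-list computation is a naive stand-in for the Zhou et al. (2019) definition; failed negotiations score 0 as stated above.

```python
def evaluate(episodes, max_turns: int = 8):
    """Compute AT, SR and SL from a list of episode records.

    Each record is a dict: {"turns": int, "success": bool,
    "sale_price": float | None, "list_price": float | None} --
    an assumed log format, not the paper's actual one.
    """
    n = len(episodes)
    at = sum(e["turns"] for e in episodes) / n            # Average Turns
    sr = sum(e["success"] for e in episodes) / n          # Success Rate
    # Sale-to-List ratio (CB only): failed negotiations count as 0
    sls = [(e["sale_price"] / e["list_price"]) if e["success"] else 0.0
           for e in episodes if e.get("list_price")]
    sl = sum(sls) / len(sls) if sls else None
    return {"AT": at, "SR": sr, "SL": sl}

print(evaluate([
    {"turns": 2, "success": True, "sale_price": 80.0, "list_price": 100.0},
    {"turns": 5, "success": False, "sale_price": None, "list_price": 100.0},
]))
```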
5 Results & Analysis

5.1 Main Results

We evaluate DialogXpert on three challenging dialogue-planning benchmarks, CraigslistBargain (negotiation), ESConv (emotional support), and CIMA (tutoring), using average turns (AT), success rate (SR), and, for negotiation, sale-to-list ratio (SL). Table 1 summarizes the performance of diverse baselines, MCTS-style planners, recent policy-LM methods, and our DialogXpert variants. Furthermore, preliminary experiments identified ε = 0.5 and top-k = 4 as optimal, and these values are fixed in all subsequent evaluations.

Across all datasets, standard LLM-only methods (e.g., DialoGPT, ProCoT, ICL-AIF) either require many dialogue turns (AT > 5) or achieve only moderate success (SR < 0.80), and in negotiation they yield SL < 0.31. In contrast, pure policy-LM approaches such as PPDPP and DPDP substantially reduce AT (to roughly 5 or less) while boosting SR above 0.85-0.90, but their negotiation quality remains limited (SL of roughly 0.33-0.34). By integrating an LLM-prior policy with lightweight value learning and emotion tracking, DialogXpert achieves sub-3-turn dialogues and success rates above 0.94 across all three benchmarks with the Vicuna backbone, and further improves to SR > 0.97 and SL = 0.4389 with Qwen 2.5 14B for negotiation (CraigslistBargain), while maintaining average turns around 2.32. As shown in Tables 1 and 2, DialogXpert not only surpasses MCTS-based planners like DPDP and fine-tuned policy LMs like PPDPP in both efficiency and effectiveness, but also generalizes strongly across diverse settings including P4G and ExTES, where it delivers the highest success rates (0.972 on ExTES) and competitive turn efficiency. These results confirm that DialogXpert offers a practical alternative to computationally intensive planning approaches, without sacrificing quality.

Table 1: Comparison of dialogue planning methods on the CraigslistBargain (CB), ESConv (ESC) and CIMA benchmarks, reporting average turns (AT↓), success rate (SR↑) and sale-to-list ratio (SL↑). DialogXpert results reported in this table are obtained by sampling the top-k = 4 candidates from the frozen LLM and using an ε-greedy policy with ε = 0.5 (i.e., 50% exploration vs. 50% exploitation) at each turn.

| Method | Backbone | CB AT↓ | CB SR↑ | CB SL↑ | ESC AT↓ | ESC SR↑ | CIMA AT↓ | CIMA SR↑ |
| DialoGPT (Zhang et al., 2019) | GPT-2 | 6.73 | 0.3245 | 0.2012 | 5.31 | 0.7538 | 5.43 | 0.4956 |
| Standard | - | 6.47 | 0.3830 | 0.1588 | 5.10 | 0.7692 | 3.89 | 0.6903 |
| AnE (Zhang et al., 2023b) | - | 5.91 | 0.4521 | 0.2608 | 4.76 | 0.8000 | 3.86 | 0.6549 |
| Proactive (Deng et al., 2023a) | - | 5.80 | 0.5638 | 0.2489 | 5.08 | 0.7538 | 4.84 | 0.5310 |
| + MI-Prompt (Deng et al., 2024) | - | 5.74 | 0.5691 | 0.2680 | 4.78 | 0.7846 | 4.70 | 0.5664 |
| ProCoT (Deng et al., 2023a) | - | 6.22 | 0.5319 | 0.2486 | 4.75 | 0.7923 | 4.58 | 0.5487 |
| + MI-Prompt (Deng et al., 2024) | - | 6.12 | 0.5532 | 0.3059 | 4.83 | 0.7769 | 4.72 | 0.5221 |
| ICL-AIF (Fu et al., 2023) | - | 6.53 | 0.3617 | 0.1881 | 4.69 | 0.8079 | 4.19 | 0.6106 |
| PPDPP (Deng et al., 2024) | Vicuna 13B | 5.62 | 0.6117 | 0.3376 | 4.56 | 0.8462 | 3.03 | 0.8407 |
| - w/o SFT | | 5.71 | 0.6223 | 0.3354 | 4.68 | 0.8384 | 3.18 | 0.8230 |
| - w/o RL | | 5.57 | 0.6649 | 0.2280 | 5.24 | 0.7308 | 3.41 | 0.7965 |
| DPDP (System 1) (He et al., 2024) | GPT-3.5-Turbo | 5.03 | 0.7447 | 0.4108 | 3.61 | 0.9000 | 2.24 | 0.9469 |
| - System 1 w/o PT | | – | – | – | 4.22 | 0.8769 | 2.36 | 0.9292 |
| - System 1 w/o SPT | | – | – | – | 3.97 | 0.8692 | 2.51 | 0.8938 |
| - System 2 | | 2.78 | 0.9734 | 0.2728 | 2.13 | 0.9923 | 2.49 | 0.9735 |
| - System 1 & 2 | | – | – | – | 2.13 | 0.9923 | 2.28 | 0.9823 |
| UDP (He et al., 2025b) | GPT-4o mini | – | – | – | 7.59 | 0.8320 | – | – |
| - w/o PT | | – | – | – | 7.48 | 0.7720 | – | – |
| - w/o RL | | – | – | – | 8.64 | 0.5310 | – | – |
| DialogXpert | Vicuna 13B | 2.93 | 0.9415 | 0.3811 | 2.70 | 0.9651 | 2.24 | 0.9883 |
| - w/o RL | | 5.13 | 0.7561 | 0.3473 | 4.13 | 0.8749 | 3.05 | 0.8829 |
| DialogXpert | Qwen 1.8B | 2.78 | 0.9274 | 0.3791 | 2.49 | 0.9805 | 2.16 | 0.9902 |
| - w/o RL | | 4.69 | 0.7754 | 0.3012 | 4.04 | 0.8921 | 2.96 | 0.9042 |
| DialogXpert | Qwen2.5 14B | 2.32 | 0.9746 | 0.4389 | 2.31 | 0.9876 | 2.03 | 0.9951 |
| - w/o RL | | 3.64 | 0.8754 | 0.2952 | 3.53 | 0.9401 | 2.62 | 0.9317 |
| - w/o LLM-Prior | | 3.31 | 0.9165 | 0.3598 | 3.89 | 0.9243 | 2.71 | 0.9395 |
| - w/o Emotion | | 2.75 | 0.9136 | 0.3156 | 3.08 | 0.9611 | 2.34 | 0.9425 |

Impact of Emotions: Integrating emotions into policy planning improves dialogue effectiveness across tasks. We observe from Table 1 that, in ESConv, the success rate increases from 0.9611 to 0.9876 and average turns drop from 3.08 to 2.31. In CIMA, success improves from 0.9425 to 0.9951 with a turn reduction from 2.34 to 2.03. For CraigslistBargain, emotion-aware planning boosts success from 0.9136 to 0.9746 and improves the sale-to-list ratio from 0.3156 to 0.4389. These gains stem from the model adapting to user emotions at each turn: the emotion tracker estimates the affective state, enriching the input to the Q-network and enabling more empathetic, goal-aligned actions.
Impact of LLM Prior: The LLM prior narrows the action space to relevant candidates, reducing computation and boosting decision quality. Disabling it causes a drop in performance: we observe in Table 1 that on ESConv, success falls from 0.9876 to 0.9401 and average turns rise from 2.31 to 3.53; on CIMA, success drops from 0.9951 to 0.9317. Without the prior, the agent repeats trivial patterns and struggles to choose optimal actions. By providing diverse, high-quality options, the prior lets the Q-network focus on value learning; its removal degrades efficiency, planning, and generalization.

Comparison with MCTS Variants: Table 3 compares DPDP's MCTS-based planner with our DialogXpert variants. In the original DPDP experiments (GPT-3.5-Turbo), increasing the MCTS rollout budget from 22.3% to 60.3% on CraigslistBargain reduced AT from 3.69 to 2.49 and lifted SR from 0.8298 to 0.9681, while SL remained roughly constant. On ESConv, 100% rollouts achieved AT = 2.13 and SR = 0.9923; on CIMA, 50% MCTS yielded AT = 2.28 and SR = 0.9823. These deeper searches clearly improve efficiency and success, but at a linear cost in simulation count and latency, which hinders real-time deployment. By contrast, DialogXpert (Vicuna 13B) matches these gains without any tree search: negotiation completes in 2.93 turns (SR = 0.9415, SL = 0.3811), emotional support in 2.70 turns (SR = 0.9651), and tutoring in 2.24 turns (SR = 0.9883). Its Qwen 2.5 14B variant further reduces AT to 2.32 (SR = 0.9746, SL = 0.4389), 2.31 (SR = 0.9876), and 2.03 (SR = 0.9951), cutting inference overhead by over 50% compared to DPDP + MCTS.

Table 3: Ablation of MCTS budget in DPDP (GPT-3.5-Turbo) and comparison to DialogXpert (Vicuna 13B, Qwen 2.5 14B) on CraigslistBargain (CB), ESConv (ESC) and CIMA, reporting average turns (AT↓), success rate (SR↑) and sale-to-list ratio (SL↑ where available). DialogXpert results are obtained by sampling the top-k = 4 candidates from the frozen LLM and using an ε-greedy policy with ε = 0.5 at each turn.

| Approach | CB AT↓ | CB SR↑ | CB SL↑ | ESC AT↓ | ESC SR↑ | CIMA AT↓ | CIMA SR↑ |
| DPDP (22.3% MCTS) (GPT3.5-Turbo) | 3.69 | 0.8298 | 0.3102 | – | – | – | – |
| - 51.4% MCTS | 2.77 | 0.9468 | 0.3118 | – | – | – | – |
| - 60.3% MCTS | 2.49 | 0.9681 | 0.2856 | – | – | – | – |
| - 0.0% MCTS | – | – | – | 3.61 | 0.9000 | – | – |
| - 21.9% MCTS | – | – | – | 3.42 | 0.9154 | – | – |
| - 46.5% MCTS | – | – | – | 2.95 | 0.9692 | – | – |
| - 68.3% MCTS | – | – | – | 2.72 | 0.9769 | – | – |
| - 100% MCTS | – | – | – | 2.13 | 0.9923 | – | – |
| - 0.0% MCTS | – | – | – | – | – | 2.24 | 0.9469 |
| - 28.6% MCTS | – | – | – | – | – | 2.39 | 0.9646 |
| - 50.0% MCTS | – | – | – | – | – | 2.28 | 0.9823 |
| - 81.1% MCTS | – | – | – | – | – | | |
and achieves over 93% success, with a negotiation SL of 0.3968 . Increasing to k= 3 improves success to above 95% across all tasks and further reduces turns. The optimal setting is k= 4, yielding the lowest average turns of 2.39 (negotiation/emotional support) and 2.04 (tutoring) highest success rates ( 97.1%–99.5%), and best SL (0.4325 ). Atk= 5, performance declines slightly due to increased randomness.Exploitation vs Exploration As illustrated in Figure 2, our ϵ-greedy strategy controls the trade- off between exploration and exploitation (Tokic, 2010). At ϵ=25%, we surpass pure LLM inference (˜95% success, SL = 0.407) but may overlook best actions; at ϵ≥75%, performance dips (turns > 2.5, success < 97%, SL < 0.35); and at ϵ= 100%, all learned value is ignored. The sweet spot is ϵ = 50%, yielding the fewest turns (2.32 negotia- tion, 2.31 support, 2.03 tutoring) with peak success (97.5–99.5%) and SL = 0.439, confirming that mod- erate exploration maximizes planning efficiency. Generalization Test: Following (He et al., 2025b), we assess generalization from ExTES to ESConv, given their similar environments and ac- tion labels (differing only in reward computation). We train the Q-network on ExTES and directly eval- uate it on ESConv without further fine-tuning. Our approach achieves an average turn (AT) of 2.28 (vs5.39) and a success rate (SR) of 0.9943 (vs. 0.781), significantly outperforming LDPP. This strong transfer performance stems from the larger training set in ExTES, enabling better generaliza- tion. In contrast, LDPP relies heavily on RoBERTa- based encoders/decoders, making it more sensitive to domain shifts. 0% 20% 40% 60% 80% 100%Ove. Ind. Com. Sug.51% 8% 41% 60% 5% 35% 52% 2% 45% 48% 6% 46% WIN TIE LOSEDialogXpert PPDPP Figure 3: Win/tie/loss percentages for DialogXpert vs. PPDPP on ESConv across Identification, Comforting, Suggestion and Overall metrics. Human Evaluation To ensure a fair compari- son, both DialogXpert and PPDPP were run with the same Vicuna-13B backbone on 20ESConv Table 3: Ablation of MCTS budget in DPDP (GPT-3.5-Turbo) and comparison to DialogXpert (Vicuna 13B, Qwen 2.5 14B) on CraigslistBargain, ESConv and CIMA, reporting average turns (AT ↓), success rate (SR ↑) and satisfaction level (SL ↑where available). DialogXpert results are obtained by sampling the top- k= 4candidates from the frozen LLM and using an ϵ-greedy policy with ϵ= 0.5 at each turn. Approach CraigslistBargain ESConv CIMA AT↓ SR↑ SL↑ AT↓ SR↑ AT↓ SR↑ DPDP (22.3 % MCTS) (GPT3.5-Turbo) 3.69 0.8298 0.3102 – – – – -51.4 % MCTS 2.77 0.9468 0.3118 – – – – -60.3 % MCTS 2.49 0.9681 0.2856 – – – – -0.0 % MCTS – – – 3.61 0.9000 – – -21.9 % MCTS – – – 3.42 0.9154 – – - 46.5 % MCTS – – – 2.95 0.9692 – – -68.3 % MCTS – – – 2.72 0.9769 – – -100 % MCTS – – – 2.13 0.9923 – – -0.0 % MCTS – – – – – 2.24 0.9469 -28.6 % MCTS – – – – – 2.39 0.9646 -50.0 % MCTS – – – – – 2.28 0.9823 -81.1 % MCTS – – – – –