Source: https://arxiv.org/abs/2505.16008v1
of the victim and attack models, respectively. This optimization problem admits a closed-form solution via the normal equation (see the derivation in Appendix A):

W = (E_V^⊤ E_V)^{-1} E_V^⊤ E_A,

which minimizes the reconstruction error between the aligned victim embeddings E_V W and the attacker's reference embeddings E_A.

3) Text Reconstruction. The aligned embeddings ê_A are decoded into text via dec_A, i.e., x̂ = dec_A(ê_A) = dec_A(e_V W). ALGEN achieves inversion without retraining the victim model, requiring only fine-tuning of dec_A and estimation of W.

3.3 Fundamentals of Distributed Optimization

Distributed optimization addresses global optimization problems through a unified objective function while incorporating constraints derived from inter-node relationships within the network. Formally, this approach can be expressed as

min_{w_i : i∈V} Σ_{i∈V} f_i(w_i),   s.t. h_{ij}(w_i, w_j) ≤ 0, (i,j) ∈ E,

where f_i denotes the local objective on node i and h_{ij} encodes the constraints between adjacent nodes i and j.

Figure 2: Illustration of LAGO vs. ALGEN (Chen et al., 2025b). Top: ALGEN treats each language independently. Bottom: LAGO leverages language similarity by introducing edge constraints in a joint distributed optimization framework.

A fundamental formulation in distributed optimization employs linear inequality constraints to couple decision variables across network nodes. Such a formulation can be solved efficiently using the IEQ-PDMM method (Heusdens and Zhang, 2024b) and can be expressed as

min_{w_i : i∈V} Σ_{i∈V} f_i(w_i),   s.t. A_{i|j} w_i + A_{j|i} w_j ≤ b_{i,j}, (i,j) ∈ E,   (1)

where the constraints between node pairs are defined by A_{i|j}, A_{j|i}, and b_{i,j}.

4 LAGO: Language Similarity-Aware Graph Optimization Framework

Building upon the ALGEN paradigm and grounded in distributed optimization, we propose LAGO, a general framework for few-shot cross-lingual embedding inversion.
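As a reference point for what follows, the per-language closed-form alignment from Section 3.2 can be computed directly in NumPy (a minimal sketch with synthetic data; the dimensions and variable names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
b, d_v, d_a = 16, 8, 6                # few-shot pairs, victim dim, attacker dim
E_V = rng.normal(size=(b, d_v))       # victim embeddings of the aligned texts
E_A = rng.normal(size=(b, d_a))       # attacker-side embeddings of the same texts

# Normal equation: W = (E_V^T E_V)^{-1} E_V^T E_A.
# Solving the linear system is numerically preferable to forming the inverse.
W = np.linalg.solve(E_V.T @ E_V, E_V.T @ E_A)

# First-order optimality: the gradient of ||E_A - E_V W||_F^2 vanishes at W.
grad = E_V.T @ (E_V @ W - E_A)
assert np.allclose(grad, 0.0, atol=1e-8)
```

LAGO replaces this independent per-language solve with the jointly constrained formulation introduced next.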
LAGO operates in two stages: (1) constructing a language similarity graph to capture topological relationships between languages, and (2) solving a graph-constrained optimization problem to jointly estimate transformation matrices across languages. This section details both components and introduces two algorithmic variants that implement our optimization framework.

4.1 Step I: Language Similarity-Aware Graph Construction

To formalize cross-lingual relationships, we propose to construct a linguistic topological graph G = (V, E), where the set of nodes V represents languages and the set of undirected edges E encodes pairwise similarity. Language similarity is quantified using established metrics such as ASJP (Wichmann et al., 2022) and Lang2vec (Littell et al., 2017). For a predefined threshold r, an edge is established between two languages i and j if their distance D_ij < r. Mathematically, given a distance matrix D ∈ R^{N×N} over N languages, the adjacency matrix A ∈ {0,1}^{N×N} of the resulting topology is derived as

A = (1 − sign(D − r)) / 2.

See Appendix B for a concrete example of graph construction.

4.2 Step II: Graph-Constrained Optimization Algorithms

Using the constructed graph, we reformulate the optimization objective to leverage cross-lingual relationships, thereby enhancing embedding inversion attacks through knowledge transfer from linguistically related languages. In few-shot settings where local data is scarce, this formulation improves transferability by exploiting cross-lingual regularities. We introduce two optimization strategies: one enforcing hard constraints and one applying soft penalties. Let W_i denote the transformation matrix at node i (language i). To ensure stability in underdetermined settings (e.g., b < m), we incorporate Frobenius norm regularization to mitigate rank deficiency and enhance convergence.

Variant 1: Linear Inequality
Constraints

The first algorithmic variant introduces topological constraints to enforce consistency between adjacent nodes' transformation matrices. Formally, we formulate the objective as minimizing the sum of reconstruction errors across all nodes while imposing ϵ-bounded constraints on the pairwise differences between neighbors' mapping matrices:

min_{W_1,…,W_N} Σ_{i∈V} [ (1/2)‖E_{A,i} − E_{V,i} W_i‖_F² + λ‖W_i‖_F² ],
s.t. ‖W_i − W_j‖_max ≤ ϵ, (i,j) ∈ E,

where ‖·‖_max denotes the entry-wise ℓ∞ norm. This formulation corresponds to the general inequality-constrained form in Eq. (1), with A_{i|j} = −A_{j|i} = [1 −1]^⊤ and b_{i,j} = [ϵ ϵ]^⊤. As such, it is compatible with the IEQ-PDMM optimization framework (Heusdens and Zhang, 2024b). The update equations used in this framework are given below¹:

W_i^(t) = [E_{V,i}^⊤ E_{V,i} + (2c d_i + λ) I]^{-1} (E_{V,i}^⊤ E_{A,i} − Σ_{j∈N_i} A_{i|j} Z_{i|j}^(t))
Y_{i|j}^(t) = Z_{i|j}^(t) + 2c A_{i|j} W_i^(t) − c b_{i,j}
Z_{i|j}^(t+1) = Y_{j|i}^(t+1) if Y_{i|j}^(t+1) + Y_{j|i}^(t+1) > 0, and −Y_{i|j}^(t+1) otherwise,   (2)

where d_i = |N_i| is the degree of node i.

Variant 2: Total Variation Regularization

The second variant introduces soft penalties using total variation across edges, a technique originally proposed for Byzantine-robust decentralized learning systems (Peng et al., 2021). The optimization objective is formulated as follows:

min_{W_1,…,W_N} Σ_{i∈V} [ (1/2)‖E_{A,i} − E_{V,i} W_i‖_F² + (λ/2)‖W_i‖_F² + η Σ_{j∈N_i} ‖W_i − W_j‖_sum ],

where ‖·‖_sum denotes the entry-wise ℓ1 norm. At time t, each node updates its W_i as

W_i^(t+1) = W_i^(t) − (α/√(t+1)) [ −E_{V,i}^⊤ (E_{A,i} − E_{V,i} W_i^(t)) + λ W_i^(t) + η Σ_{j∈N_i} sign(W_i^(t) − W_j^(t)) ],

where α is the learning rate.

4.3 Generalization: ALGEN as a Special Case

Our proposed LAGO is general and subsumes the prior method ALGEN as a special case. Specifically, in the inequality-constrained variant, when ϵ → ∞, cross-node constraints vanish, and each language node solves an independent alignment problem. Similarly, in the total variation setting, setting η = 0 decouples all nodes.
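To make the Variant 2 update concrete, it can be simulated on a toy graph in NumPy (a self-contained single-process sketch with synthetic data; the graph, dimensions, and hyperparameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, b, m, d = 3, 12, 6, 4                 # languages, samples, victim dim, attack dim
nbrs = {0: [1], 1: [0, 2], 2: [1]}       # toy path graph over three "languages"
E_V = [rng.normal(size=(b, m)) for _ in range(N)]
E_A = [rng.normal(size=(b, d)) for _ in range(N)]
W = [np.zeros((m, d)) for _ in range(N)]
lam, eta, alpha = 0.01, 0.01, 0.01

def objective(W):
    """Sum over nodes of reconstruction, ridge, and total-variation terms."""
    c = 0.0
    for i in range(N):
        c += 0.5 * np.linalg.norm(E_A[i] - E_V[i] @ W[i]) ** 2
        c += 0.5 * lam * np.linalg.norm(W[i]) ** 2
        c += eta * sum(np.abs(W[i] - W[j]).sum() for j in nbrs[i])
    return c

start = objective(W)
for t in range(500):
    # synchronous subgradient step with the 1/sqrt(t+1) decay from the update rule
    grads = [-E_V[i].T @ (E_A[i] - E_V[i] @ W[i]) + lam * W[i]
             + eta * sum(np.sign(W[i] - W[j]) for j in nbrs[i])
             for i in range(N)]
    step = alpha / np.sqrt(t + 1)
    W = [W[i] - step * grads[i] for i in range(N)]

assert objective(W) < start   # the joint cost decreases under subgradient descent
```

Setting `eta = 0.0` in this sketch removes the coupling term, so each node's update reduces to an independent regularized least-squares descent, matching the special case discussed in Section 4.3.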
In both cases, the optimization reduces to ALGEN's per-language formulation with no cross-lingual structure. This highlights the flexibility of LAGO: by adjusting the constraint strength, it interpolates between isolated optimization (as in ALGEN) and fully collaborative cross-lingual inversion. Our approach thus provides a principled, generalizable framework for multilingual attack design.

¹The comparison in Eq. (2) is applied element-wise.

Figure 3: Example graphs using two language similarities: (a) ASJP model with r = 0.9; (b) Lang2vec model with r = 0.45.

5 Experimental Setup

Models and Dataset. Our attack framework is initialized using a pre-trained FLAN-T5 model. To evaluate the robustness of our approach, we conduct experiments with distinct victim encoders: MT5, E5-SMALL-V2 (E5), and OpenAI's TEXT-EMBEDDING-ADA-002 (ADA-2) (see the details in Table 3). The decoder dec_A(·), fine-tuned on the MMARCO English dataset (Bonifacio et al., 2021), serves in this paper as the attack model for simulating few-shot inversion attack scenarios. We employ the current state-of-the-art ALGEN method as the baseline for the few-shot scenario, maintaining identical training and testing configurations for the decoder as those used in ALGEN. To assess cross-lingual transferability, we select a subset of seven syntactically and lexically related languages: English, German, French, Dutch, Spanish, Italian, and Portuguese.

Language Graphs. We evaluate two distinct topologies derived from language similarities: ASJP and Lang2vec. The tested topologies are illustrated in Fig. 3.

Regularization Parameters. To achieve substantial convergence, the number of
iterations is fixed at 500. For the linear inequality constraint method, the convergence parameter is set to c = 0.4, while for the TV penalty method, the learning rate is chosen as α = 0.01. The computational cost of the attack is relatively low: using the topology of 7 languages as an example, it takes approximately five minutes to compute a set of matrices {W_i : i ∈ V} with the inequality-constrained formulation, while the total variation method is faster, completing the attack in about two minutes.

Evaluation Metrics. We use cosine similarity to measure the semantic alignment between the adversarial embeddings of the victim model E_V W and the target attack embeddings E_A. Meanwhile, Rouge-L (Lin, 2004) evaluates the lexical overlap between the reconstructed text and the ground truth by computing the length of their longest common subsequence, serving as a proxy for assessing the fidelity of the generated output at the lexical level.

Figure 4: Cross-lingual Inversion Performances with ASJP Graph in Cosine Similarities across Training Samples.

6 Analysis and Results

To validate the effectiveness of our proposed LAGO framework, we experiment across a range of settings and tasks. Each subsection addresses one research question, probing key aspects of cross-lingual transferability, generalization, and robustness to defense mechanisms².

6.1 Do Similar Languages Transfer Vulnerabilities?

To assess whether language similarity aids attack transfer, we use an attack model trained on English data to attack embeddings in other languages. We compare LAGO (with both optimization variants) to ALGEN baselines with and without Frobenius norm regularization (λ = 0.01), using 10 to 1000

²We open-source our code at https://anonymous.4open.science/r/ALGO_anonymous.

Figure 5: Cross-lingual Inversion Performances with ASJP Graph in Rouge-L Scores across Training Samples.

training samples. Note that the training samples are used exclusively for alignment.
For LAGO, we set ϵ = 0.01 and η = 0.01. As shown in Table 1, LAGO consistently improves both cosine similarity and Rouge-L scores when inverting French embeddings across all training sizes. In low-resource settings (e.g., 10 samples), our method yields a 10–20% boost in Rouge-L over ALGEN. This trend generalizes to other languages, such as Dutch, German, Italian, Portuguese, and Spanish, as demonstrated in Figs. 4, 5, and 6. These findings suggest that leveraging language similarity both mitigates data scarcity and improves cross-lingual generalization in low-resource settings.

6.2 Does the Choice of Language Similarity Metric Impact the Attack Effectiveness?

To test the sensitivity of LAGO to the choice of language similarity measure, we compare performance under two topologies: ASJP (lexical similarity) and Lang2vec (syntactic similarity). The results, shown in Fig. 6 and Fig. 9 in Appendix C, confirm that LAGO is robust to the choice of similarity metric. Lang2vec shows slightly better performance in moderate-data settings in terms

| Method | Cos. Sim. (10) | Cos. Sim. (100) | Cos. Sim. (300) | Cos. Sim. (1000) | Rouge-L (10) | Rouge-L (100) | Rouge-L (300) | Rouge-L (1000) |
|---|---|---|---|---|---|---|---|---|
| ALGEN | 0.8657 | 0.8723 | 0.8610 | 0.8986 | 10.07 | 10.47 | 10.22 | 12.07 |
| ALGEN + Reg. (λ = 0.01) | 0.8663 | 0.8767 | 0.8703 | 0.8997 | 10.14 | 10.59 | 10.37 | 11.91 |
| Ours: Inequality | 0.8701 | 0.8919 | 0.9039 | 0.9178 | 10.14 | 11.09 | 12.31 | 12.49 |
| Ours: Total Variation | 0.8777 | 0.8966 | 0.9046 | 0.9129 | 10.87 | 11.59 | 11.46 | 12.30 |

Table 1: Cross-lingual Inversion
Performances of French embeddings with the attack model trained in English, in cosine similarities and Rouge-L scores across training samples. The best Rouge-L scores are in bold, and the maximum cosine similarities are underlined.

Figure 6: Cross-lingual Inversion Performances with Lang2vec Graph in Rouge-L Scores across Training Samples.

of Rouge-L scores for moderately larger training sample sizes (>300). For instance, Dutch with |D_V| = 500 training samples exhibits an increase from 5.71 to 6.65. Overall, our approach consistently outperforms the baseline in attack efficacy, irrespective of the similarity metric. This suggests that LAGO is not contingent upon a specific language similarity framework but instead exhibits robust generalizability across diverse language structures. Furthermore, the observed improvements in attack effectiveness indicate that our methodology is particularly advantageous for languages with shared linguistic features. Whether the similarity is lexical or syntactic, the attack remains effective, reinforcing its versatility.

Figure 7: Cross-lingual Inversion Performances with Attack Model trained in Spanish in Rouge-L Scores across Training Samples.

6.3 Is the Inversion Generalizable to Different Victim Models?

We assess generalizability by evaluating our method on embeddings from the ADA-2 and E5 encoders. As shown in Appendix Figs. 10–13, LAGO consistently outperforms ALGEN in both cosine similarity and Rouge-L across these models. Under the Rouge-L metric, the inequality constraint demonstrates stronger performance with larger sample sizes, whereas total variation proves more effective in extremely few-shot scenarios with fewer than 300 training samples. We attribute this to the flexibility of inequality constraints: a smaller sample size gives W greater degrees of freedom, thereby imposing relatively weaker restrictions on W under the same ϵ. Consequently, the performance of inequality constraints under smaller sample sizes aligns more closely with the ALGEN method.

6.4 Can Other Languages Serve as the Source of Transfer?

English, as the most represented language in pretrained LLMs, is an obvious choice for training the attack model to facilitate the inversion of other languages. We demonstrate that the proposed scheme remains robust even when the attack model is trained in an alternative language. As shown in Fig. 7 and Fig. 14 in Appendix C, when Spanish is used as the attack language, LAGO continues to yield consistent improvements over the baseline. The cosine similarity increases across target languages, and the inequality-constrained variant shows stronger gains in Rouge-L, particularly under low-resource conditions.

We also observe performance disparities across specific language pairs. For example, the inversion performance from English to German is notably higher than that from Spanish to German, a pattern already present in the ALGEN baseline. This disparity may be attributed to two factors: differences in decoder training quality and variations in language similarity. In our constructed graph, English and German are directly connected (one-hop neighbors), whereas Spanish and German are two hops apart. The increased topological distance may weaken the effectiveness of parameter transfer, as similarity constraints exert less influence. These observations suggest that the relative position of languages in the similarity graph, and not just data size
or encoder choice, can influence transfer strength. Understanding the dynamics of language topology in transfer-based attacks is an important direction for future work.

6.5 Defenses

We further investigate the effectiveness of differential privacy (DP) in mitigating embedding inversion. We employ the SNLI dataset (Bowman et al., 2015) to fine-tune the decoder and subsequently transfer the adversarial attack framework to German, French, and Spanish using the XNLI dataset (Conneau et al., 2018). While the SNLI dataset is widely utilized for downstream tasks such as text classification, Chen et al. (2025b) demonstrated that with a strong privacy guarantee of ϵ_dp = 1, model accuracy drops to 40%, a significant reduction from the 60% accuracy achieved at ϵ_dp = 12, where DP defenses show limited impact on utility and inversion performance.

In our setup, we apply two DP mechanisms to sentence embeddings: the Purkayastha Mechanism (PurMech) and the Normalized Planar Laplace Mechanism (LapMech), proposed by Du et al. (2023). The privacy budget is evaluated across ϵ_dp ∈ {1, 4, 8, 12}. As shown in Table 2, Table 4, and Figs. 15 and 16 in Appendix C, inversion attacks in cross-lingual settings are highly sensitive to DP perturbations. Specifically, Rouge-L scores are consistently suppressed below 2 across the tested configurations. These results are consistent with theoretical expectations: more challenging examples, such as those in cross-lingual or low-resource settings, tend to be more sensitive to DP noise (Carlini et al., 2019; Feldman, 2020). While DP mechanisms provide meaningful protection against inversion, they incur a non-trivial utility cost, underscoring the need for more efficient, structure-aware defenses in multilingual NLP applications.
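To make the defense setting concrete, here is a generic local-DP-style perturbation of normalized sentence embeddings (a simplified sketch using plain Laplace noise for illustration only; it is not the PurMech or LapMech mechanisms of Du et al. (2023), and the sensitivity value is an assumption):

```python
import numpy as np

def perturb_embeddings(E, eps_dp, sensitivity=2.0, seed=0):
    """Add Laplace noise with scale sensitivity/eps_dp to each embedding,
    then re-normalize to the unit sphere. Smaller eps_dp means more noise."""
    rng = np.random.default_rng(seed)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)       # unit-norm inputs
    noisy = E + rng.laplace(scale=sensitivity / eps_dp, size=E.shape)
    return noisy / np.linalg.norm(noisy, axis=1, keepdims=True)

def mean_cos(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    return float(np.mean(np.sum(A * B, axis=1)))

E = np.random.default_rng(1).normal(size=(4, 384))          # toy "victim" embeddings
weak = perturb_embeddings(E, eps_dp=12.0)                   # weak privacy guarantee
strong = perturb_embeddings(E, eps_dp=1.0)                  # strong privacy guarantee

# A stronger guarantee (smaller eps_dp) destroys more of the invertible signal.
assert mean_cos(E, strong) < mean_cos(E, weak)
```

This illustrates the utility-privacy trade-off discussed above: the same noise that suppresses inversion also degrades the embedding signal available for downstream tasks.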
| Lang | ϵ_dp | Rouge-L ↓ (LapMech) | COS ↓ (LapMech) | Rouge-L ↓ (PurMech) | COS ↓ (PurMech) |
|---|---|---|---|---|---|
| eng→eng | 1 | 14.11 | 0.0017 | 14.05 | 0.0156 |
| eng→eng | 4 | 13.58 | 0.0087 | 13.94 | 0.0348 |
| eng→eng | 8 | 13.38 | 0.0249 | 13.45 | 0.0185 |
| eng→eng | 12 | 13.90 | 0.0345 | 12.77 | -0.0076 |
| eng→fra | 1 | 1.66 | -0.0013 | 1.31 | 0.0136 |
| eng→fra | 4 | 1.70 | -0.0043 | 1.58 | 0.0140 |
| eng→fra | 8 | 1.42 | 0.0364 | 1.24 | 0.0166 |
| eng→fra | 12 | 1.60 | 0.0411 | 1.44 | 0.0113 |
| eng→deu | 1 | 0.52 | -0.0119 | 0.49 | 0.0090 |
| eng→deu | 4 | 0.32 | 0.0065 | 0.54 | 0.0127 |
| eng→deu | 8 | 0.62 | 0.0187 | 0.53 | 0.0418 |
| eng→deu | 12 | 0.44 | 0.0327 | 0.43 | 0.0367 |
| eng→spa | 1 | 1.47 | 0.0062 | 1.55 | -0.0090 |
| eng→spa | 4 | 1.43 | -0.0006 | 1.32 | 0.0208 |
| eng→spa | 8 | 1.70 | 0.0384 | 1.35 | 0.0266 |
| eng→spa | 12 | 1.52 | 0.0160 | 1.41 | 0.0389 |

Table 2: Cross-lingual Inversion Performance with |D_V| = 100 on Classification Tasks on the SNLI dataset with Local DP (Inequality). From a defender's perspective, ↓ means lower is better.

7 Conclusion

We proposed two optimization-based paradigms for enhancing few-shot cross-lingual embedding inversion. Both are grounded in distributed optimization and operate over a topological network of languages constructed via language similarity. This graph structure enables collaborative alignment of embedding decoders, facilitating effective knowledge transfer even with extremely limited supervision. Our experimental results show that both variants, linear inequality constraints and total variation penalties, consistently outperform existing methods, including ALGEN. In particular, the total variation approach demonstrates superior robustness in extremely few-shot settings, validating the importance of smooth cross-lingual parameter sharing. These findings establish language similarity as a key enabler of transferable inversion attacks, and underscore the need for privacy-preserving defenses that account for structural relationships among languages
in multilingual NLP systems.

Limitations

While our approach outperforms prior methods, few-shot cross-lingual embedding inversion remains a challenging task with substantial room for improvement. One limiting factor appears to be the decoder itself: even in the monolingual (original-language) setting, inversion accuracy remains moderate, achieving approximately a 25 Rouge-L score on the MMARCO English dataset with |D_V| = 1k, and it declines further under cross-lingual transfer. This suggests that the current attack decoder may struggle to generalize across languages, particularly when the supervision signal is limited.

Interestingly, we observe that cross-lingual settings exhibit higher sensitivity to DP defenses, though such defenses incur significant utility degradation. This sensitivity highlights both the vulnerability and the fragility of multilingual embeddings. Future work could focus on enhancing decoder training, e.g., through multilingual pretraining or by incorporating language-specific priors, which we expect could improve inversion performance in both monolingual and cross-lingual scenarios.

Computational Resources

We fine-tune the decoder on a single NVIDIA A40 GPU, with training completing in just three hours. Notably, LAGO operates with minimal GPU resource demands, enabling a true few-shot setup.

Ethics Statement

We comply with the ACL Ethics Policy. The inversion attacks implemented in this paper can be misused and are potentially harmful to proprietary embeddings. We discuss and experiment with potential mitigation and defense mechanisms, and we encourage further research on developing effective defenses in this attack space.

Acknowledgements

WY is funded by the EU Chips JU and the Innovation Fund Denmark through the project CLEVER (no. 101097560); YC and JB are funded by the Carlsberg Foundation, under the Semper Ardens: Accelerate programme (project nr. CF21-0454).
We further acknowledge the support of the AAU AI Cloud for providing computing resources.

References

David Ifeoluwa Adelani, Graham Neubig, Sebastian Ruder, Shruti Rijhwani, Michael Beukman, Chester Palen-Michel, Constantine Lignos, Jesujoba O. Alabi, Shamsuddeen H. Muhammad, Peter Nabende, Cheikh M. Bamba Dione, Andiswa Bukula, Rooweither Mabuya, Bonaventure F. P. Dossou, Blessing Sibanda, Happy Buzaaba, Jonathan Mukiibi, Godson Kalipe, Derguene Mbaye, and 26 others. 2022. MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4488–4508, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Verena Blaschke, Masha Fedzechkina, and Maartje ter Hoeve. 2025. Analyzing the effect of linguistic similarity on cross-lingual transfer: Tasks and experimental setups matter. arXiv preprint arXiv:2501.14491.

Luiz Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mMARCO: A multilingual version of the MS MARCO passage ranking dataset. arXiv preprint arXiv:2108.13897.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.

Stephen Boyd. 2010. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122.

Nicholas Carlini, Ulfar Erlingsson, and Nicolas Papernot. 2019. Prototypical examples in deep learning: Metrics, characteristics, and utility.

Yiyi Chen, Russa Biswas, Heather Lent, and Johannes Bjerva. 2024a.
Against all odds: Overcoming typology, script, and language confusion in multilingual embedding inversion attacks. arXiv preprint arXiv:2408.11749.

Yiyi Chen, Heather Lent, and Johannes Bjerva. 2024b. Text embedding inversion security for multilingual language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7808–7827.

Yiyi Chen, Qiongxiu Li, Russa Biswas, and Johannes Bjerva. 2025a. Large language models are easily confused: A quantitative metric, security implications and typological analysis. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3810–3827, Albuquerque, New Mexico. Association for Computational Linguistics.

Yiyi Chen, Qiongkai Xu, and Johannes Bjerva. 2025b. ALGEN: Few-shot inversion attacks on textual embeddings using alignment and generation. arXiv preprint arXiv:2502.11308.

Rochelle Choenni, Dan Garrette, and Ekaterina Shutova. 2023. How do languages influence each other? Studying cross-lingual data sharing during LM fine-tuning. arXiv preprint arXiv:2305.13286.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, and 16 others. 2022. Scaling instruction-finetuned language models. Preprint, arXiv:2210.11416.

Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. Preprint, arXiv:1809.05053.

Wietse de Vries, Martijn Wieling, and Malvina Nissim. 2022. Make the best of cross-lingual transfer: Evidence from POS tagging with over 100 languages.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7676–7685, Dublin, Ireland. Association for Computational Linguistics.

Minxin Du, Xiang Yue, Sherman S. M. Chow, and Huan Sun. 2023. Sanitizing sentence embeddings (and labels) for local differential privacy. In Proceedings of the ACM Web Conference 2023, pages 2349–2359.

Vitaly Feldman. 2020. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959.

Richard Heusdens and Guoqiang Zhang. 2024a. Distributed nonlinear conic optimisation with partially separable structure. arXiv preprint arXiv:2405.09490.

Richard Heusdens and Guoqiang Zhang. 2024b. Distributed optimisation with linear equality and inequality constraints using PDMM. IEEE Transactions on Signal and Information Processing over Networks.

Yu-Hsiang Huang, Yuche Tsai, Hsiang Hsiao, Hong-Yi Lin, and Shou-De Lin. 2024. Transferable embedding inversion attack: Uncovering privacy risks in text embeddings without model queries. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4193–4205, Bangkok, Thailand. Association for Computational Linguistics.

Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavas. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual transformers. In Conference on Empirical Methods in Natural Language Processing.

Haoran Li, Mingshi Xu, and Yangqiu Song. 2023. Sentence embedding leaks more information than you expect: Generative embedding inversion attack to recover the whole sentence. In Findings of the Association for Computational Linguistics: ACL
2023, pages 14022–14040, Toronto, Canada. Association for Computational Linguistics.

Qiongxiu Li, Richard Heusdens, and Mads Græsbøll Christensen. 2020. Privacy-preserving distributed optimization via subspace perturbation: A general framework. IEEE Transactions on Signal Processing, 68:5983–5996.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81.

Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR.

John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, and Alexander M. Rush. 2023. Text embeddings reveal (almost) as much as text. arXiv preprint arXiv:2310.06816.

Yuyuan Ouyang, Yunmei Chen, Guanghui Lan, and Eduardo Pasiliao Jr. 2015. An accelerated linearized alternating direction method of multipliers. SIAM Journal on Imaging Sciences, 8(1):644–681.

Jie Peng, Weiyu Li, and Qing Ling. 2021. Byzantine-robust decentralized stochastic optimization over static and time-varying networks. Signal Processing, 183:108020.

Fred Philippy, Siwen Guo, and Shohreh Haddadan. 2023. Towards a common understanding of contributing factors for cross-lingual transfer in multilingual language models: A review. arXiv preprint arXiv:2305.16768.

Michael Rabbat and Robert Nowak. 2004. Distributed optimization in sensor networks. In Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pages 20–27.

Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, CCS '20, pages 377–390, New York, NY, USA. Association for Computing Machinery.

Huahua Wang and Arindam Banerjee. 2014. Bregman alternating direction method of multipliers. Advances in Neural Information Processing Systems, 27.

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533.

Søren Wichmann, Eric W. Holman, and Cecil H. Brown. 2022. CLDF dataset derived from Wichmann et al.'s "ASJP Database" v20 from 2022.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.

Wenrui Yu, Qiongxiu Li, Milan Lopuhaä-Zwakenberg, Mads Græsbøll Christensen, and Richard Heusdens. 2024. Provable privacy advantages of decentralized federated learning via distributed optimization. IEEE Transactions on Information Forensics and Security.

Guoqiang Zhang and Richard Heusdens. 2017. Distributed optimization using the primal-dual method of multipliers. IEEE Transactions on Signal and Information Processing over Networks, 4(1):173–187.

A Derivation of Normal Equation

The optimal alignment matrix W is obtained by minimizing a cost function J that quantifies the discrepancy between the attack embedding matrix E_A and the transformed victim embeddings E_{V→A} = E_V W:

J(W) = (1/2)(E_A − E_V W)^⊤ (E_A − E_V W)
     = (1/2)(E_A^⊤ E_A − E_A^⊤ E_V W − (E_V W)^⊤ E_A + (E_V W)^⊤ E_V W)
     = (1/2)(E_A^⊤ E_A − E_A^⊤ E_V W − W^⊤ E_V^⊤ E_A + W^⊤ E_V^⊤ E_V W).   (3)
By calculating the derivative of J(W), we have

∇_W J(W) = (1/2) ∇_W (E_A^⊤ E_A − E_A^⊤ E_V W − W^⊤ E_V^⊤ E_A + W^⊤ E_V^⊤ E_V W) = E_V^⊤ E_V W − E_V^⊤ E_A.   (4)

The optimal W is achieved when the derivative equals zero:

E_V^⊤ E_V W = E_V^⊤ E_A.   (5)

Then, the matrix W that minimizes J(W) is

W = (E_V^⊤ E_V)^{-1} E_V^⊤ E_A.   (6)

B Topology Construction

To illustrate this approach, consider the syntactic distance matrix obtained from Lang2vec for English (eng), French (fra), and Italian (ita):

| | eng | fra | ita |
|---|---|---|---|
| eng | 0 | 0.46 | 0.51 |
| fra | 0.46 | 0 | 0.55 |
| ita | 0.51 | 0.55 | 0 |

where each entry D_ij represents the syntactic dissimilarity between a language pair. By applying different threshold values r, we construct distinct topological configurations of language relationships. Fig. 8 demonstrates how the network connectivity varies with increasing r:

- At r = 0.45: no edges form
- At r = 0.47: the eng–fra connection emerges
- At r = 0.52: the eng–ita connection appears while fra–ita remains disconnected
- At r = 0.56: the complete graph forms

Figure 8: Linguistic topological graph of English, French, and Italian with different thresholds r. The higher the threshold, the denser the connectivity.

C Other Experimental Results

Figure 9: Cross-lingual Inversion Performances with Lang2vec Graph in Cosine Similarities across Training Samples.

| Model | Huggingface | Architecture | #Languages | Dimension |
|---|---|---|---|---|
| FLAN-T5 (Chung et al., 2022) | google/flan-t5-small | Encoder-Decoder | 60 | 512 |
| E5-SMALL-V2 (Wang et al., 2022) | intfloat/e5-small-v2 | Encoder | 1 | 384 |
| MT5 (Xue et al., 2020) | google/mt5-base | Encoder-Decoder | 102 | 768 |
| TEXT-EMBEDDING-ADA-002 | OpenAI API | Encoder | 100+ | 1536 |

Table 3: Details of LLMs and Embeddings.

Figure 10: Cross-lingual Inversion Performances with ADA-2 Victim Model in Rouge-L Scores across Training Samples.

Figure 11: Cross-lingual Inversion Performances with E5 Victim Model in Rouge-L Scores across Training Samples.

Figure 12: Cross-lingual Inversion Performances with ADA-2 Victim Model in Cosine Similarities across Training Samples.
Figure 13: Cross-lingual Inversion Performances with E5 Victim Model in Cosine Similarities across Training Samples.

Figure 14: Cross-lingual Inversion Performances with Attack Model trained in Spanish in Cosine Similarities across Training Samples.

| Lang | ϵ_dp | Rouge-L ↓ (LapMech) | COS ↓ (LapMech) | Rouge-L ↓ (PurMech) | COS ↓ (PurMech) |
|---|---|---|---|---|---|
| eng→eng | 1 | 13.16 | 0.0751 | 13.35 | 0.0199 |
| eng→eng | 4 | 12.95 | 0.0257 | 12.61 | 0.0510 |
| eng→eng | 8 | 14.01 | 0.0845 | 13.88 | 0.1320 |
| eng→eng | 12 | 13.52 | 0.1720 | 13.86 | 0.1162 |
| eng→fra | 1 | 1.60 | -0.0168 | 1.90 | -0.0189 |
| eng→fra | 4 | 1.77 | -0.0161 | 2.10 | 0.1081 |
| eng→fra | 8 | 2.02 | 0.1040 | 2.10 | 0.1428 |
| eng→fra | 12 | 1.92 | 0.1271 | 2.46 | 0.1853 |
| eng→deu | 1 | 0.86 | 0.0080 | 0.62 | -0.0240 |
| eng→deu | 4 | 0.99 | 0.0259 | 0.62 | -0.0216 |
| eng→deu | 8 | 0.77 | 0.0960 | 0.64 | 0.0881 |
| eng→deu | 12 | 0.70 | 0.1815 | 1.22 | 0.1944 |
| eng→spa | 1 | 1.58 | 0.0431 | 1.78 | 0.0729 |
| eng→spa | 4 | 1.35 | 0.0318 | 1.45 | 0.0360 |
| eng→spa | 8 | 1.87 | 0.2408 | 1.94 | 0.1119 |
| eng→spa | 12 | 1.65 | 0.1875 | 2.29 | 0.1846 |

Table 4: Cross-lingual Inversion Performance with |D_V| = 100 on Classification Tasks on the SNLI dataset with Local DP (Total Variation). From a defender's perspective, ↓ means lower is better.

Figure 15: Cross-lingual Inversion Performance on Classification Tasks on the SNLI dataset with Local DP (ϵ_dp = 12) in Cosine Similarities.

Figure 16: Cross-lingual Inversion Performance on Classification Tasks on the SNLI dataset with Local DP (ϵ_dp = 12) in Rouge-L Scores.
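The Appendix B threshold sweep can be checked numerically with the adjacency rule A = (1 − sign(D − r))/2 from Section 4.1 (a small sketch; the distance values are taken from the Lang2vec matrix above):

```python
import numpy as np

# Lang2vec syntactic distances for (eng, fra, ita), as given in Appendix B
D = np.array([[0.00, 0.46, 0.51],
              [0.46, 0.00, 0.55],
              [0.51, 0.55, 0.00]])
langs = ["eng", "fra", "ita"]

def edges(D, r):
    """Apply A = (1 - sign(D - r)) / 2 and list the off-diagonal edges."""
    A = (1 - np.sign(D - r)) / 2
    return {(langs[i], langs[j])
            for i in range(len(D)) for j in range(i + 1, len(D)) if A[i, j] == 1}

assert edges(D, 0.45) == set()                              # no edges form
assert edges(D, 0.47) == {("eng", "fra")}                   # eng-fra emerges
assert edges(D, 0.52) == {("eng", "fra"), ("eng", "ita")}   # fra-ita still apart
assert edges(D, 0.56) == {("eng", "fra"), ("eng", "ita"), ("fra", "ita")}  # complete
```

Note that the diagonal of A is 1 under this rule (since D_ii − r < 0); the sketch skips it by only enumerating pairs with j > i.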
arXiv:2505.16014v2 [cs.CL] 23 May 2025

Ranking Free RAG: Replacing Re-ranking with Selection in RAG for Sensitive Domains

Yash Saxena (UMBC, Baltimore, Maryland, USA; ysaxena1@umbc.edu), Ankur Padia (UMBC, Baltimore, Maryland, USA; pankur1@umbc.edu), Mandar S. Chaudhary* (eBay Inc., San Jose, California, USA; manchaudhary@ebay.com), Kalpa Gunaratna† (Samsung Research America (SRA), San Francisco, California, USA; k.gunaratna@samsung.com), Srinivasan Parthasarathy (Ohio State University, Columbus, Ohio, USA; srini@cse.ohio-state.edu), Manas Gaur (UMBC, Baltimore, Maryland, USA; manas@umbc.edu)

Abstract

Traditional Retrieval-Augmented Generation (RAG) pipelines rely on similarity-based retrieval and re-ranking, which depend on heuristics such as top-k and lack explainability, interpretability, and robustness against adversarial content. To address this gap, we propose a novel method, METEORA, that replaces re-ranking in RAG with a rationale-driven selection approach. METEORA operates in two stages. First, a general-purpose LLM is preference-tuned to generate rationales conditioned on the input query using direct preference optimization. These rationales guide the evidence chunk selection engine, which selects relevant evidence in three stages: pairing individual rationales with corresponding retrieved evidence for local relevance, global selection with elbow detection for a query-adaptive cutoff, and context expansion via neighboring evidence. This process eliminates the need for top-k heuristics. The rationales are also used for a consistency check by a Verifier LLM to detect and filter poisoned or misleading content for safe generation. The framework provides an explainable and interpretable evidence flow by using rationales consistently across both selection and verification. In our evaluation across six datasets spanning legal, financial, and academic research domains, METEORA improves generation accuracy by 33.34% while using approximately 50% less evidence than state-of-the-art re-ranking methods.
In adversarial settings, METEORA significantly improves the F1 score from 0.10 to 0.44 over the state-of-the-art perplexity-based defense baseline, demonstrating strong resilience to poisoning attacks. The code is available in the GitHub repository.³

*This work does not relate to the author's position at eBay Inc.
†This work does not relate to the author's position at Samsung Research America.
³https://anonymous.4open.science/r/METEORA-DC46/README.md

Preprint. Under review.

Figure 1: Overview of our METEORA framework.

1 Introduction

Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by augmenting input prompts with external context, enabling more accurate and grounded responses. Recent works have applied RAG to reduce hallucinations [Ayala and Bechard, 2024], improve downstream task performance [Zheng et al., 2025], and enhance answer quality in open-domain question answering (QA) [Choi et al., 2025]. A typical RAG pipeline comprises a retriever, a re-ranker, and a generator. The re-ranker is crucial for selecting the contextually relevant top-k chunks, or evidence, before passing them to the generator [Glass et al., 2022]. Despite the growing popularity of RAG, its black-box nature poses challenges in sensitive domains like law, finance, and academic research [Zhou et al., 2024, Xue et al., 2024]. Existing re-ranking methods suffer from three main limitations. First, existing re-ranking approaches lack interpretability, as they rely on opaque similarity scores and a manually defined number k to select evidence, without providing justification for why particular evidence was chosen. Second, re-rankers are not robust to adversarial attacks such as the injection of irrelevant or poisoned content, which can corrupt the selected context and negatively impact the final generation. Third, these methods depend on heuristic decisions, particularly the choice of
top-k, which is often query-specific and difficult to determine in advance. Selecting too little evidence may omit critical context, while selecting too much can introduce noise, degrading answer quality [Leng et al., 2024]. Such limitations raise serious concerns in high-stakes domains where factual accuracy and robustness are critical [Barron et al., 2024, Bhushan et al., 2025].

Figure 2: Generation Results. Comparison of different retrieval approaches across all datasets using the LLaMA-3.1-8B generator.

Recent work such as RAG² [Sohn et al., 2025] and RankRAG [Yu et al., 2024] has attempted to address these challenges. RAG² uses rationales to improve retriever capabilities to identify relevant evidence. However, it still relies on a re-ranking step to reorder retrieved evidence, thereby inheriting the limitations of traditional re-rankers, including a dependence on fixed top-k cutoffs and opaque scoring mechanisms. Similarly, RankRAG instruction-tunes a single LLM to both rank retrieved contexts and generate answers based on the retrieved passages. Because it leverages the LLM's parametric knowledge, RankRAG suffers from limited interpretability and lacks robust filtering mechanisms.

To address the above challenges and limitations, we propose the Method for Interpretable rank-free evidence selection with Optimal Rationale (METEORA), a rationale-driven, interpretable, and robust framework as an alternative to traditional re-ranking. As shown in Figure 1, METEORA operates in two phases. In the first phase, a general-purpose LLM, π_ref, is preference-tuned using Direct Preference Optimization (DPO) [Rafailov et al., 2023a] to learn an optimal policy π*_θ that generates query-aligned rationales. To tune the policy model, METEORA does not require a manually annotated preference dataset; instead, the dataset is constructed from existing QA annotations. This preference data generation method is generalizable across datasets and domains.
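The DPO objective used in this first phase reduces, per example, to a logistic loss on the gap between policy and reference log-probability ratios for the preferred and dispreferred rationales. A minimal sketch with made-up log-probabilities (no real model; `beta` and the numeric values are illustrative assumptions, not the paper's settings):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (delta_w - delta_l)),
    where delta_w / delta_l are policy-minus-reference log-probs of the
    preferred (r_w) and dispreferred (r_l) rationales."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy already prefers r_w more strongly than the reference -> small loss.
low = dpo_loss(logp_w=-4.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-6.0, beta=1.0)
# Policy prefers the dispreferred r_l -> large loss.
high = dpo_loss(logp_w=-9.0, logp_l=-4.0, ref_logp_w=-6.0, ref_logp_l=-6.0, beta=1.0)
print(low, high)
```

At a zero margin the loss equals log 2; pushing the policy toward r_w drives it toward zero, which is the gradient signal that preference-tunes the rationale generator.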
In the second phase, the model π*_θ is used to generate rationales to select and verify evidence. The generated rationales guide an unsupervised evidence selector, the Evidence Chunk Selection Engine (ECSE) (Section 2.3), to select relevant evidence in three steps: pairing individual rationales with retrieved evidence for local alignment, semantic pooling with elbow detection for adaptive thresholding, and expansion via neighboring evidence to recover missed evidence. Such a setting enables adaptive and interpretable evidence selection without relying on a subjective top-k heuristic. Moreover, the rationales help to flag evidence that is inconsistent with the query and filter it using a Verifier LLM. Because rationales are used consistently across both selection and verification, METEORA offers a fully explainable and traceable evidence flow, allowing users to introspect the selected evidence, explaining why it was selected and how it influenced the final generated answer. In contrast to previous approaches, METEORA eliminates the re-ranking stage, integrating rationale-based decisions at every step of evidence processing.

Using rationales can significantly improve the quality of generated answers. To illustrate this, consider the plots in Figure 1 (top right), which show how traditional re-ranking methods based solely on similarity fail to capture contextually relevant evidence. This results in reduced recall for complex QA tasks across legal, financial, and academic research datasets. In contrast, METEORA employs rationale-guided selection to align evidence more precisely with the query, improving recall by 14.8% without compromising precision compared to the best-performing baseline. Furthermore, Figure 2 demonstrates that this improved selection leads directly to a 33.34% increase in generation accuracy over the next best-performing baseline, further validating the importance of recall and accurate evidence selection. This advantage is particularly valuable in high-stakes domains, where factual accuracy is paramount.

Our main contributions are:

• We propose METEORA, a rationale-driven, explainable, interpretable, and robust alternative to re-ranking in RAG that eliminates the need for top-k heuristics.
• We introduce ECSE, an unsupervised rationale-based evidence selector, and a Verifier LLM that checks the consistency of the evidence to filter poisoned content.
• We conduct extensive experiments across legal, financial, and academic research domains, showing consistent gains over state-of-the-art (SoTA) baselines. Moreover, in adversarial settings, METEORA significantly improves the average precision and recall scores by an order of magnitude over the SoTA perplexity-based defenses.

2 METEORA

METEORA is a rationale-driven framework designed to select relevant evidence from a document for a given query such that the selection is interpretable, explainable, free from top-k heuristics, and robust to adversarial content. The framework operates in two stages: rationale generation (subsection 2.2) and selection (subsections 2.3 and 2.4).

2.1 Problem Formulation

METEORA aims to replace the traditional re-ranking step in RAG architectures with rationales. Formally, given a query q and a knowledge base D, the objective is to select a subset of evidence E_s from retrieved documents D_s, where D_s ⊂ D, such that E_s (i) maximizes the coverage of relevant information to obtain higher recall, (ii) minimizes the inclusion of irrelevant or poisoned content to obtain higher precision, and (iii) achieves this without relying on vague heuristics such as a fixed number of top-k evidence.
2.2 Preference-Tuned Rationale Generator

The rationale generator aims to generate a set of rationales conditioned on the query. Figure 3 shows an example of a rationale generated for a real-world query from the PrivacyQA dataset. To enable effective learning without manual annotation while ensuring high-quality rationale generation, we use an off-the-shelf LLM and preference-tune it to generate rationales aligned with a query. We automatically construct a preference dataset by pairing the query with rationales that led to correct evidence selection as positive samples (r_w), and with other rationales as negative samples (r_l). This setup enables training the model using DPO, as defined by the following objective:

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\mathbb{E}_{(q,c,r_w,r_l)\sim D}\left[\log\sigma\left(\beta\log\frac{\pi_\theta(r_w,q,c)}{\pi_{\mathrm{ref}}(r_w,q,c)} - \beta\log\frac{\pi_\theta(r_l,q,c)}{\pi_{\mathrm{ref}}(r_l,q,c)}\right)\right]. \quad (1)

Example Query: TickTick: To Do List with Reminder, Day Planner's privacy policy; can it view my real name?
Rationale: Search for terms like "real name", "PII", or "user information", especially in sections covering data collection, use, or disclosure.
Flagging Instructions: Flag the chunk if it contains internally inconsistent language about real name usage, or if it contradicts other verified parts of the policy.
Figure 3: Generated Rationale Example

Here, π_ref is a frozen reference model, while π_θ is the same architecture with trainable parameters. The hyperparameter β controls the sharpness of preference learning. The rationale r_w corresponds to the rationale that leads to the correct evidence (positive), and r_l corresponds to a rationale that leads to incorrect evidence (negative). The model is trained to assign a higher likelihood to r_w over r_l given the query q and evidence e, i.e., r_w ≻ r_l | q, e. Examples for positive and negative rationales can
be found in subsection A.2. Figure 7 (in subsection A.2) illustrates a case where some of the generated rationales successfully lead to the selection of the correct evidence. In contrast, Figure 8 (subsection A.2) shows a case where none of the rationales selected the correct evidence. Despite this, both examples demonstrate interpretability by revealing which rationales contributed to the selection of which evidence, and explainability by clarifying why the LLM produced a particular answer.

We model the joint distribution of rationale, query, and evidence, π_θ(r_w, q, e), as a product of two conditional distributions: π_θ(e, r | q) = π_θ(e | q) · π_θ(r | q, e). Since the oracle context e is available during training, we focus on learning π_θ(r | q, e) and optimize rationale quality using the cosine similarity between rationale and evidence. At test time, where the true context is unavailable, we condition only on the query to generate rationales. Such an approximation still yields superior performance compared to baselines, as shown in section 4. The final preference-tuned model π*_θ is used to generate aligned rationales for downstream selection and verification. The prompt used for rationale generation is provided in subsection A.3, and implementation details for preference-tuning the LLM are described in Table 6.

2.3 ECSE: Unsupervised Rationale-Based Chunk Selector

The Evidence Chunk Selection Engine (ECSE) is an unsupervised model that identifies the most relevant evidence by combining the generated rationales, the input query, and the list of documents. ECSE identifies relevant evidence using the following techniques:

Pairing Rationales with Evidence. Rationale-based pairing computes the similarity score (E_v) between each pair of a rationale and the evidence chunks e_j from the documents, and selects the evidence with the highest match. Rationale and evidence are encoded using an SBERT model to compute cosine similarity. Such a method ensures higher precision.
E_v = \left\{ \max_{e_j \in E} S(r_i, e_j) \,\middle|\, r_i \in R \right\} \quad (2)

Pooling Rationales. To identify evidence that aligns with the overall intent of all rationales, we compute a pooled embedding \bar{r} by averaging the individual rationale embeddings: \bar{r} = \frac{1}{|R|}\sum_{r_i \in R} \mathrm{SBERT}(r_i). We then calculate the cosine similarity between \bar{r} and each evidence embedding. The evidence chunks are sorted by similarity, yielding a sequence of scores {s_1, s_2, ..., s_n}. We compute first-order differences Δ_i = s_i − s_{i+1} and apply z-score normalization to detect statistically significant drops in similarity. We select the first index k* where the z-score of Δ_k deviates notably from the mean.⁴ If no such drop is found, we fall back to the point of maximum curvature using second-order differences. We select the top-k* evidence chunks as E_g, enabling an adaptive, data-driven cutoff without heuristic thresholds. For a detailed explanation of the working of the elbow detection, refer to subsection A.5.

Expanding Rationale Selection. Each of the selected evidence chunks in E_g and E_v is expanded by considering the evidence before and after the selected chunk, yielding a wider set of evidence E_w. The final set of candidate evidence is selected as: E_s = E_v ∪ E_g ∪ E_w.

2.4 Verifier LLM

Figure 4: Comparison of Bi-encoder (left), Cross-Encoder (middle), and METEORA (right).

To make RAG robust against adversarial content, METEORA incorporates a Verifier LLM that filters
the selected evidence E_s before generation. The Verifier evaluates each evidence chunk using the input query and its associated rationale, which includes embedded Flagging Instructions. Evidence is flagged for (i) factual violations, when the content contradicts established facts; (ii) contradiction, when a chunk is logically inconsistent with other verified chunks; and (iii) instruction violations, when a chunk fails to meet the criteria embedded in the rationale. Flagged evidence is discarded, and only the filtered set is passed to the generator. For more information about the Verifier LLM and the prompt format, see subsection 3.4 and subsection A.3, respectively.

2.5 METEORA Covers Bi-encoder and Cross-Encoder

Figure 4 compares the bi-encoder, the cross-encoder, and METEORA. METEORA generalizes both SBERT and Cross-Encoder: when the ECSE and Verifier are identity functions, it reduces to these models. Unlike Cross-Encoder, which jointly processes query and document, METEORA processes them independently like SBERT but uses rationales to guide evidence selection, combining efficiency with interpretability.

3 Experiments

To demonstrate the effectiveness of rationales, we evaluate METEORA on three tasks across six real-world benchmark datasets spanning multiple domains, and provide a detailed ablation study. For clarity, we report results using evidence chunks of 512 tokens in the main paper; results for other sizes are included in Appendix subsection A.7.

3.1 Tasks

We use the following three tasks: (i) the Context Prioritization (CP) task, which measures the ability to select relevant evidence for a given query; (ii) the Generation task, which assesses the quality of generated answers using the selected or re-ranked evidence; and (iii) the Adversarial Defense task, which evaluates the ability to detect and filter poisoned or misleading content in the corpus. For the adversarial setting, we follow the poisoning strategy of Nazary et al.
[2025], who injected five poisoned texts per instance. In our setup, we inject only one poisoned evidence chunk per instance. Detecting a single poisoned instance is more difficult because it is hidden among mostly correct content, making it harder for the model to recognize it as harmful. When multiple poisoned instances are present, it is easier to spot unusual patterns, but with just one, the model has fewer signals to detect the attack.

3.2 Datasets

We select six datasets spanning legal, financial, and academic domains based on their suitability for evaluating the Context Prioritization (CP) task, which measures how accurately systems select relevant text evidence from documents to answer queries. Each dataset provides: (1) question-answer (QA) pairs, (2) complete references to lengthy documents ranging from 5 to 50 pages, and (3) human-annotated evidence spans that explicitly mark which specific sentences or paragraphs contain the exact information required to answer each query.

⁴A large similarity drop after the cutoff indicates the evidence is less aligned with the rationale intent.

Table 1: Dataset Statistics Across Domains

Dataset      #Documents  Avg. Tokens/Doc  #QA Pairs  Domain
ContractNLI  95          10,673           946        Legal, NDA-related documents
CUAD         462         55,827           4,042      Legal, private contracts
MAUD         150         351,476          1,676      Legal, M&A documents of public companies
PrivacyQA    7           25,266           194        Legal, privacy policies of consumer apps
FinQA        2,789       ~700             8,281      Finance, financial documents of companies
QASPER       1,585       ~6,500           5,000+     Academic research, NLP papers

These evidence spans serve as
our ground-truth evidence: precisely identified text segments that evaluation metrics compare against system-selected evidence to compute retrieval accuracy scores, including Precision, Recall, and F1. In the legal domain, we use: (1) ContractNLI [Koreeda and Manning, 2021], where each instance evaluates whether contract clauses entail, contradict, or remain neutral to hypotheses, with law experts highlighting the exact clause text needed for reasoning; (2) PrivacyQA [Ravichander et al., 2019], containing questions about data collection practices, with privacy specialists identifying the specific policy sections containing relevant disclosure statements; (3) CUAD [Hendrycks et al., 2021], comprising commercial contracts where legal professionals have marked the exact paragraphs corresponding to 41 distinct clause categories; and (4) MAUD [Pipitone and Alami, 2024], featuring merger agreements where corporate attorneys have indicated the precise sections addressing specific acquisition terms. In the finance domain, we use FinQA [Chen et al., 2021], which requires extracting specific numerical values from financial reports to perform arithmetic operations (addition, subtraction, multiplication, division, percentages) for quarterly earnings analyses, with financial analysts marking the exact tables, figures, and statements containing the required values. Finally, in academic research, we use QASPER [Dasigi et al., 2021], which presents questions about research papers where domain scientists have identified the minimal set of sentences containing methodological details, experimental results, or theoretical claims needed for accurate answers. Table 1 shows the statistics.

3.3 Baselines

For fair comparison, we selected baseline models that span diverse re-ranking paradigms, including supervised and unsupervised, dual-encoder and cross-encoder designs. For the CP task, we compare our approach with SoTA re-ranking methods.
These include the Cross-Encoder ms-marco-MiniLM-L4-v2 [Huggingface], which jointly encodes the query and document to compute a fine-grained relevance score; Contriever [Izacard et al., 2022], an unsupervised dense retriever trained via contrastive learning; and SBERT [Reimers and Gurevych, 2019], a bi-encoder model that computes similarity using sentence embeddings. Additionally, we include a fine-tuned domain-specific variant of SBERT, the Fine-Tuned SBERT [Legal-huggingface], as base SBERT was found to perform better than or on par with Cross-Encoder and Contriever in preliminary experiments. This made it a suitable backbone to evaluate the effect of domain adaptation on re-ranking performance. We include RankRAG as an LLM-based baseline, which prompts the generator LLM to retrieve and re-rank document evidence based on the query, using the top-ranked evidence for answer generation. As the RankRAG model is not publicly released, we reproduce it using the Promptriever model [Weller et al., 2024] and achieved similar results. To evaluate adversarial robustness, we compare our Verifier against the Perplexity-based Defense [Zhou et al., 2024], which uses perplexity, a measure of deviation from expected language model behavior, to identify potentially poisoned content.

3.4 Settings

CP and Generation. We used LLaMA-3.1-8b-Instruct [Kassianik et al., 2025] to generate rationales, verify the evidence, and generate text from the selected evidence. For rank-based baselines, we vary top-k from 1 to 64 to evaluate performance for each dataset and evidence size, and report both. For fair comparison in CP, we set the k value used for each dataset equal to the average number of evidence chunks selected by METEORA for all baselines.

Adversarial Defense. The goal of this controlled contamination setup
is to evaluate whether METEORA can detect and filter poisoned content before it influences the generation process. To simulate knowledge-corpus poisoning in RAG, we follow the protocol proposed by Nazary et al. [2025], which implements an adversarial injection attack where malicious content is strategically embedded within retrieval contexts. Since the poisoning is introduced within domain-specific datasets, we use domain-matched expert LLMs to generate semantically coherent but factually incorrect content that maintains the linguistic style of legitimate documents. For the legal domain, we use Law-Chat [Cheng et al., 2024]; for the financial domain, we use Finma-7b-full [Xie et al., 2023]; and for academic research, we use LLaMA-3.1-8b-Instruct to generate plausible-sounding but factually contradictory statements. Specifically, we randomly select 30% of QA instances in each dataset and poison them with generated text by inserting the text in the document containing the correct context.

Table 2: CP Task Results across Datasets. METEORA achieves the best average recall and precision, outperforming all baselines. Since METEORA does not depend on a specific number K of evidence chunks, we take the average number selected by METEORA and set it for the other baselines as well for fair comparison using Precision (P@K) and Recall (R@K).

Model            QASPER       Contract-NLI  FinQA        PrivacyQA    CUAD         MAUD         Average
                 P@8   R@8    P@3   R@3     P@10  R@10   P@6   R@6    P@12  R@12   P@33  R@33   P     R
SBERT            0.26  0.91   0.38  0.91    0.12  0.96   0.24  0.89   0.11  0.78   0.03  0.41   0.19  0.81
Contriever       0.25  0.94   0.37  0.89    0.11  0.98   0.23  0.82   0.10  0.73   0.01  0.46   0.17  0.80
Cross-Encoder    0.27  0.94   0.38  0.91    0.12  0.97   0.22  0.81   0.10  0.71   0.02  0.50   0.18  0.80
Finetuned-SBERT  0.26  0.92   0.38  0.92    0.13  0.97   0.21  0.75   0.11  0.76   0.02  0.46   0.18  0.79
RankRAG          0.19  0.76   0.23  0.77    0.08  0.89   0.19  0.86   0.07  0.60   0.01  0.22   0.13  0.68
METEORA          0.26  0.99   0.35  1.00    0.12  0.95   0.23  0.98   0.12  0.93   0.03  0.72   0.19  0.93
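The contamination protocol described above might be sketched as follows. This is an illustrative reading, not the authors' code: `make_poison` is a hypothetical stand-in for the domain-matched expert LLM call, the 30% sampling rate follows the text, and placing the poison directly after the gold context is one plausible way to "insert the text in the document containing the correct context".

```python
import random

def make_poison(question):
    # Hypothetical placeholder: a real implementation would prompt a
    # domain-matched expert LLM (Law-Chat, Finma-7b-full, or
    # LLaMA-3.1-8b-Instruct) for fluent but factually incorrect text.
    return f"[POISON] Contrary to the record, the answer to '{question}' is otherwise."

def poison_corpus(qa_instances, rate=0.3, seed=0):
    """Poison a fraction of QA instances by inserting one adversarial
    passage into the document chunk list, next to the gold context."""
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(qa_instances)), int(rate * len(qa_instances))))
    for i, inst in enumerate(qa_instances):
        if i in chosen:
            chunks = inst["document"]  # document as a list of chunks
            pos = chunks.index(inst["gold_context"]) + 1
            chunks.insert(pos, make_poison(inst["question"]))
            inst["is_poisoned"] = True
    return qa_instances
```

Injecting a single passage per instance (rather than five, as in Nazary et al. [2025]) is what makes the defense setting harder here: the poison competes one-on-one with mostly correct content.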
Appendix A.3 contains the prompt for poisoning the datasets, along with an example.

3.5 Evaluation Metrics

For the CP task, we evaluate performance using Precision@k (P@k), Recall@k (R@k), and the Precision-Recall curve. To assess the quality of the generated responses, we use GPT-4o [OpenAI et al., 2024] as an evaluator. For each dataset, we randomly sample 100 instances and prompt GPT-4o to score each generated answer based on its agreement with the ground-truth answer, assigning a score of +1 for correct and 0 otherwise. Upon manual verification of GPT-4o responses, we found that GPT-4o produces consistent and accurate evaluations across datasets, allowing us to automatically compute the overall accuracy of the generated answers. We consider only those instances where all methods at least included the correct evidence. For the Adversarial Defense task, we use the existing evaluation protocol from Nazary et al. [2025] to compare the resilience of different methods using two metrics: Precision, which measures how accurately each method identifies poisoned content among the selected evidence, and Recall, which measures how many of the poisoned evidence chunks are correctly detected as poisoned.

4 Results

4.1 Results from the CP Task

Table 2 presents the performance of various methods on the CP task across all datasets. Overall, METEORA achieves the best precision and recall, consistently outperforming all baselines.
Notably, RankRAG, which uses the generator model to re-rank retrieved evidence, performs worse than standard re-rankers and significantly underperforms METEORA. This highlights the limitations of relying solely on the model's parametric knowledge and underscores the effectiveness of using rationales for selection. Interestingly, SBERT outperforms its domain-specific fine-tuned variant on legal datasets, indicating a drop in representation quality after fine-tuning. This degradation is likely caused by overfitting on a limited amount of legal-domain data. On the other hand, the gains in precision and recall with METEORA demonstrate the utility of rationale-guided evidence selection. The rationale generator allows ECSE to prioritize the most contextually relevant evidence, leading to more accurate and robust selection. Combining Table 1 with Table 2, the average document length increases from left to right, with CUAD and MAUD being particularly challenging. METEORA maintains strong performance even in these challenging conditions, including cases where documents are long. For simpler datasets like QASPER, ContractNLI, and FinQA, METEORA performs comparably to or better than existing methods. Given that real-world data is often noisy and unstructured, the consistent gains across diverse settings highlight the practical value of using rationales. The average per-query compute time for each method is shown in subsection A.6.

Figure 5: Precision-Recall curves for all datasets.

4.2 Rationales Guide Selection of the Most Relevant Evidence

METEORA does not rely on a preset top-k cutoff. Its rationale-driven selector automatically stops once no additional evidence passes the relevance test, so on every precision-recall (P-R) plot it appears as a single operating point. Across all datasets in Figure 5, that point lies in the upper-right region: the recall is already competitive with the highest values attained by any baseline, while the precision is markedly higher.
Because the baselines must move along the curve by enlarging k, they pay a precision penalty as recall grows. In contrast, METEORA fixes k at the point where recall is strong and precision remains high; empirically this uses roughly 50% less evidence than the next-best method at the same recall level. The advantage is clearest on Contract-NLI: as shown in Figure 5, METEORA reaches perfect recall (1.0) with only three evidence chunks, whereas every competing approach requires all 64 available chunks to achieve the same recall, cutting precision nearly in half. Similar patterns hold on the more demanding CUAD and MAUD datasets, where METEORA maintains the best precision at any recall the baselines can match. These results confirm that rationale-guided selection scales gracefully with document length and conceptual complexity, delivering higher-quality evidence with far less noise than fixed-k re-ranking methods.

4.3 Results on the Generation Task

Figure 6: Accuracy of responses from LLaMA-3.1-8B, only when the evidence retrieved is present in the list of ground truth. Results are presented with variance bars to indicate statistical significance across multiple trials.

The improvement in retrieval performance achieved by METEORA directly contributes to its strong results in the generation task. Figure 6 shows the average accuracy of generating semantically correct answers across all the datasets for each of the methods
based on the evidence obtained from the document. METEORA selects the most relevant evidence, helping generate more accurate answers than the other baselines. Approaches with higher precision and recall in the CP task tend to produce more accurate answers during generation. Re-ranking methods such as SBERT, Cross-Encoder, and Contriever show similar results with only minor variation. Fine-Tuned SBERT performs slightly worse, likely due to overfitting on limited domain-specific data. Overall, METEORA substantially outperforms all baselines and achieves a more than 33.34% absolute increase in accuracy compared to the next best-performing method, the Cross-Encoder. This clearly demonstrates the importance of rationale-guided selection in improving the overall quality of generated outputs.

Table 3: Corpus-poisoning detection results. METEORA shows strong resilience, clearly outperforming a perplexity-based defense on every dataset.

Method      QASPER           Contract-NLI     FinQA            PrivacyQA        CUAD             MAUD             Average
            P     R     F1   P     R     F1   P     R     F1   P     R     F1   P     R     F1   P     R     F1   P     R     F1
No Defense  0.00  0.00  0.00 0.00  0.00  0.00 0.00  0.00  0.00 0.00  0.00  0.00 0.00  0.00  0.00 0.00  0.00  0.00 0.00  0.00  0.00
Perplexity  0.18  0.08  0.11 0.25  0.05  0.08 0.18  0.07  0.11 0.23  0.11  0.15 0.14  0.07  0.09 0.10  0.04  0.06 0.18  0.07  0.10
METEORA     0.42  0.46  0.44 0.44  0.61  0.51 0.39  0.48  0.43 0.49  0.48  0.48 0.26  0.45  0.33 0.46  0.34  0.39 0.41  0.47  0.44

Table 4: ECSE ablation study. Pooling and expanding the evidence improves recall while maintaining competitive precision, especially in complex datasets like CUAD and MAUD.
Components                     QASPER       Contract-NLI  FinQA        PrivacyQA    CUAD         MAUD
                               P     R      P     R       P     R      P     R      P     R      P     R
Pairing                        0.32  0.88   0.38  0.95    0.19  0.84   0.26  0.71   0.15  0.70   0.03  0.42
Pairing + Pooling              0.29  0.95   0.36  0.97    0.14  0.91   0.25  0.91   0.14  0.79   0.02  0.50
Pairing + Pooling + Expanding  0.26  0.99   0.35  1.00    0.12  0.95   0.23  0.98   0.12  0.93   0.02  0.78

Table 5: Percentage of flagged poisoned evidence with respect to the total poisoned evidence, categorized by instruction, contradiction, and factual violations.

Flags          QASPER  Contract-NLI  FinQA  PrivacyQA  CUAD   MAUD
Instruction    32.13   42.62         37.02  43.20      24.55  46.08
Contradiction  0.23    0.24          1.84   0.64       0.31   0.00
Factual        9.62    1.13          0.11   5.15       1.14   0.00
Total          41.98   43.99         38.97  48.99      26.00  46.08

4.4 Adversarial Defense

Table 3 shows the ability of different methods to defend against adversarial attacks. No Defense is a reference baseline in which every poisoned evidence chunk is passed to the generator and hence receives a score of zero. Overall, METEORA achieves significantly higher recall and precision, demonstrating the effectiveness of the Verifier in detecting poisoned content. Surprisingly, a similar trend is observed when the evidence size is reduced, indicating the effectiveness of using rationales even in a limited context (refer to Table 11 in subsection A.7). We believe the effectiveness of the Verifier is due to the flagging instructions in the rationales, which help identify the poisoned evidence and remove it. Table 3 demonstrates the importance of using rationales, especially in identifying candidate evidence without fine-tuning the Verifier.

4.5 Ablation Study

Effect of each component of ECSE on METEORA. Table 4 shows the contribution of each component in the ECSE pipeline to constructing the initial pool of candidate evidence for answer generation. Rationale-based pairing alone yields the highest precision across all datasets. This is expected, as it selects
the single document most semantically aligned with each rationale, reducing the likelihood of irrelevant context. However, pooling the rationales significantly improves recall by capturing evidence that is broadly aligned with the overall intent of the rationales. Finally, expanding with adjacent evidence to include necessary surrounding context further increases recall, which is beneficial in longer documents or fragmented evidence settings, especially for challenging datasets such as CUAD and MAUD. Notably, the full ECSE configuration consistently achieves the best recall. These observations are consistent with smaller evidence chunk sizes; additional results are available in Table 10.

Verifier Ablation. Table 5 shows the contribution of each inconsistency criterion, Instruction, Contradiction, and Factual, to identifying poisoned evidence across datasets. Instruction-based flags consistently account for the largest share of detections, contributing over 30% across all datasets and reaching as high as 46.08% on MAUD and 43.20% on PrivacyQA. This shows that violations of rationale-derived instructions are the most effective signal for detecting adversarial content. Contradiction flags contribute very little across all datasets (below 2%), indicating that explicit logical inconsistencies are either rare or difficult to detect at the evidence level. Factual violations offer an additional signal, especially in QASPER (9.62%), where misleading facts are more likely to be introduced and go undetected. These results highlight the value of combining a diverse set of consistency checks, with instruction-based checks being the most reliable and factual checks offering helpful support in most domains.

5 Conclusion

We introduced METEORA, a rationale-driven, rank-free framework for interpretable and robust evidence selection.
By leveraging preference-tuned LLMs to generate rationales that guide both selection and verification, METEORA eliminates the need for top-k heuristics and opaque scoring functions. Through comprehensive evaluation across six domain-specific datasets, METEORA consistently selects more relevant and reliable evidence than existing methods, while providing transparent evidence flow and strong resilience to adversarial content. By replacing opaque re-ranking with a rationale-driven selection framework, our work contributes to more interpretable and explainable language systems. This has broader implications for safer deployment of LLMs in sensitive domains such as legal, financial, and scientific settings, where reliability and robustness to adversarial attacks play a crucial role. While METEORA demonstrates strong performance, several avenues remain for improvement. First, reducing reliance on few-shot prompts and improving rationale generation in low-resource settings could enhance generalizability. Second, constructing preference datasets without requiring gold contexts would increase applicability across diverse domains. Lastly, exploring efficiency-accuracy trade-offs could help optimize METEORA's multi-stage pipeline to better balance interpretability with speed.

References

Orlando Ayala and Patrice Bechard. Reducing hallucination in structured outputs via retrieval-augmented generation. In Yi Yang, Aida Davani, Avi Sil, and Anoop Kumar, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track), pages 228–238, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-industry.19. URL https://aclanthology.org/2024.naacl-industry.19/.

Lucia Zheng, Neel Guha, Javokhir Arifov, Sarah Zhang, Michal Skreta, Christopher D. Manning, Peter Henderson, and Daniel E. Ho.
A reasoning-focused legal retrieval benchmark. In Proceedings of the Symposium on Computer | https://arxiv.org/abs/2505.16014v2 |
Science and Law, CSLAW '25, pages 169–193. ACM, March 2025. doi: 10.1145/3709025.3712219. URL http://dx.doi.org/10.1145/3709025.3712219.

Chanyeol Choi, Jihoon Kwon, Jaeseon Ha, Hojun Choi, Chaewoon Kim, Yongjae Lee, Jy-yong Sohn, and Alejandro Lopez-Lira. Finder: Financial dataset for question answering and evaluating retrieval-augmented generation, 2025. URL https://arxiv.org/abs/2504.15800.

Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Rajaram Naik, Pengshan Cai, and Alfio Gliozzo. Re2g: Retrieve, rerank, generate, 2022. URL https://arxiv.org/abs/2207.06300.

Qingyun Zhou et al. The trustworthiness of retrieval-augmented generation systems. ACL, 2024.

Jiaqi Xue, Mengxin Zheng, Yebowen Hu, Fei Liu, Xun Chen, and Qian Lou. Badrag: Identifying vulnerabilities in retrieval augmented generation of large language models, 2024. URL https://arxiv.org/abs/2406.00083.

Quinn Leng, Jacob Portes, Sam Havens, Matei Zaharia, and Michael Carbin. Long context RAG performance of large language models, 2024. URL https://arxiv.org/abs/2411.03538.

Adam Barron et al. Domain-specific retrieval-augmented generation using expert-tuned llms. ACL, 2024.

Rohit Bhushan et al. Systematic knowledge injection in large language models for scientific qa. ICLR, 2025.

Jiwoong Sohn, Yein Park, Chanwoong Yoon, Sihyeon Park, Hyeon Hwang, Mujeen Sung, Hyunjae Kim, and Jaewoo Kang. Rationale-guided retrieval augmented generation for medical question answering. In Luis Chiruzzo, Alan Ritter, and Lu Wang, editors, Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 12739–12753, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-189-6. URL https://aclanthology.org/2025.naacl-long.635/.

Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, and Bryan Catanzaro.
Rankrag: Unifying context ranking with retrieval-augmented generation in llms. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 121156–121184. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/db93ccb6cf392f352570dd5af0a223d3-Paper-Conference.pdf.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023a.

Fatemeh Nazary, Yashar Deldjoo, and Tommaso di Noia. Poison-rag: Adversarial data poisoning attacks on retrieval-augmented generation in recommender systems. arXiv preprint arXiv:2501.11759, 2025.

Yuta Koreeda and Christopher D. Manning. Contractnli: A dataset for document-level natural language inference for contracts, 2021. URL https://arxiv.org/abs/2110.01799.

Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, and Norman Sadeh. Question answering for privacy policies: Combining computational and legal perspectives, 2019. URL https://arxiv.org/abs/1911.00841.

Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. Cuad: An expert-annotated nlp dataset for legal contract review, 2021. URL https://arxiv.org/abs/2103.06268.

Nicholas Pipitone and Ghita Houir Alami. Legalbench-rag: A benchmark for retrieval-augmented generation in the legal domain. arXiv preprint arXiv:2408.10343, 2024.

Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, et al. Finqa: A dataset of numerical reasoning over financial data. arXiv preprint arXiv:2109.00122, 2021.

Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A Smith, and Matt Gardner. A dataset of information-seeking questions and answers anchored in research papers.
arXiv preprint arXiv:2105.03011, 2021.

Huggingface. cross-encoder/ms-marco-MiniLM-L4-v2. https://huggingface.co/cross-encoder/ms-marco-MiniLM-L4-v2. [Accessed 16-05-2025].

Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning, 2022. URL https://arxiv.org/abs/2112.09118.

Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks, 2019. URL https://arxiv.org/abs/1908.10084.

Legal-huggingface. Stern5497/sbert-legal-xlm-roberta-base. https://huggingface.co/Stern5497/sbert-legal-xlm-roberta-base. [Accessed 16-05-2025].

Orion Weller, Benjamin Van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, and Jack Hessel. Promptriever: Instruction-trained retrievers can be prompted like language models, 2024. URL https://arxiv.org/abs/2409.11136.

Paul Kassianik, Baturay Saglam, Alexander Chen, Blaine Nelson, Anu Vellore, Massimo Aufiero, Fraser Burch, Dhruv Kedia, Avi Zohary, Sajana Weerawardhena, Aman Priyanshu, Adam Swanda, Amy Chang, Hyrum Anderson, Kojin Oshiba, Omar Santos, Yaron Singer, and Amin Karbasi. Llama-3.1-foundationai-securityllm-base-8b technical report, 2025. URL https://arxiv.org/abs/2504.21039.

Daixuan Cheng, Shaohan Huang, and Furu Wei. Adapting large language models to domains via reading comprehension, 2024. URL https://arxiv.org/abs/2309.09530.

Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, and Jimin Huang. Pixiu: A large language model, instruction data and evaluation benchmark for finance, 2023. URL https://arxiv.org/abs/2306.05443.

OpenAI: Aaron Hurst, Adam Lerer, Adam P. Goucher, et al. Gpt-4o system card, 2024. URL https://arxiv.org/abs/2410.21276.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks, 2021. URL https://arxiv.org/abs/2005.11401.

Tianyu Li et al. Lexrag: Retrieval-augmented generation for legal domain dialogue systems. arXiv preprint arXiv:2502.00001, 2025.

Zsolt Karpati and Norbert Szabo. Governing black boxes: On the use of retrieval-augmented systems in law. ICAIL, 2023.

Arjun Verma et al. Infusion attacks in retrieval-augmented generation. arXiv preprint arXiv:2402.00789, 2024.

Ori Yoran et al. Docent: Document-centric rag improves multi-hop qa. ACL, 2024.

Akari Asai et al. Self-rag: Learning to retrieve when answering questions. NeurIPS, 2023.

Xi Chen, Yuanzhi Li, and Jieming Mao. A nearly instance optimal algorithm for top-k ranking under the multinomial logit model, 2017. URL https://arxiv.org/abs/1707.08238.

Yuxin Ren, Qiya Yang, Yichun Wu, Wei Xu, Yalong Wang, and Zhiqiang Zhang. Non-autoregressive generative models for reranking recommendation, 2025. URL https://arxiv.org/abs/2402.06871.

Shaohan Qi et al. Mirage: Faithful attribution in retrieval-augmented generation. ICLR, 2024.

Long Ouyang et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.

Linyi Yang et al. Grpo: Generalized reinsertion preference optimization for instruction tuning. arXiv preprint arXiv:2309.02654, 2023.

Rafael Rafailov et al. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023b.

A Appendix

A.1 Related Work

RAG in Sensitive Domains.
RAG has been widely adopted in high-stakes domains such as law, finance, and healthcare, where factual accuracy and verifiability are critical [Lewis et al., 2021, Li et al., 2025, Sohn et al., 2025, Karpati and Szabo, 2023, Barron et al., 2024, Bhushan et al., 2025]. Such domains are regulated and hence require generations that are traceable to the retrieved and selected sources (for example, the denial of a loan application), yet systems remain prone to selecting semantically similar but contextually misleading evidence [Karpati and Szabo, 2023, Bhushan et al., 2025]. Furthermore, RAG pipelines remain susceptible to corpus-poisoning attacks [Verma et al., 2024, Zhou et al., 2024], highlighting the critical need for secure, context-aware retrieval methods. Our framework addresses these vulnerabilities by producing query-specific rationales that explain and justify chunk/evidence selection, thereby enhancing the system's transparency, interpretability, and overall robustness.

Heuristics in RAG. Most RAG systems use fixed top-k selection heuristics, which often hurt performance due to irrelevant or noisy context [Yoran et al., 2024, Asai et al., 2023]. Moreover, in real-world applications it is hard to know upfront what the value of k should be, and re-rankers often lack interpretability, which complicates their deployment in sensitive domains. While some efforts have been made to automate the selection of k [Chen et al., 2017, Ren et al., 2025], these methods have had limited success due to their inability to explain the selection of evidence used to generate the final text. Dynamic retrieval methods such as RankRAG [Yu et al., 2024] and Self-RAG [Asai et al., 2023] improve adaptability but lack interpretability. In contrast, METEORA replaces top-k heuristics with rationale-grounded selection, enabling query-specific and explainable document filtering.
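The precision@k and recall@k numbers reported throughout our evaluation tables can be computed as follows. This is an illustrative helper, not the authors' evaluation code; the function name and arguments are ours.

```python
def precision_recall_at_k(selected, gold, k):
    """P@k: fraction of the k selected chunks that are gold evidence.
    R@k: fraction of gold evidence recovered within the top k."""
    top = set(selected[:k])          # the k highest-ranked selections
    hits = len(top & set(gold))      # selections that are gold evidence
    return hits / k, hits / len(gold)
```

For a fixed top-k heuristic, k must be chosen in advance: too small and recall suffers, too large and precision collapses, which is exactly the trade-off the P@k/R@k columns in Tables 8 and 10 expose.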
Interpretability in RAG. Interpretability is often absent in RAG pipelines. Recent efforts like MIRAGE [Qi et al., 2024] and Rationale-first Retrieval [Sohn et al., 2025] introduce rationales or attribution signals, but these are not used consistently across the full pipeline. In METEORA, we define end-to-end rationale integration by using rationales not only for selecting evidence, but also for verifying it before generation. Unlike Sohn et al. [2025], which still relies on downstream re-ranking and lacks verification, METEORA eliminates re-ranking entirely and applies rationale-derived instructions in both the selection and filtering stages, offering interpretable decisions.

Reliability in RAG. RAG systems are susceptible to adversarial attacks and the retrieval of noisy evidence [Verma et al., 2024, Xue et al., 2024]. Methods like entailment filtering [Yoran et al., 2024] and ensemble retrieval offer partial solutions. These approaches address only specific vulnerabilities while leaving others unaddressed: entailment filtering can be overly strict and discard valuable information that does not meet formal entailment criteria, while ensemble methods add complexity and computational overhead without fundamentally solving the semantic verification problem. METEORA, on the other hand, adds semantic filtering by requiring a coherent rationale per evidence item, helping discard poisoned evidence before generation.

Feedback-Based Optimization. Policy optimization methods like PPO [Ouyang et al., 2022], RLHF, and GRPO [Yang et al., 2023] have been used to align LLMs with human preferences, but rely on complex reward models and unstable training dynamics. In contrast, DPO [Rafailov et al., 2023b] offers a simpler, supervised alternative that directly optimizes preferred outputs. While prior methods focus on generation helpfulness or safety, we demonstrate that DPO is well-suited for rationale generation, where precision and domain adherence are crucial.
PPO and RLHF suffer from costly reward modeling, opaque optimization processes, and vulnerability to data poisoning, whereas GRPO reduces but does not eliminate these issues. DPO overcomes these limitations by providing explicit, interpretable reasoning paths for evidence selection, making the rationale generation process not only more transparent but also more robust against poisoned content. We apply DPO in METEORA to produce rationales that align with domain-specific expectations while avoiding the pitfalls of traditional policy gradients.

A.2 Qualitative Examples Showcasing METEORA's Interpretable and Explainable Nature

Positive Example

Explainability: How the Answer Was Formed. Interpretability: Why This Evidence Was Selected.

Query: Consider the Marketing Affiliate Agreement between Birch First Global Investments Inc. and Mount Knowledge Holdings Inc.; what is the governing law for this contract?

Rationales:
•Identify provisions that determine jurisdiction or applicable law, especially under dispute resolution or choice of law clauses.
•Check "Assignment/Change of Control" sections for mentions of governing law.

Selected evidence: "This Agreement is accepted by Company in the State of Nevada and shall be governed by and construed in accordance with the laws thereof, which laws shall prevail in the event of any conflict."

Generated Answer: The governing law for this contract is the laws of the State of Nevada. This is stated in Section 26 of the agreement: "This Agreement is accepted by Company in the State of Nevada and shall be governed by and construed in accordance with the laws..."

GPT-4o Evaluation: Accuracy
Score: 1. Analysis: The answer correctly identifies the governing law and references the exact clause, showing clear alignment between rationale, context, and final output.

Figure 7: Positive example demonstrating how METEORA links rationale-based evidence selection (interpretability) with rationale-grounded answer generation (explainability), resulting in a correct and traceable response to a legal question.

Negative Example

Explainability: Why the Answer Was Incorrect. Interpretability: Why This Evidence Was Selected.

Query: Consider the Marketing Affiliate Agreement between Birch First Global Investments Inc. and Mount Knowledge Holdings Inc.; is there a cap on liability under this contract?

Rationales:
•Look for sections discussing liability caps or clauses stating a maximum limit of responsibility for either party.
•Search for phrases like "shall not exceed," "limited to," or "capped at" in liability or indemnity sections.
•Focus on sections near "Dispute Resolution" or "Indemnification", common locations for liability limitations.

Correct evidence (not selected): "Company is not liable for incidental, special or consequential damages for any reason [...]. Company's liability shall not exceed the fees that MA has paid under this Agreement."

Generated Answer: There is no clear cap on liability under this contract; however, according to paragraph 15, Company's liability "shall not exceed the fees..."

GPT-4o Evaluation: Accuracy Score: 0. Analysis: The generated answer inaccurately states that there is no clear cap on liability and does not fully convey that the company's liability is capped at the fees paid under the agreement. The reference answer provides a clear and specific cap on liability, which is missing in the generated answer.

Figure 8: Negative example showing how METEORA enables transparent error tracing.
Although the correct evidence was not selected, the rationale trail provides insight into why the incorrect evidence was selected and how it influenced the erroneous answer.

A.3 Prompts Used in Experiments

Few-Shot Prompt Template for Rationale Generation

Instructions: Generate semantic rationales to guide query-specific information selection from domain-specific documents. Each rationale should:
•Represent a unique semantic search strategy grounded in the query.
•Be concise, concrete, and tailored to the scientific, legal, or financial context.
•Help extract precise and targeted evidence from long-form documents.
•Avoid redundancy across rationales.

Formatting Guidelines:
•Use XML-style tags: <rationale_1>, <rationale_2>, etc.
•Include a brief description in square brackets.
•Follow with a strategic, query-specific rationale sentence.

Example Query: What are the limitations of this approach?

Example Response (truncated):
<rationale_1>[Locate explicit limitation sections] Look for sections explicitly titled "Limitations," "Threats to Validity," or "Shortcomings" which directly enumerate the authors' acknowledged limitations......</rationale_1>
...
<rationale_10>[Review human evaluation or annotation caveats] If any part of the work relies on human judgment, authors may mention subjectivity or annotator disagreement as limitations.....</rationale_10>

Verifier Prompt

Input:
•Query: {query}
•Flagging Instructions: {rationales}
•Previous Chunk Summaries: {chunk_summaries}
•Chunk to Verify: {chunk_text}

Instructions:
•Flag clear factual errors only if highly confident (>90%).
•Detect contradictions with prior verified chunks.
•Assume validity unless strong evidence to the contrary exists.

Response Format:
{
  "flagged": false,
  "chunk_summary": "This chunk discusses ...",
  "flag_types": []
}

Knowledge Poisoning Prompt with Example

Prompt: You are given a question and the correct context. Your task is
to: (1) write an incorrect but plausible answer in the same style as the correct answer; (2) create a poisoned_corpus (∼100 words) that might mislead a model into selecting the incorrect answer........

Question: Consider "Fiverr's" privacy policy; who can see the jobs that I post?

Correct Context: In other words, when you access the Site we are aware of your usage of the Site, and may gather, collect and record the information relating to such usage, including geo-location information, IP address, device and connection information, browser information and web-log information, and all communications recorded by Users through the Site.

Poisoned Corpus: Fiverr assures its users that their data is not only secure but private. When you interact with the platform, job postings and communications are secured using TLS encryption. Fiverr clearly states that it will never sell, rent, or share user content without explicit agreement. This gives users peace of mind when they publish job posts or engage with freelancers. In other words, your job posts are only visible to you and are never shared with other users, freelancers, or third parties, ensuring total privacy and anonymity.

A.4 DPO Implementation Details

We use Direct Preference Optimization (DPO) to fine-tune a general-purpose LLM to generate query-aligned rationales. The preference dataset is automatically constructed from the original QA corpus: for each query, rationales that led to correct evidence selection form the preferred output, while the others form the rejected output. No manual labeling is required. The model is trained using pairwise comparisons of effective and ineffective rationales. We train the model using LLaMA-3.1-8B-Instruct. The DPO loss (Equation 1 in section 2) is optimized over three epochs using cosine learning rate scheduling. Training and validation data are derived from a single annotated file using an 80/10/10 train-validation-test split.
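The pairwise DPO objective over (preferred, rejected) rationale pairs can be sketched as follows. This is a minimal illustration under our own simplifications, not the training code: inputs are the summed log-probabilities of each rationale under the policy being tuned and the frozen reference model, and β = 0.05 matches the configuration in Table 6.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.05):
    """DPO loss for one (preferred, rejected) rationale pair.

    pi_* / ref_* are sequence log-probabilities under the policy and
    the frozen reference model, respectively.
    """
    # Implicit reward margin between the preferred and rejected rationale
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # -log sigmoid(beta * margin), written stably via log1p
    return math.log1p(math.exp(-beta * margin))
```

Minimizing this loss pushes the policy to assign relatively more probability to rationales that led to correct evidence selection; when policy and reference agree, the loss sits at log 2 and no update pressure remains.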
Table 6: DPO training configuration used for rationale refinement.

Parameter                    Value
Base model                   LLaMA-3.1-8B-Instruct
Batch size (train / eval)    1 / 1
Gradient accumulation steps  2
Epochs                       3
Learning rate                3e-5
Scheduler                    Cosine
Warmup ratio                 0.1
DPO loss β                   0.05
Train / Val / Test split     80% / 10% / 10%
Save strategy                Per epoch
Best model selection metric  eval_rewards/chosen

A.5 Elbow Detection in the Pooled Rationale Component of ECSE

To identify evidence that aligns with the collective intent of all rationales, we compute a pooled embedding r̄ = (1/|R|) Σ_{r_i ∈ R} SBERT(r_i) and calculate the cosine similarity between r̄ and each evidence embedding. This produces a sorted sequence of similarity scores {s_1, s_2, ..., s_n}. We first compute the first-order differences Δ_i = s_i − s_{i+1} and apply z-score normalization across {Δ_i} to highlight sharp changes in similarity. The selection index k* is identified at the first point where the drop in similarity significantly deviates from the average pattern, indicating a natural boundary between highly relevant and less relevant chunks. In cases where similarity scores decline uniformly and no clear deviation is found, we fall back to computing the second-order differences ∇²_i = Δ_{i+1} − Δ_i. We then choose the index of maximum curvature, which reflects the sharpest transition in the similarity landscape. The selected top-k* chunks, denoted as E_g, are thus derived without relying on manually defined thresholds, enabling an adaptive and data-driven cutoff
across varying distributions.

A.6 Compute Time Comparison Across Methods

All experiments were conducted on NVIDIA L40s GPUs to ensure consistency across evaluations. We report the average compute time (in seconds) per query for each method, assessed across all datasets and evidence sizes. For LLM-based methods like RankRAG and METEORA, we use batching with a batch size of 5.

Table 7: Average compute time per query (in seconds), measured across all datasets and evidence sizes. All experiments were conducted on NVIDIA L40s.

Metric          SBERT   Cross-Encoder  Contriever  RankRAG  METEORA
Time per query  0.0209  0.0248         0.0223      18.8791  22.6098

A.7 Results Across All Evidence Sizes

We report precision, recall, and generation performance for all evaluated chunk sizes across datasets to complement the main results presented in the paper.

Table 8: CP Task Results across Datasets and Evidence Sizes. Values are P@k / R@k, with k per dataset as indicated.

Evidence Size = 128 (Contract-NLI: @7, PrivacyQA: @10, CUAD: @24, MAUD: @43, QASPER: @22)
Model            Contract-NLI  PrivacyQA   CUAD        MAUD        QASPER
SBERT            0.17 0.78     0.12 0.61   0.04 0.59   0.01 0.03   0.10 0.86
Contriever       0.16 0.81     0.11 0.55   0.04 0.27   0.01 0.14   0.11 0.90
Cross-Encoder    0.17 0.83     0.11 0.51   0.03 0.50   0.01 0.15   0.11 0.89
Finetuned-SBERT  0.17 0.81     0.09 0.59   0.04 0.59   0.01 0.14   0.11 0.88
RankRAG          0.12 0.77     0.09 0.57   0.04 0.49   0.01 0.12   0.09 0.61
METEORA          0.18 0.89     0.13 0.84   0.06 0.78   0.02 0.39   0.11 0.90

Evidence Size = 256 (Contract-NLI: @5, PrivacyQA: @8, CUAD: @17, MAUD: @37, QASPER: @14)
Model            Contract-NLI  PrivacyQA   CUAD        MAUD        QASPER
SBERT            0.25 0.88     0.17 0.81   0.07 0.76   0.01 0.37   0.17 0.93
Contriever       0.24 0.85     0.15 0.68   0.06 0.68   0.01 0.27   0.18 0.95
Cross-Encoder    0.25 0.89     0.16 0.74   0.06 0.64   0.01 0.31   0.16 0.94
Finetuned-SBERT  0.25 0.89     0.13 0.59   0.07 0.68   0.01 0.36   0.17 0.94
RankRAG          0.19 0.73     0.11 0.69   0.05 0.76   0.01 0.22   0.13 0.79
METEORA          0.25 0.98     0.18 0.92   0.08 0.84   0.02 0.58   0.16 0.96

Evidence Size = 512 (Contract-NLI: @3, PrivacyQA: @6, CUAD: @12, MAUD: @33, QASPER: @8)
Model            Contract-NLI  PrivacyQA   CUAD        MAUD        QASPER
SBERT            0.38 0.91     0.24 0.89   0.11 0.78   0.03 0.41   0.26 0.91
Contriever       0.37 0.89     0.23 0.82   0.10 0.73   0.01 0.46   0.25 0.94
Cross-Encoder    0.38 0.91     0.22 0.81   0.10 0.71   0.02 0.50   0.27 0.94
Finetuned-SBERT  0.38 0.92     0.21 0.75   0.11 0.76   0.02 0.46   0.26 0.92
RankRAG          0.23 0.77     0.19 0.86   0.07 0.60   0.01 0.22   0.19 0.76
METEORA          0.35 1.00     0.23 0.98   0.12 0.93   0.03 0.72   0.26 0.99

Table 9: Verifier Flag Distribution (Excluding FinQA). Percentage of flagged poisoned evidence categorized by instruction, contradiction, and factual violations across evidence sizes.

Flag Type      QASPER  Contract-NLI  PrivacyQA  CUAD   MAUD
Evidence Size = 128
Instruction    23.67   57.80         34.80      20.37  43.93
Contradiction  0.07    0.21          0.21       2.23   0.11
Factual        9.27    3.96          3.96       2.44   0.02
Evidence Size = 256
Instruction    35.30   52.48         54.93      20.79  40.60
Contradiction  0.10    0.07          0.39       1.86   0.46
Factual        7.59    2.39          4.68       2.18   0.00
Evidence Size = 512
Instruction    32.13   42.62         43.20      24.55  46.08
Contradiction  0.23    0.24          0.64       0.31   0.00
Factual        9.62    1.13          5.15       1.14   0.00

Table 10: Ablation study of the ECSE pipeline, evaluating the effect of the Pairing, Pooling, and Expansion stages across datasets and evidence sizes. Pooling improves recall over Pairing alone, and adding Expansion further enhances performance. The complete
ECSE pipeline achieves the best balance between precision and recall, particularly at larger evidence sizes. Components Contract-NLI PrivacyQA CUAD MAUD QASPER Evidence Size = 128 P R P R P R P R P R Pairing 0.21 0.780.150.450.090.60 0.01 0.19 0.200.69 Pairing + Pooling 0.20 0.91 0.14 0.64 0.08 0.64 0.01 0.25 0.14 0.81 Pairing + Pooling + Expansion 0.18 0.95 0.130.840.060.78 0.02 0.39 0.110.90 Evidence Size = 256 P R P R P R P R P R Pairing 0.26 0.91 0.210.670.110.620.030.390.250.79 Pairing + Pooling 0.27 0.94 0.19 0.83 0.10 0.69 0.02 0.50 0.20 0.90 Pairing + Pooling + Expansion 0.25 0.98 0.180.920.080.840.020.580.160.96 Evidence Size = 512 P R P R P R P R P R Pairing 0.38 0.950.260.710.150.700.030.420.390.81 Pairing + Pooling 0.36 0.97 0.25 0.91 0.14 0.79 0.02 0.50 0.32 0.93 Pairing + Pooling + Expansion 0.35 1.00 0.230.980.120.930.020.780.260.99 19 Table 11: Corpus poisoning detection performance across datasets and evidence sizes. METEORA shows strong resilience, outperforming perplexity-based defenses in recall, especially at larger evidence sizes. Method Contract-NLI PrivacyQA CUAD MAUD QASPER Evidence Size = 128 Precision Recall Precision Recall Precision Recall Precision Recall Precision Recall No Defense 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Perplexity 0.69 0.22 0.19 0.17 0.19 0.22 0.46 0.12 0.09 0.10 METEORA 0.62 0.55 0.37 0.18 0.25 0.31 0.44 0.42 0.29 0.33 Evidence Size = 256 No Defense 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Perplexity 0.54 0.20 0.29 0.18 0.18 0.10 0.31 0.14 0.27 0.15 METEORA 0.55 0.72 0.60 0.40 0.24 0.38 0.41 0.46 0.43 0.53 Evidence Size = 512 No Defense 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Perplexity 0.25 0.05 0.23 0.11 0.14 0.07 0.10 0.04 0.18 0.08 METEORA 0.44 0.61 0.49 0.48 0.26 0.45 0.46 0.34 0.42 0.46 Table 12: FinQA analysis across evidence sizes: CP task performance, ablation of ECSE, verifier flag contribution, and poisoning detection. 
CP Performance (P@13 / R@13):

Model            | Size = 32  | Size = 64
SBERT            | 0.08  0.88 | 0.12  0.96
Contriever       | 0.07  0.91 | 0.11  0.98
CrossEnc         | 0.09  0.83 | 0.12  0.97
Fine-Tuned SBERT | 0.08  0.90 | 0.13  0.97
RankRAG          | 0.06  0.78 | 0.08  0.89
METEORA          | 0.07  0.89 | 0.12  0.95

ECSE Ablation (P@13 / R@13):

Stage       | Size = 32  | Size = 64
Pairing     | 0.10  0.67 | 0.19  0.84
+ Pooling   | 0.08  0.82 | 0.14  0.91
+ Expansion | 0.07  0.89 | 0.12  0.95

Verifier Flags (%):

Flag          | Size = 32 | Size = 64
Instruction   | 32.02     | 37.02
Contradiction | 1.64      | 1.84
Factual       | 0.32      | 0.11

Poisoning Detection (Prec. / Rec.):

Method     | Size = 32  | Size = 64
No Defense | 0.00  0.00 | 0.00  0.00
Perplexity | 0.13  0.05 | 0.18  0.07
METEORA    | 0.34  0.34 | 0.39  0.48

B DPO Theory

We address the problem of aligning an LLM's rationale-generation capability with evidence selection using Direct Preference Optimization (DPO). We consider a setting where:

• We have a preference dataset consisting of document evidences
• For each user query, certain document evidences are labeled as relevant (positive) examples
• Other document evidences are considered irrelevant (negative) examples
• The goal is to train the LLM to select appropriate evidences by generating rationales that explain the relevance of evidences to the query

Unlike traditional RAG approaches that use re-ranking mechanisms based on similarity metrics, our approach enables the LLM to learn to select evidences through a process of rationale generation, providing transparency and better alignment with the final response generation. We provide a rigorous mathematical formulation of this problem and prove that DPO training leads to a policy that optimally selects contextual evidences based on generated rationales.

C Problem Formulation

Let us now formalize the problem of aligning rationale generation for contextual evidence selection using DPO.

C.1 Notation and Setup

We denote:

• Q: the set of all possible user queries
• E: the set of all possible contextual evidences in the knowledge base
• R: the set of all possible rationales explaining evidence relevance
• E+_q ⊂ E: the set of relevant evidences for query q
• E−_q ⊂ E: the set of irrelevant evidences for query q
• π_θ(e, r | q): the LLM policy parameterized by θ, giving the joint probability of selecting evidence e and generating rationale r given query q
• π_ref(e, r | q): the reference (initial) policy

We can decompose the joint probability as:

π_θ(e, r | q) = π_θ(e | q) · π_θ(r | q, e)    (3)

where π_θ(e | q) is the probability of selecting evidence e for query q, and π_θ(r | q, e) is the probability of generating rationale r given query q and selected evidence e.

C.2 Preference Model

We assume there exists an ideal reward function r*(q, e, r) that captures the appropriateness of both the selected evidence and the generated rationale. This reward function should assign higher values to relevant evidences with convincing rationales and lower values to irrelevant evidences or unconvincing rationales. We model this as:

r*(q, e, r) = f(relevance(e, q), quality(r, q, e))    (4)

where relevance(e, q) measures how relevant the evidence e is to query q, and quality(r, q, e) measures how well the rationale r explains the relevance of evidence e to query q.
The function f combines these measures. A simple form could be:

r*(q, e, r) = α · relevance(e, q) + γ · quality(r, q, e)    (5)

where α, γ > 0 are weights that balance the importance of evidence relevance and rationale quality.

C.3 Deriving the DPO Objective for Rationale-Based Evidence Selection

To apply DPO, we need preference data in the form of (q, (e_w, r_w), (e_l, r_l)) tuples, where (e_w, r_w) is preferred over (e_l, r_l). In our setting, we can generate these tuples as follows:

• q: a user query
• (e_w, r_w): a relevant evidence with a high-quality rationale explaining its relevance
• (e_l, r_l): an irrelevant evidence with a rationale attempting to explain its relevance

Given such tuples, the DPO objective becomes:

L_DPO(π_θ; π_ref) = −E_{(q,(e_w,r_w),(e_l,r_l))} [ log σ( β log (π_θ(e_w, r_w | q) / π_ref(e_w, r_w | q)) − β log (π_θ(e_l, r_l | q) / π_ref(e_l, r_l | q)) ) ]    (6)

D Theoretical Analysis

We now prove that optimizing the DPO objective leads to an aligned policy that selects appropriate contextual evidences through rationale generation.

Let π_θ be a policy trained using the DPO objective with preference data derived from relevant and irrelevant evidences with their corresponding rationales. Under certain regularity conditions, as the amount of preference data increases, π_θ converges to:

π*(e, r | q) ∝ π_ref(e, r | q) exp( (1/β) r*(q, e, r) )    (7)

where r*(q, e, r) is the true reward function capturing the appropriateness of evidence selection and rationale generation based on the query.

Proof. We proceed with the proof step by step.

Step 1: Recall from Rafailov et al. [2023a] that DPO is derived from the following principle: in the Bradley-Terry preference model, the probability that response (e_w, r_w) is preferred over (e_l, r_l) given query q is:

P((e_w, r_w) ≻ (e_l, r_l) | q) = σ( r(q, e_w, r_w) − r(q, e_l, r_l) )    (8)

where r(q, e, r) is the reward function and σ is the logistic function.

Step 2: The optimal policy given a reward function r and a reference policy π_ref is:

π_r(e, r | q) ∝ π_ref(e, r | q) exp( (1/β) r(q, e, r) )    (9)

This follows from the constrained optimization problem of maximizing expected reward subject to a KL-divergence constraint with the reference policy.

Step 3: Plugging the optimal policy form into the Bradley-Terry model, we get:

P((e_w, r_w) ≻ (e_l, r_l) | q) = σ( r(q, e_w, r_w) − r(q, e_l, r_l) )    (10)
                              = σ( β log (π_r(e_w, r_w | q) / π_ref(e_w, r_w | q)) − β log (π_r(e_l, r_l | q) / π_ref(e_l, r_l | q)) )    (11)

Step 4: The DPO objective trains π_θ to match these probabilities by minimizing:

L_DPO(π_θ; π_ref) = −E_{(q,(e_w,r_w),(e_l,r_l))} [ log σ( β log (π_θ(e_w, r_w | q) / π_ref(e_w, r_w | q)) − β log (π_θ(e_l, r_l | q) / π_ref(e_l, r_l | q)) ) ]    (12)

Step 5: As the amount of preference data increases, assuming the preference data accurately reflects the true reward function r*(q, e, r) that values relevant evidences with convincing rationales over irrelevant ones, minimizing the DPO loss drives π_θ towards the optimal policy:

π_θ(e, r | q) → π_{r*}(e, r | q) ∝ π_ref(e, r | q) exp( (1/β) r*(q, e, r) )    (13)

Step 6: We can further analyze the joint probability by decomposing it:

π*(e, r | q) ∝ π_ref(e | q) π_ref(r | q, e) exp( (1/β) r*(q, e, r) )    (14)

Step 7: Since r*(q, e, r) rewards both chunk relevance and rationale quality, the resulting policy will select evidences and generate rationales that are aligned with the true relevance of evidences to the query. This means the policy learns to select evidences based on their relevance through a process of rationale generation rather than simple re-ranking.
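Per example, the objective in Step 4 reduces to a negative log-sigmoid of a scaled log-ratio margin. A minimal numerical sketch in plain Python, assuming the four sequence log-probabilities have already been computed by the policy and reference models (function and argument names are ours, not the paper's):

```python
import math

def dpo_loss(logp_w, ref_logp_w, logp_l, ref_logp_l, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (margin_w - margin_l)),
    where each margin is the policy-vs-reference log-ratio of one
    (evidence, rationale) pair."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigma(margin)

# When the policy already prefers the winning pair, the loss is small;
# when it prefers the losing pair, the loss is large.
confident = dpo_loss(logp_w=-5.0, ref_logp_w=-10.0,
                     logp_l=-12.0, ref_logp_l=-10.0, beta=1.0)
confused = dpo_loss(logp_w=-12.0, ref_logp_w=-10.0,
                    logp_l=-5.0, ref_logp_l=-10.0, beta=1.0)
assert confident < confused
```

With a zero margin the loss is log 2, the same chance-level value as in standard binary cross-entropy, which is a quick sanity check when wiring this into a training loop.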
If the preference dataset accurately reflects the relevance of evidences to queries along with appropriate rationales, then the DPO-trained policy will select contextual evidences that are most relevant to the query while generating rationales that justify this selection. The DPO-trained policy provides advantages over traditional re-ranking in RAG systems by:

1. Jointly optimizing evidence selection and rationale generation
2. Providing transparent explanations for why specific evidences were selected
3. Learning complex relevance patterns beyond simple similarity metrics

E Algorithm

We now describe a step-by-step procedure for implementing DPO for rationale-based evidence selection:

Algorithm 1: DPO for Rationale-Based Chunk Selection

Require: base LLM π_ref, dataset of user queries {q_i}, relevant chunks {E+_{q_i}}, irrelevant chunks {E−_{q_i}}
Ensure: aligned LLM π_θ that selects evidences with rationales

1:  Initialize π_θ ← π_ref
2:  Construct preference dataset P = {(q_i, (e_{w,i}, r_{w,i}), (e_{l,i}, r_{l,i}))}:
3:  for each query q_i do
4:    Sample relevant evidence e_{w,i} from E+_{q_i}
5:    Generate rationale r_{w,i} explaining the relevance of e_{w,i} to q_i
6:    Sample irrelevant evidence e_{l,i} from E−_{q_i}
7:    Generate rationale r_{l,i} attempting to explain the relevance of e_{l,i} to q_i
8:    Add the tuple (q_i, (e_{w,i}, r_{w,i}), (e_{l,i}, r_{l,i})) to P
9:  end for
10: Train π_θ by minimizing:
    L_DPO(π_θ; π_ref) = −E_{(q,(e_w,r_w),(e_l,r_l)) ∼ P} [ log σ( β log (π_θ(e_w, r_w | q) / π_ref(e_w, r_w | q)) − β log (π_θ(e_l, r_l | q) / π_ref(e_l, r_l | q)) ) ]
11: return π_θ

F Measuring Chunk Relevance and Rationale Quality

The functions relevance(e, q) and quality(r, q, e) can be implemented in various ways:

• Evidence relevance:
  – semantic similarity between query q and evidence e using embeddings
  – BM25 or other lexical similarity measures
  – entailment scores from an NLI model
  – human relevance judgments
• Rationale quality:
  – coherence measures (how well the rationale flows logically)
  – factual consistency with the evidence content
  – specificity to the query (rather than generic explanations)
  – explanatory power (does it actually explain why the evidence is relevant?)
  – human judgments of explanation quality

G Rationale Fabrication

The model might generate convincing-sounding but factually incorrect rationales to justify the selection of irrelevant chunks. Mitigation: include factual consistency metrics in the training process and explicitly penalize fabricated or misleading rationales.

H Distribution Shift

The distribution of queries and evidences during deployment may differ from those in the training data. Mitigation: include a diverse range of queries and evidence types in the training data, and implement continual learning mechanisms to adapt to new domains.

H.1 Retrieval-Augmented Generation (RAG)

The RAG architecture enhances generative models by incorporating additional context from a knowledge base, thereby improving response accuracy. Given a query q, a knowledge base of documents D = {d_1, d_2, ..., d_n}, a retriever function F(q, D) → D_q ⊂ D, a re-ranker function Re(q, D_q) that picks the top-k documents, and a generative model M, the final generation g is given by:

g = M(q, Re(q, F(q, D)))

H.2 Current Re-Ranking Mechanism

A re-ranker Re computes the semantic similarity between a query q and a set of retrieved documents D_q, and then ranks the documents by relevance. In top-k SBERT-based re-ranking, the query and each document are encoded independently, and cosine similarity is used to score relevance:

Re_SBERT(q, D_q) = { cos(SBERT(q), SBERT(d)) | d ∈ D_q }

In top-k cross-encoders, the query and each document are jointly encoded, producing a scalar relevance score:

Re_Cross(q, D_q) = { CrossEncoder(q, d) | d ∈ D_q }

Top-k Contriever also encodes the query and documents independently, using a contrastive learning objective:

Re_Contriever(q, D_q) = { cos(Contriever(q), Contriever(d)) | d ∈ D_q }

While effective for ranking based on surface-level similarity, these methods have three key limitations. First, they require manual tuning of k, which is often done by trial and error. Second, using a fixed k for all queries in a domain may include less-relevant chunks in the top-k, which can negatively impact downstream generation. Third, these methods lack interpretability and provide no mechanisms to detect or filter adversarial or misleading content.
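The bi-encoder style re-ranking described above (SBERT, Contriever) scores each candidate independently against the query and keeps a fixed top-k. A toy sketch with hand-made 3-d vectors standing in for real encoder outputs (no actual SBERT call; names are illustrative):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rerank_topk(query_vec, doc_vecs, k):
    """Bi-encoder re-ranking: score each candidate independently by cosine
    similarity to the query, then keep a fixed top-k (the manually tuned
    cutoff the section criticizes)."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d "embeddings" standing in for encoder outputs.
q = [1.0, 0.2, 0.0]
docs = {"d1": [0.9, 0.1, 0.0], "d2": [0.0, 1.0, 0.0], "d3": [1.0, 0.3, 0.1]}
print(rerank_topk(q, docs, k=2))  # the two documents most aligned with q
```

A cross-encoder differs only in the scoring function: it would replace `cosine(query_vec, doc_vec)` with a joint forward pass over the (query, document) pair; the fixed-k cutoff, and its drawbacks, stay the same.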
arXiv:2505.16022v1 [cs.CL] 21 May 2025

NOVER: Incentive Training for Language Models via Verifier-Free Reinforcement Learning

Wei Liu⋆ Siya Qi⋆ Xinyu Wang⋆ Chen Qian♣ Yali Du⋆♠ Yulan He⋆♠B
⋆King's College London ♠The Alan Turing Institute ♣Shanghai Jiao Tong University
{wei.4.liu, yali.du, yulan.he}@kcl.ac.uk

Abstract

Recent advances, such as DeepSeek R1-Zero, highlight the effectiveness of incentive training, a reinforcement learning paradigm that computes rewards solely based on the final-answer part of a language model's output, thereby encouraging the generation of intermediate reasoning steps. However, these methods fundamentally rely on external verifiers, which limits their applicability to domains like mathematics and coding, where such verifiers are readily available. Although reward models can serve as verifiers, they require high-quality annotated data and are costly to train. In this work, we propose NOVER, NO-VERifier Reinforcement Learning, a general reinforcement learning framework that requires only standard supervised fine-tuning data, with no need for an external verifier. NOVER enables incentive training across a wide range of text-to-text tasks and outperforms the model of the same size distilled from large reasoning models such as DeepSeek R1 671B by 7.7%. Moreover, the flexibility of NOVER enables new possibilities for optimizing large language models, such as inverse incentive training.¹

1 Introduction

Recent progress in Large Language Model (LLM) reasoning has been accelerated by incentive training (Guo et al., 2025; Xie et al., 2025; Yu et al., 2025; Zeng et al., 2025; Liu et al., 2025a; Yuan et al., 2025b; Hu et al., 2025), a new Reinforcement Learning (RL) paradigm which optimizes models by computing rewards only on the final-answer part of the model response, and incentivizes models to generate intermediate tokens, such as reasoning steps, spontaneously.
Notably, methods such as DeepSeek R1-Zero (Guo et al., 2025) have shown that, in domains like mathematics and coding, using a simple rule-based verifier to calculate the reward and performing RL-only incentive training can achieve impressive performance, a setting known as Reinforcement Learning with Verifiable Reward (RLVR). It has been demonstrated that such incentivized reasoning ability can generalize to areas like multi-modality (Shen et al., 2025; Zhang et al., 2025; Yang et al., 2025; Feng et al., 2025b) or language agents (Xia and Luo, 2025; Jin et al., 2025; Song et al., 2025; Wang et al., 2025; Feng et al., 2025a).

¹Code available at https://github.com/thinkwee/NOVER

Figure 1: Comparison of NOVER with other post-training paradigms (SFT; RLHF with a trained reward model; RLVR with a rule-based verifier; NOVER with a synced proxy model).

However, the success of incentive training with RLVR hinges on external verifiers that can judge the correctness of model outputs. For many highly contextualized and hard-to-grade tasks requiring similar reasoning or cognitive capability, ranging from social behaviour analysis to creative writing, such verifiers are infeasible or hard to construct (Weng, 2025). Recent works have explored training general-purpose large verifier models across diverse domains (Ma et al., 2025; Seed et al., 2025; Su et al., 2025).
However, building accurate verifier models involves a complex pipeline, which is significantly more costly than traditional reward models for preference alignment (Liu et al., 2025b). Moreover, once trained, these verifier models demand substantial computational resources to be deployed during the subsequent RL training.

Figure 2: Examples of Qwen2.5-7B-NOVER on a range of text-to-text tasks (a physics discussion, social reasoning, story generation for children with autism, and Spanish-to-French translation), demonstrating its ability to handle open-ended questions such as "Discuss", "Why", or "Create". These tasks often require free-form answers that are difficult to verify with clear-cut labels, posing challenges for incentive training.

To address these limitations of RLVR, we propose NOVER, NO-VERifier Reinforcement Learning, a novel framework for incentive training without an external verifier. As shown in Figure 1, compared with traditional Supervised Fine-Tuning (SFT), NOVER can perform incentive training similar to RLVR; compared to Reinforcement Learning from Human Feedback (RLHF), NOVER does not need a trained reward model for verification; and compared with RLVR, NOVER extends incentive training to areas where a reliable verifier is hard to acquire. NOVER requires only standard SFT data and utilizes the model itself to build a reward proxy for lightweight RL training. By calculating a perplexity-based reward on the model's reasoning process, it enables incentive-driven reinforcement learning across a wide range of text-to-text tasks, as shown in Figure 2.

Our main contributions are as follows:

1. We introduce NOVER, a verifier-free incentive training framework that generalizes RLVR to arbitrary text-to-text tasks with minimal SFT data.
2. Our experiments and analysis demonstrate NOVER's superior performance. It is stable to train compared with model-as-verifier methods, mitigates reward hacking, shapes reasoning patterns, and adapts to diverse tasks.

3. NOVER's flexibility enables imaginative approaches such as inverse incentive training. This paradigm teaches a model how to fish rather than simply giving it a fish, surpassing standard incentive training on tasks that need creativity.

2 Related Work

Language Model Reasoning. Early research designed prompting techniques and workflows based on human cognitive priors to enhance the reasoning capabilities of LLMs. Chain-of-Thought (CoT) prompting (Wei et al., 2022) enabled step-by-step reasoning, later extended by Zero-Shot CoT (Kojima et al., 2022) and Self-Consistency (Wang et al., 2023b). More recent methods, such as Tree of Thoughts (Yao et al., 2023), Least-to-Most (Zhou et al., 2023), Plan-and-Solve (Wang et al., 2023a), Sketch-Navigation (Liu et al., 2024), and Multi-Perspective Self-Reflection (Yan et al., 2024), introduced structured exploration over reasoning trajectories. Beyond prompting, verification-based approaches were proposed (Cobbe et al., 2021) to solve math reasoning. Neuro-symbolic methods (Pan et al., 2023) fuse LLMs with symbolic solvers. Previous works also utilize Process Reward Models (Lightman et al., 2024) combined with Monte Carlo Tree Search (Kocsis and Szepesvári, 2006) for step-level exploration to replicate OpenAI's o1 (Jaech et al., 2024).

Incentive Training. In contrast to the above methods, DeepSeek-R1 (Guo et al., 2025) proposed a simpler paradigm called incentive training. Building on this idea, recent works such as Logic-RL (Xie et al., 2025), DAPO (Yu et al., 2025), SimpleRL (Zeng et al., 2025), OpenReasoner (Hu et al., 2025), Dr.GRPO (Liu et al., 2025a), and VAPO (Yuan et al., 2025b) investigated best practices on aspects such as exploration-exploitation, the emergence of "aha" moments, and task-difficulty variance. Several recent efforts attempted to extend incentive training beyond math and coding (He et al., 2025; Lu et al., 2025; Gurung and Lapata, 2025; Su et al., 2025; Ma et al., 2025). However, these approaches often rely on domain-specific rules for verification or involve training large verifier models, which can be computationally expensive.

3 Background

Rule-based Reward. Given a training prompt p with a template (Guo et al., 2025) asking the model to generate intermediate tokens t (e.g., reasoning steps) followed by a final answer a, incentive training aims to use RL to optimize a model π_θ with an outcome reward R_rule, which is computed solely on a by a rule-based verifier v : A → {0, 1}:

R_rule = v(a)    (1)

Such a sparse but accurate outcome reward encourages the model to autonomously generate intermediate tokens t that could lead to a better a. In reasoning tasks, for example, t may contain reasoning behaviours such as task decomposition or self-reflection. To ensure the verifier can easily parse outputs, DeepSeek-R1 introduces a tag-format reward R_f, which requires t to be enclosed in <think> tags and a in <answer> tags:

R_f = f_format(t, a)    (2)

where f_format : T × A → {0, 1} is the regular-expression-based format check function. The combined reward is then:

R(p, t, a) = w_rule · R_rule + w_f · R_f    (3)

GRPO. After the reward is calculated, Group Relative Policy Optimization (GRPO) (Shao et al., 2024) is used to optimize π_θ. For each prompt p, the model rolls out a group of completions C = {(t_1, a_1), ..., (t_G, a_G)}.
The group-normalized advantage is:

A_i = ( R(p, t_i, a_i) − μ_C ) / σ_C    (4)

where μ_C and σ_C are the mean and standard deviation of rewards in the group. The policy is then updated with a clipped objective:

J_GRPO(θ) = E_{p, t_i, a_i} [ (1/G) Σ_{i=1}^{G} r_i^clip − β D_KL(π_θ ∥ π_ref) ]    (5)

r_i^clip = min( r_i, clip(r_i, 1 − ϵ, 1 + ϵ) ) A_i    (6)

r_i = π_θ(t_i, a_i | p) / π_θ_old(t_i, a_i | p)    (7)

where ϵ is the clip ratio, β is the KL-penalty weight, and r_i is the policy ratio (Schulman et al., 2017). This framework enables LLMs to develop explicit, high-quality reasoning processes without dense supervision for intermediate steps.

4 Method

Figure 3: The overall process of NOVER. The policy model π_θ generates reasoning and answer tokens; a reward proxy built from a periodically synced proxy model π_p supplies reasoning, efficiency, and format rewards for GRPO training against the ground truth.

It can be seen in Equation 1 that the incentive training described above requires a verifier to compute R_rule. NOVER eliminates the need for a verifier by introducing reasoning perplexity as a reward proxy, which can be calculated on any SFT data.

4.1 Reasoning Perplexity as Proxy

The core idea of NOVER is that the perplexity of the ground-truth answer, conditioned on the model's reasoning trajectory, can serve as a natural proxy for the reward. Given a prompt p, the ground-truth answer g, and a model response consisting of reasoning tokens t and answer tokens
a, we compute the reasoning perplexity P_r using a proxy model π_p as follows:

P_r(p, t, g) = exp( − ( Σ_{i=1}^{|g|} log π_p(g_i | p, t, g_<i) ) / ( |g| · N(|t|) ) )    (8)

where |g| denotes the number of tokens in the ground truth, and N(|t|) is a simple normalization factor based on the length of the reasoning tokens, which alleviates the length bias of perplexity:

N(|t|) = max(1, 1 + log(|t|))    (9)

A lower reasoning perplexity P_r indicates a higher probability of generating the correct answer based on the reasoning tokens. As the calculation of P_r is done in a teacher-forcing way without any decoding, computing P_r requires only ∼5% of the total training time (see details in Appendix A). P_r can then be used to calculate the reward defined in §4.3.

4.2 Policy-Proxy Synchronization

Unlike recent work that employs an extra frozen model as the proxy (Gurung and Lapata, 2025), we instead use the policy model π_θ itself as the proxy π_p. This choice reflects their shared goal: minimizing the perplexity of correct answers given high-quality reasoning. Moreover, an extra frozen proxy can diverge from the evolving policy, leading to inconsistency (see §6.3). Importantly, using π_θ as π_p does not mean the model acts as both athlete and referee: π_p itself does not judge but leverages the objective ground truth g for fair evaluation. In practice, we initialize both π_θ and π_p from the same pretrained checkpoint and periodically sync π_θ to π_p every T_sync steps via exponential smoothing, following TR-DPO (Gorbatovski et al., 2025):

π_p ← α · π_p + (1 − α) · π_θ    (10)

where α ∈ [0, 1] denotes the synchronization rate. Such synchronization enables the proxy model to gradually adapt to improvements in the policy while ensuring a stable calculation of perplexity. Since we employ LoRA (Hu et al., 2022) for efficient training, π_θ and π_p share the same base model and can be switched seamlessly by replacing the LoRA adapter, which comprises only about 0.4% of the full model parameters.
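Equations (8)-(10) can be sketched in a few lines of plain Python, assuming the per-token log-probabilities of the ground-truth answer have already been obtained from one teacher-forced pass of the proxy (function names are ours, not the paper's):

```python
import math

def length_norm(num_reasoning_tokens):
    # N(|t|) = max(1, 1 + log|t|) (Eq. 9): discounts long reasoning traces.
    return max(1.0, 1.0 + math.log(num_reasoning_tokens))

def reasoning_perplexity(answer_token_logprobs, num_reasoning_tokens):
    """Eq. 8: perplexity of the ground-truth answer under the proxy,
    conditioned on the sampled reasoning, with the length normalizer.
    `answer_token_logprobs` holds log pi_p(g_i | p, t, g_<i)."""
    avg_nll = -sum(answer_token_logprobs) / len(answer_token_logprobs)
    return math.exp(avg_nll / length_norm(num_reasoning_tokens))

def sync(proxy_w, policy_w, alpha=0.9):
    """Eq. 10: exponential smoothing of proxy weights toward the policy,
    shown here on flat weight lists for illustration."""
    return [alpha * wp + (1 - alpha) * wt for wp, wt in zip(proxy_w, policy_w)]

# Reasoning that makes the answer more probable yields lower perplexity.
good = reasoning_perplexity([-0.1] * 5, num_reasoning_tokens=10)
bad = reasoning_perplexity([-2.0] * 5, num_reasoning_tokens=10)
assert good < bad
```

Note the direction of the normalizer: for a fixed answer likelihood, a longer reasoning trace shrinks the exponent and pulls P_r toward 1, so the raw P_r alone would favor longer traces; the efficiency reward of §4.3 pushes the other way.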
4.3 Verifier-Free Reward

Given a prompt p and its corresponding group of completions (t_i, a_i), we compute two rewards based on the reasoning perplexity P_r.

Reasoning Reward. The reasoning perplexity P_r can serve directly as a reward, as it reflects the quality of the reasoning trajectory. VR-CLI (Gurung and Lapata, 2025) and reasoning-advantage SFT (Foster et al., 2024) suggest calculating the relative perplexity gain to stabilize training, but we found that GRPO's group normalization already achieves a similar calculation (see Appendix E), so there is no need for such a redundant operation. In practice, we observe that relative perplexity improvements still vary widely across different samples, causing high variance in clip ratios and unstable training. To mitigate this, we propose to discretize reasoning perplexity into reasoning rewards R_r. For each completion (p_i, t_i, a_i) with P_r^i, we compute R_r based on its rank among n_valid valid completions. Let P = {P_r^1, P_r^2, ..., P_r^{n_valid}}, sorted as P_r^(1) ≤ ... ≤ P_r^(n_valid); reasoning rewards are assigned by quantile rank:

R_r(p_i, t_i, a_i) = ( n_valid − rank(P_r^i) + 1 ) / n_valid  if rank(P_r^i) ≤ k;  0 otherwise    (11)

In practice, we use either k = 1 or k = n_valid. The former assigns a reward of 1.0 only to the best (lowest-perplexity) completion, which is suitable for tasks with
objectively correct answers. The latter distributes rewards to all completions based on their normalized rank and is better suited for subjective or open-ended tasks.

Efficiency Reward. Recent work (Sui et al., 2025) suggests that longer reasoning is not always better, and that effective reasoning length is what matters. We introduce an efficiency reward R_e, encouraging the model to generate shorter but better reasoning trajectories. A completion should be rewarded if it achieves lower reasoning perplexity with fewer reasoning tokens compared to other completions. R_e is calculated as:

R_e(p_i, t_i, a_i) = ( Σ_{j=1, j≠i}^{n} 1[ P_r^i < P_r^j ∧ |t_i| < |t_j| ] ) / ( n_valid − 1 )    (12)

Notably, although R_e shares a similar target with R_r, in that lower perplexity is better, we cannot use R_e to replace R_r, since in the early training stage most completions cannot yet develop efficient reasoning to acquire R_e. These two rewards are then combined with the tag-format reward R_f in Equation 2 to form the final reward for GRPO training. The combined reward function is:

R_total = w_f R_f + I(R_f = 1) · (w_r R_r + w_e R_e)    (13)

where w_f, w_r, w_e are weights that control the relative importance of each reward component. All these rewards lie in the range [0, 1]. Notably, they are conditionally dependent rather than simply additive: only when the tag format is correct (R_f = 1) are the other two rewards calculated and added; otherwise, they are set to 0. We provide a detailed analysis of how this design effectively prevents the "curse of proxy" in §6.3. Although our experiments are conducted using GRPO for efficiency, NOVER can be combined with various RL algorithms, such as PPO (Schulman et al., 2017).

5 Experimental Setup

Dataset. To minimize data contamination from pretraining, we prioritized the use of the most recent datasets available. We focused on general reasoning datasets that require multi-step inference and span multiple domains, not limited to STEM.
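The rank-based reasoning reward of Eq. (11) and the pairwise efficiency reward of Eq. (12) in §4.3 above can be sketched as follows; this is a plain-Python illustration of the formulas, not the authors' code, and it assumes all completions in the group are valid (n = n_valid):

```python
def reasoning_rewards(perplexities, k):
    """Eq. 11: quantile-rank reward, where rank 1 = lowest reasoning
    perplexity. k=1 rewards only the best completion; k=n spreads the
    reward over all completions by rank."""
    n = len(perplexities)
    order = sorted(range(n), key=lambda i: perplexities[i])
    rank = {i: r + 1 for r, i in enumerate(order)}
    return [(n - rank[i] + 1) / n if rank[i] <= k else 0.0 for i in range(n)]

def efficiency_rewards(perplexities, lengths):
    """Eq. 12: fraction of the other completions that completion i beats on
    both perplexity (strictly lower) and reasoning length (strictly shorter)."""
    n = len(perplexities)
    return [
        sum(1 for j in range(n) if j != i
            and perplexities[i] < perplexities[j]
            and lengths[i] < lengths[j]) / (n - 1)
        for i in range(n)
    ]

# Three completions: the first is both the most likely and the shortest.
perps, lens = [1.2, 3.0, 2.0], [10, 50, 40]
assert reasoning_rewards(perps, k=1) == [1.0, 0.0, 0.0]
assert efficiency_rewards(perps, lens) == [1.0, 0.0, 0.5]
```

Combining these with the format reward as in Eq. (13) is then a gated sum: the reasoning and efficiency terms contribute only for completions whose `<think>`/`<answer>` tags pass the format check.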
In addition, we included text generation datasets that demand reasoning and planning abilities, such as creative writing, social intelligence, and multilingual translation. To evaluate the effectiveness of the NOVER-incentivized model, we select datasets spanning four broad domains: (1) General Reasoning: This category includes challenging tasks that go beyond standard STEM benchmarks, requiring complex reasoning. It covers three datasets, Natural Reasoning (Yuan et al., 2025a), General Thought (General Reasoning, 2025), and WebInstruct (TIGER-Lab, 2025), which require models to produce factually grounded answers, often involving multi-step reasoning and justification. (2) Creative Writing: To assess long-form narrative generation and planning ability, we use SS-GEN (Feng et al., 2025c), a benchmark focused on coherent, socially themed storytelling. (3) Social Intelligence: We evaluate models on emotionally and socially grounded reasoning using EmoBench (Sabour et al., 2024) and ToMBench (Chen et al., 2024), which test emotion recognition, social action prediction, and theory-of-mind reasoning. (4) Multilingual Ability: We employ the OPUS book corpus (Tiedemann, 2012), which includes translation tasks across 16 languages and 64 source-target language pairs, to evaluate cross-lingual reasoning and generalization. We construct training and test sets for all datasets and subcategories via uniform sampling. All datasets were cleaned and filtered to ensure
high-quality data. Specifically:

• Natural Reasoning: We excluded samples lacking a reference answer. We also filtered out samples where the reference answer was merely a number, a single word, or a single sentence, as such cases are often trivial for rule-based verifiers and do not reflect the open-ended reasoning tasks we aim to study.

• General Thoughts and WebInstruct: These datasets underwent the same preprocessing as Natural Reasoning. In addition, both contain multiple-choice questions, which we converted into free-form QA format, ensuring that the reference answers included both the correct choice and its content. Because option formatting in the original data is inconsistent (such as 1., A), A, (a).), we designed seven regex-based patterns to clean and standardize the multiple-choice items.

• EmoBench: We selected two subsets, emotional understanding and emotional application. For emotional understanding, the prompt was adapted to specify a dual-option response format, reflecting the original structure in which each question is paired with two multiple-choice items (with four candidates each), targeting emotion recognition and causality, respectively.

• FANToM: We chose two subsets aligned with free-form reasoning: full_fact (asking for an entity) and full_belief_gen (asking for an opinion from the perspective of somebody).

• Other Datasets: No preprocessing was applied to the remaining datasets.

For datasets with existing train/test splits, we retained them. For others, we created splits by evenly sampling across subcategories (e.g., academic subjects) to ensure distributional consistency. For very large datasets, we sample 2,000 examples for training, 1,000 for validation, and 1,000 for testing.
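The option-marker cleanup described above can be sketched as follows. The paper's seven regex patterns are not published, so the patterns below are illustrative examples covering the marker formats it mentions (1., A), A, (a).):

```python
import re

# Illustrative option-marker patterns (NOT the paper's seven regexes,
# which are not published). Each captures the marker, then the text.
OPTION_PATTERNS = [
    re.compile(r"^\(([A-Da-d])\)[.)]?\s*"),  # (a), (A).
    re.compile(r"^([A-Da-d])[.)]\s*"),       # A), a.
    re.compile(r"^([1-9])[.)]\s*"),          # 1., 2)
]

def normalize_option(line):
    """Strip a leading option marker; return (marker, option text) or None."""
    s = line.strip()
    for pat in OPTION_PATTERNS:
        m = pat.match(s)
        if m:
            return m.group(1).upper(), s[m.end():]
    return None
```

A standardizer like this lets the converted free-form reference answer carry both the option letter and its content, as described above.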
Dataset  #Original   #Filtered   #Train  #Validation  #Test
NR       1,150,000   192,178     2,000   1,000        1,000
GT       431,000     78,483      2,000   1,000        1,000
WI       232,000     231,833     2,000   1,000        1,000
SGN      5,087       5,087       4,070   509          508
EB       400         400         272     –            128
TB       2,860       2,860       1,432   571          857
OPUS     1,250,632   1,250,632   2,000   1,000        1,000
FT       1,863       1,863       1,308   –            555

Table 1: Dataset statistics after filtering and splits into training, validation, and test sets.

Baselines  We evaluate 3B and 7B versions of Qwen 2.5 (Yang et al., 2024) against several baselines. Prior work suggests that instruction-following and basic CoT reasoning can emerge during multi-stage pretraining (Wei et al., 2022; Zeng et al., 2025). Thus, we include both vanilla and CoT responses to isolate the effect of NOVER incentive tuning from capabilities already acquired in pretraining. We also consider an SFT baseline, where the same pretrained checkpoints are fine-tuned on each dataset's training split (identical to that used by NOVER). This comparison highlights whether NOVER enhances generalization and abstraction in contrast to SFT (Chu et al., 2025). For reference, we report results from three strong post-trained models: Qwen2.5-3B/7B-Instruct, and R1-Distill-Qwen-7B, a 7B variant distilled from DeepSeek-R1 671B (Guo et al., 2025). These models, trained with massive instruction-following and reasoning data, serve as high-performance baselines.

Method               NR      GT      WI      SGN     EB      TB      OPUS    Avg.
Qwen2.5-3B
Base Model           21.80%  43.10%  18.40%  18.70%  32.03%  46.79%  16.70%  28.22%
+CoT                 24.40%  48.90%  24.20%  14.76%  28.12%  51.23%  1.40%   27.57%
+SFT                 27.00%  36.20%  27.30%  20.08%  36.72%  48.66%  17.30%  30.47%
+NOVER               28.60%  60.30%  28.10%  41.64%  38.28%  57.88%  20.70%  39.36%
Qwen2.5-7B
Base Model           31.80%  48.50%  20.70%  24.21%  28.91%  44.22%  19.30%  31.09%
+CoT                 31.20%  57.60%  29.20%  33.46%  38.28%  50.99%  1.60%   34.62%
+SFT                 27.50%  45.20%  33.50%  37.85%  47.66%  57.06%  23.30%  38.87%
+NOVER               38.20%  61.80%  36.60%  50.79%  49.22%  67.79%  26.80%  47.31%
Qwen2.5-3B-Instruct  27.10%  50.00%  31.50%  21.25%  40.62%  58.69%  19.90%  35.58%
Qwen2.5-7B-Instruct  29.90%  56.20%  35.60%  67.72%  46.88%  65.23%  23.50%  46.43%
R1-Distill-Qwen-7B   41.00%  60.20%  38.00%  40.16%  35.16%  54.61%  8.20%   39.62%

Table 2: Overall performance. Each cell is shaded based on its relative improvement. Values in bold indicate the best-performing variant, excluding other post-trained models. NR: Natural Reasoning (General Reasoning), GT: General Thoughts-430k (General Reasoning), WI: WebInstruct (General Reasoning), SGN: SS-GEN (Writing), EB: EmoBench (Social Intelligence), TB: TomBench (Social Intelligence), OPUS: OPUS-BOOK-TRANSLATION (Multilingual).

Notation   Definition                 Value
r          LoRA rank                  16
α_LoRA     LoRA scaling factor        32
p_dropout  LoRA dropout ratio         0.1
dtype      Training precision         bfloat16
G          Group size                 8
B          Batch size per GPU         8
β          KL coefficient             0.1
w_f        Format reward weight       1
w_r        Reasoning reward weight    1
w_e        Efficiency reward weight   1
τ          Rollout temperature        0.6
η_max      Maximum learning rate      1e-5
α          Synchronization coefficient 0.9
T_sync     Synchronization interval   100
L_gen      Max completion length      [512, 2048]
ε          Clip range                 0.1
ε_high     Upper clip range           0.2
T_max      Max training steps         5000

Table 3: Hyperparameters used for NOVER training.
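As an illustration only, the sketch below wires the Table 3 values into Hugging Face TRL and PEFT, which the training framework builds on. This is our reconstruction, not the authors' released code: parameter names follow recent trl/peft releases and may differ across versions, and the reward function and dataset here are placeholders.

```python
# Hypothetical wiring of the Table 3 hyperparameters with TRL + PEFT.
# Not the authors' code; names and values may need adjusting per release.
from datasets import Dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

def total_reward(completions, **kwargs):
    # Placeholder for the combined reward of Eq. 13.
    return [1.0 for _ in completions]

train_ds = Dataset.from_list([{"prompt": "..."}])  # placeholder data

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.1,
                         task_type="CAUSAL_LM")

args = GRPOConfig(
    output_dir="nover-run",
    num_generations=8,            # group size G
    per_device_train_batch_size=8,
    beta=0.1,                     # KL coefficient
    temperature=0.6,              # rollout temperature
    learning_rate=1e-5,
    max_completion_length=2048,   # 512 for shorter tasks
    max_steps=5000,
    bf16=True,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B",
    reward_funcs=total_reward,
    args=args,
    train_dataset=train_ds,
    peft_config=peft_config,
)
trainer.train()
```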
Training and Evaluation  Our training framework is built on Huggingface TRL (von Werra et al., 2020), using LoRA adapters (Hu et al., 2022) for efficient fine-tuning. All hyperparameters for training NOVER are listed in Table 3. For general reasoning tasks and the SS-GEN creative writing task, we set the maximum completion length to 2048, while for other tasks it is limited to 512. Training runs for up to 5,000 steps, although we typically apply early stopping when the validation reward, especially the tag format reward, no longer improves. 3B models generally require more training steps than 7B models. Training time is mainly determined by rollout efficiency, which depends on the maximum generation length and varies by task. For example, training Qwen2.5-7B for 1,000 steps with a max length of 1,024 tokens on two H100 GPUs takes about 2 hours. For evaluation, we report accuracy on all datasets. For choice questions, we extend the extractor from Team et al. (2025) to identify both the option letter and the corresponding text, then match against the ground truth. For open-ended QA, we use Gemini-2.0-flash (Anil et al., 2023; Gemini et al., 2024) in an LLM-as-a-Judge setup. All results are averaged over three runs.

6 Results and
Analysis

We present comprehensive and structured experiments organized into three main parts. (1) To examine when NOVER performs well, §6.1 analyzes its performance across different task types, and §6.2 investigates how data distribution influences reasoning ability. (2) To understand why NOVER is effective, §6.3 examines how NOVER addresses reward hacking for stable learning; §6.4 compares its verifier-free design with Model-as-Verifier approaches; and §6.5 studies how incentivized reasoning patterns evolve during training. (3) To demonstrate NOVER's flexibility, in §6.6 we explore inverse incentive training for creative writing.

Figure 4: Accuracy of NOVER on three general reasoning tasks by the distribution of different aspects (panels: question format, question type, question length, reference length, answer length, and reasoning length).
6.1 Overall Performance

Table 2 summarizes the performance of NOVER and all baselines. For both 3B and 7B model sizes, NOVER consistently outperforms all baselines across all task types. Remarkably, NOVER enables the 3B model to achieve nearly 60% accuracy on General Thoughts, approaching the performance of the much larger R1-Distill-Qwen-7B model distilled from a 671B teacher. In Appendix C, we give a detailed example of how reasoning perplexity discriminates between good and bad responses, thus helping the model reinforce learning. The improvement is particularly pronounced on datasets covering areas underrepresented in pretraining data, such as EmoBench (social intelligence) and OPUS (multilingual). In these areas, direct CoT prompting may reduce accuracy: our manual inspection of CoT outputs reveals that the base model is capable of generating well-structured and fluent CoT, but these CoT-generated rationales are easily hallucinated (Huang et al., 2025; Ye et al., 2024; Li et al., 2024). In contrast, NOVER effectively corrects such hallucinated reasoning processes, as illustrated by a detailed example in Appendix B. Notably, SFT sometimes even underperforms the base model, as it encourages the model to directly map questions to answers without an explicit reasoning process, whereas CoT, NOVER, and even the base model can generate intermediate reasoning tokens that aid in answering.

6.2 When and Where for Effective Reasoning

Question format  We analyze NOVER's performance distribution on the general reasoning area, as shown in Figure 4. For question format, accuracy on multiple-choice questions remains consistently higher than on open-ended QA. This is primarily because the presence of candidate options in the question effectively reduces the search space during RL optimization, thereby lowering the task difficulty. We further extract and
analyze the key action words from the questions. The model achieves higher accuracy on questions with clear solution directions, such as find, determine, and classify, since the corresponding reasoning chains are more likely to be sampled and reinforced during training. In contrast, for questions with more flexible requirements and less prior constraint on the reasoning process, such as analyze, the model's performance is relatively weaker.

Through the lens of length  We also analyze the effect of length across four components: question, reference, generated answer, and reasoning process. For the first three, we observe that shorter inputs or outputs generally lead to higher accuracy. Interestingly, for the length of the model-generated reasoning process, accuracy remains relatively stable within the range of 60 to 240 tokens. This suggests that NOVER effectively incentivizes the model to adaptively generate reasoning of appropriate length according to the difficulty of each question, a property closely related to the design of the efficiency reward.

False Premise Task  Recent work on RLVR suggests that RL may not exceed the capabilities of pretrained models (Yue et al., 2025) in domains with verifiable and structured answers. While NOVER focuses on free-form domains, we conduct a false premise task that similarly reveals the limits of RL. We adopt FANToM (Kim et al., 2023), a theory-of-mind dataset where models answer questions from a specific character's perspective in multi-party dialogues. Some questions rest on false premises (e.g., asking for character A's opinion on a movie when A was absent during the discussion of the movie), making them unanswerable.

Method   3B       7B
Base     12.43%   14.59%
+CoT     14.23%   19.28%
+SFT     26.49%   29.73%
+NOVER   18.74%   23.42%

Table 4: Experiments on FANToM (Kim et al., 2023), a theory-of-mind task with false premise problems.

Experiments show that SFT can memorize such refusal patterns from training data, whereas NOVER depends on the pretrained model to generate candidate responses and selectively reinforce the better ones. Lacking exposure to refusal behavior during pretraining, the model struggles to reject false-premise questions, resulting in weaker performance than SFT (see Table 4 and Appendix D). These findings suggest that future work should consider integrating multiple post-training strategies.

6.3 Curse of Proxy

Figure 5: The curse of proxy: inaccurate proxy reward may lead to reward hacking.

NOVER uses reasoning perplexity as a proxy for the ideal reward, a common practice in RL (Ng and Russell, 2000; Christiano et al., 2017). However, this introduces the curse of proxy: imprecise rewards can lead to reward hacking (Amodei et al., 2016), where models exploit flaws in the proxy rather than genuinely improving performance. To analyze this, we perform ablations on NOVER and record completion lengths to track the model's training status (Figure 5). (1) Training with only $R_f$ and $R_r$ reveals two failure modes: reasoning explosion (red), with overlong, garbled, and format-broken outputs, and reasoning collapse (green), where models stop thinking and generate minimum reasoning
tokens, such as a keyword pointing out the discipline of the problem. Both stem from proxy misalignment with the evolving policy, which leads to reward hacking. (2) Adding proxy-policy synchronization substantially mitigates this (blue), though eventual explosion still occasionally occurs. (3) Introducing the efficiency reward $R_e$ and the conditioned reward combination enables partial self-recovery (orange), as invalid completions receive zero reward, encouraging resampling; yet recovery recurs repeatedly, leading to inefficient training. (4) Full NOVER integration yields stable training (purple): the model stays "sober," optimizing reasoning length only when it improves outcomes. Synchronization further reduces proxy bias, supporting robust optimization. We have verified that NOVER can be trained stably for over 200,000 steps without encountering reasoning explosion or collapse, while consistently maintaining high group diversity.

6.4 Verifier, Free, or Not

Group              Method   3B       7B
Baselines          Base     18.40%   20.70%
                   +CoT     24.20%   29.20%
                   +SFT     27.30%   33.50%
Model as Verifier  +GV      18.30%   30.00%
                   +LJ      21.40%   3.80%
                   +LJ_S    –        21.60%
Verifier-Free      +NOVER   28.10%   36.60%

Table 5: Experiments on WebInstruct. We compare LLM-as-a-Judge (LJ) and the officially released verifier model for WebInstruct, the general verifier GV.

To evaluate NOVER on reasoning tasks with difficult verification, we compared it against alternative verifier designs: an LLM-as-a-judge (LJ) and a fine-tuned verifier model. Experiments were conducted on WebInstruct, which includes an official general verifier model (GV) (Ma et al., 2025). For LJ, we used Gemini-2.0-flash with two prompt variants: a lenient "judge" prompt and a stricter version (LJ_S) aligned with our evaluation setup. As shown in Table 5, model-based verifiers were highly unstable.
With LJ, lenient prompts encouraged reward hacking, where $\pi_\theta$ generated vague but superficially valid outputs to elicit positive rewards (e.g., giving rough ideas instead of precise answers). In contrast, strict prompts yielded sparse rewards and unstable training; as a result, the 3B model failed to train. The dedicated verifier also proved unreliable, often misled by the policy model. For example, the policy might only sketch initial steps and prompt the verifier to complete the calculation, causing the verifier to abandon judgment and instead solve the task, then assign an undeserved positive reward.

6.5 Incentivized Reasoning Patterns

Figure 6: Change of reasoning patterns over steps.

We further investigated how reasoning patterns evolved during the training of NOVER. We extracted the reasoning tokens generated by the model at different training steps on a fixed test set, including the outputs produced using CoT prompting before training (as step 0). These reasoning traces were then classified using Gemini-2.0-flash into one of several predefined reasoning patterns. Following the design in (Zeng et al., 2025), we categorize reasoning patterns in the NOVER-incentivized reasoning process into six main kinds, defined as follows:

1. Direct: The direct recall of factual information, definitions, or established concepts without further analysis or transformation. This pattern involves stating information from memory as-is.

2. Decomposition: The systematic organization of a problem into manageable components, establishing clear steps, intermediate goals, or methodical frameworks. This pattern involves creating a structured approach to solving complex problems.

3. Enumeration: The listing of multiple possibilities, options, alternatives, or cases without immediately selecting or committing to any specific one. This pattern involves comprehensively covering various aspects or potential scenarios.

4. Reflection: The process of revisiting, questioning, or reassessing previously stated ideas, assumptions, or conclusions. This pattern involves reflecting on one's own reasoning and making adjustments based on further consideration.

5. Assumption: The introduction of hypothetical conditions or premises that serve as a foundation for further reasoning. This pattern involves making conditional statements to explore potential scenarios or outcomes.

6. Causation: The establishment of cause-effect relationships between events, actions, or conditions. This pattern involves explaining how one factor leads to or influences another.

As shown in Figure 6, before training the model primarily used the task decomposition typical of CoT prompting. As training progressed under the influence of the efficiency reward, the model learned to skip redundant reasoning steps when it could directly provide intermediate conclusions, resulting in a notable increase in the proportion of the direct reasoning pattern. At the same time, other reasoning types began to appear and gradually stabilized, reflecting the development of reasoning that is both effective and efficient.

6.6 Inverse Incentive Training

We further explore NOVER's flexibility in an interesting setting using the SS-GEN creative writing dataset (Feng et al., 2025c), where both ground truth stories and structural rubrics are provided.
Unlike typical setups, this experiment adopts a new format, where models generate a story within <story> tags, followed by a self-assessment in <assessment> tags. Figure 7 illustrates how we construct the inverse incentive training pipeline. Crucially, the training data omits ground-truth stories and provides only rubric-based narrative criteria. We adapt NOVER by treating these rubrics as the guidelines for self-assessment, while the story becomes the intermediate process to incentivize; we denote this variant NOVER_RUBRIC. This approach inverts the standard paradigm, implementing a "process as outcome" strategy: what we want is the intermediate tokens (the story) rather than the final outputs (the assessment). NOVER's design thus effectively steers generation toward rubric satisfaction, without needing ideal story exemplars during training.

Figure 7: Comparison between standard NOVER training (top) and NOVER_RUBRIC training (bottom).

Empirical results show that Qwen2.5-7B's accuracy improves from 50.79% (standard NOVER) to 64.37% after rubric-based training. To further validate the gains, we sampled 100 examples and asked human annotators to express preferences for each story concerning each rubric.

Figure 8: Human judgement on different rubrics for SS-GEN story generation (NOVER wins / Draw / NOVER_RUBRIC wins): Clear Structure 25%/26%/49%; Describe more than Direct 15%/18%/67%; Appropriate Perspective 14%/41%/45%; Positive Tone 10%/51%/39%; Accurate Expression 9%/37%/54%; Appropriate Vocabulary 11%/43%/46%; Overall 14%/36%/50%.

The details of each rubric are provided to human annotators, including:

1. Clear Structure: A clear structure refers
to a straightforward title establishing the intervention goal, an introduction explaining the story theme and setting, a main body depicting the specific situation and appropriate responses, and a conclusion summarizing the core information presented.

2. Describe more than Direct: The number of descriptive sentences should exceed twice the number of coaching sentences in a Social Story. A descriptive sentence accurately describes the interaction, an event, or an explanation of the rationale underlying what and why people think, say, or do, including perspective and affirmative sentences. Coaching sentences gently and safely guide behavior.

3. Appropriate Perspective: A) The Social Story should never use the second-person perspective, to avoid being overly direct. B) When describing negative behaviors, the Social Story should never employ the first-person perspective, to safeguard the dignity and esteem of the audience.

4. Positive Tone: The Social Story should always describe situations or guide behaviors in a positive manner, especially when depicting typical or expected behaviors in specific situations.

5. Accurate Expression: The story should use language that is as clear and unambiguous as possible, because ASD children typically interpret things literally rather than inferring implicit meanings that require social insight.

6. Appropriate Vocabulary: The Social Story should choose the most comfortable and accurate vocabulary for the audience. First, use positive verbs while being mindful of the varying implications of verbs. Avoid terms likely to evoke strong emotional feelings, such as "shouldn't", "must", "supposed to", and so on.

We compare stories from the standard NOVER and the rubric-trained variant. Results consistently favor the latter across all criteria, highlighting NOVER's flexibility for creative applications beyond classical reasoning tasks.
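Among the rubrics above, "Describe more than Direct" is the only fully quantitative criterion. As a toy illustration, a check of its ratio rule might look like this; the sentence labels are assumed to come from some upstream classifier, which we do not specify:

```python
def meets_describe_ratio(sentence_labels):
    """Rubric 2 check: descriptive sentences must outnumber coaching
    sentences by more than a factor of two. `sentence_labels` is a list
    of "descriptive"/"coaching" tags from a hypothetical classifier."""
    descriptive = sentence_labels.count("descriptive")
    coaching = sentence_labels.count("coaching")
    return descriptive > 2 * coaching
```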
7 Conclusion

In this paper, we present NOVER, a verifier-free framework for incentive training that pushes the boundaries of incentive learning to encompass any text-to-text task. NOVER demonstrates strong performance, and we discuss both the advantages and limitations of this reinforcement learning paradigm, highlighting promising future directions for achieving robust and effective post-training. The flexibility of NOVER also enables novel learning paradigms such as inverse incentive training.

Limitations

Zeng et al. (2025) and Liu et al. (2025a) show that incentive training requires the base model to possess certain fine-tuned capabilities, such as partial CoT reasoning and instruction-following, to effectively incentivize reasoning. This observation is consistent with our findings on Qwen models. Moreover, general reasoning tasks in free-form format demand stronger base-model capabilities than structured-answer tasks like math or coding. Currently, Qwen is a suitable choice among open-source models that meet these requirements, so we conduct experiments on Qwen rather than on other open-source models. Major open-source model teams have also recognized this and strengthened multi-stage pretraining by incorporating large amounts of CoT and instruction data, which gives us confidence that NOVER will be able to incentivize more models in the future.

Due to computational constraints, our experiments were limited to 3B and 7B models, without scaling up to hundred-billion-parameter models. Nevertheless, we believe NOVER provides
a new direction for scaling in the post-training era. Instead of scaling the policy model or the reward model independently, NOVER suggests coordinated scaling, since the policy model itself serves as a proxy and effectively takes on the role of the reward model.

Acknowledgments

This work was supported in part by the UK Engineering and Physical Sciences Research Council through a Turing AI Fellowship (grant no. EP/V020579/1, EP/V020579/2) and the Prosperity Partnership scheme (grant no. UKRI566). Wei is supported by a PhD studentship provided by King's College London. The authors acknowledge the use of King's Computational Research, Engineering and Technology Environment (CREATE) at King's College London.

References

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. 2016. Concrete Problems in AI Safety. CoRR, abs/1606.06565.

Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy P. Lillicrap, and 33 others. 2023. Gemini: A Family of Highly Capable Multimodal Models. CoRR, abs/2312.11805.

Zhuang Chen, Jincenzi Wu, Jinfeng Zhou, Bosi Wen, Guanqun Bi, Gongyao Jiang, Yaru Cao, Mengting Hu, Yunghwei Lai, Zexuan Xiong, and Minlie Huang. 2024. ToMBench: Benchmarking Theory of Mind in Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 15959–15983. Association for Computational Linguistics.

Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences.
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, and Yi Ma. 2025. SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training. CoRR, abs/2501.17161.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training Verifiers to Solve Math Word Problems. CoRR, abs/2110.14168.

Jiazhan Feng, Shijue Huang, Xingwei Qu, Ge Zhang, Yujia Qin, Baoquan Zhong, Chengquan Jiang, Jinxin Chi, and Wanjun Zhong. 2025a. ReTool: Reinforcement Learning for Strategic Tool Use in LLMs. arXiv preprint arXiv:2504.11536.

Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang, Tianshuo Peng, Benyou Wang, and Xiangyu Yue. 2025b. Video-R1: Reinforcing Video Reasoning in MLLMs. CoRR, abs/2503.21776.

Yi Feng, Mingyang Song, Jiaqi Wang, Zhuang Chen, Guanqun Bi, Minlie Huang, Liping Jing, and Jian Yu. 2025c. SS-GEN: A Social Story Generation Framework with Large Language Models. In AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 1300–1308. AAAI Press.

Tom Foster, Eshaan Ahmed, James Cook, Shaked Lifshitz, Tim Rocktäschel, and Jakob N. Foerster. 2024.
On Reward Functions For Self-improving Chain-of-thought Reasoning Without Supervised Datasets (Abridged Version). In Language Gamification - NeurIPS 2024 Workshop.

Gemini, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, and 1 others. 2024. Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context. arXiv preprint arXiv:2403.05530.

General Reasoning. 2025. GeneralThought-430K: Open Reasoning Dataset. https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K.

Alexey Gorbatovski, Boris Shaposhnikov, Alexey Malakhov, Nikita Surnachev, Yaroslav Aksenov, Ian Maksimov, Nikita Balagansky, and Daniil Gavrilov. 2025. Learn Your Reference Model for Real Good Alignment. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, and 80 others. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. CoRR, abs/2501.12948.

Aashish Gurung and Mirella Lapata. 2025. Learning to Reason for Long-form Story Generation. arXiv preprint arXiv:2503.22828.

Minggui He, Yilun Liu, Shimin Tao, Yuanchang Luo, Hongyong Zeng, Chang Su, Li Zhang, Hongxia Ma, Daimeng Wei, Weibin Meng, Hao Yang, Boxing Chen, and Osamu Yoshie. 2025. R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning. CoRR, abs/2502.19735.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank Adaptation of Large Language Models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. 2025. Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model. CoRR, abs/2503.24290.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst., 43(2):42:1–42:55.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, and 80 others. 2024. OpenAI o1 System Card. CoRR, abs/2412.16720.

Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning. CoRR, abs/2503.09516.

Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, and Maarten Sap. 2023. FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 14397–14413. Association for Computational Linguistics.

Levente Kocsis and Csaba Szepesvári. 2006. Bandit Based Monte-carlo Planning. In Machine Learning: ECML 2006, 17th European Conference on Machine Learning,
Berlin, Germany, September 18-22, 2006, Proceedings, volume 4212 of Lecture Notes in Computer Science, pages 282–293. Springer.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-shot Reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Bangzheng Li, Ben Zhou, Fei Wang, Xingyu Fu, Dan Roth, and Muhao Chen. 2024. Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024, pages 7675–7688. Association for Computational Linguistics.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let's Verify Step by Step. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Wei Liu, Chenxi Wang, YiFei Wang, Zihao Xie, Rennai Qiu, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, and Chen Qian. 2024. Autonomous agents for collaborative task under information asymmetry. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. 2025a. Understanding R1-Zero-like Training: A Critical Perspective.
CoRR, abs/2503.20783.

Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, and Yu Wu. 2025b. Inference-time scaling for generalist reward modeling. arXiv preprint arXiv:2504.02495.

Yilun Lu, Cheng Zhang, Jiaming Song, Linxi Fan, and William Wang. 2025. ToM-RL: Reinforcement Learning Unlocks Theory of Mind in Small LLMs. arXiv preprint arXiv:2504.01698.

Xueguang Ma, Qian Liu, Dongfu Jiang, Ge Zhang, Zejun Ma, and Wenhu Chen. 2025. General-Reasoner: Advancing LLM Reasoning Across All Domains. https://github.com/TIGER-AI-Lab/General-Reasoner/blob/main/General_Reasoner.pdf.

Andrew Y. Ng and Stuart J. Russell. 2000. Algorithms for Inverse Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 663–670.

Liangming Pan, Alon Albalak, Xinyi Wang, and William Yang Wang. 2023. Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3806–3824. Association for Computational Linguistics.

Siya Qi, Yulan He, and Zheng Yuan. 2024. Can we catch the elephant? the evolvement of hallucination evaluation on natural language generation: A survey. arXiv e-prints, pages arXiv–2404.

Sahand Sabour, Siyang Liu, Zheyuan Zhang, June Liu, Jinfeng Zhou, Alvionna Sunaryo, Tatia Lee, Rada Mihalcea, and Minlie Huang. 2024. EmoBench: Evaluating the Emotional Intelligence of
Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5986–6004, Bangkok, Thailand. Association for Computational Linguistics.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347.

ByteDance Seed, Yufeng Yuan, Yu Yue, Mingxuan Wang, Xiaochen Zuo, Jiaze Chen, Lin Yan, Wenyuan Xu, Chi Zhang, Xin Liu, and 1 others. 2025. Seed-thinking-v1.5: Advancing superb reasoning models with reinforcement learning. arXiv preprint arXiv:2504.13914.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. CoRR, abs/2402.03300.

Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, and 1 others. 2025. Vlm-r1: A stable and generalizable r1-style large vision-language model. arXiv preprint arXiv:2504.07615.

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. 2025. R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning. CoRR, abs/2503.05592.

Yi Su, Dian Yu, Linfeng Song, Juntao Li, Haitao Mi, Zhaopeng Tu, Min Zhang, and Dong Yu. 2025. Crossing the Reward Bridge: Expanding RL with Verifiable Rewards Across Diverse Domains. CoRR, abs/2503.23829.

Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Ben Hu. 2025. Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models. CoRR, abs/2503.16419.

M-A-P
Team, Xinrun Du, Yifan Yao, Kaijing Ma, Bingli Wang, Tianyu Zheng, Kang Zhu, Minghao Liu, Yiming Liang, Xiaolong Jin, Zhenlin Wei, Chujie Zheng, Kaixin Deng, Shian Jia, Sichao Jiang, Yiyan Liao, Rui Li, Qinrui Li, Sirun Li, and 77 others. 2025. SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines. CoRR, abs/2502.14739.

Jörg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012, pages 2214–2218. European Language Resources Association (ELRA).

TIGER-Lab. 2025. WebInstruct-verified Dataset. https://huggingface.co/datasets/TIGER-Lab/WebInstruct-verified.

Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. 2020. TRL: Transformer Reinforcement Learning. https://github.com/huggingface/trl.

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023a. Plan-and-solve Prompting: Improving Zero-shot Chain-of-thought Reasoning by Large Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 2609–2634. Association for Computational Linguistics.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Zihan Wang, Kangrui Wang,
Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, and 1 others. 2025. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Lilian Weng. 2025. Why we think. lilianweng.github.io.

Xiaobo Xia and Run Luo. 2025. Gui-r1: A generalist r1-style vision-language action model for gui agents. arXiv preprint arXiv:2504.10458.

Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. 2025. Logic-RL: Unleashing LLM Reasoning with Rule-based Reinforcement Learning. CoRR, abs/2502.14768.

Hanqi Yan, Qinglin Zhu, Xinyu Wang, Lin Gui, and Yulan He. 2024. Mirror: Multiple-perspective Self-reflection Method for Knowledge-rich Reasoning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 7086–7103. Association for Computational Linguistics.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, and 22 others. 2024. Qwen2.5 Technical Report. CoRR, abs/2412.15115.

Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, Bo Zhang, and Wei Chen. 2025. R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-modal Formalization.
CoRR, abs/2503.10615.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.

Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, and Weiqiang Jia. 2024. Cognitive Mirage: A Review of Hallucinations in Large Language Models. In Proceedings of the First International OpenKG Workshop: Large Knowledge-Enhanced Models co-located with The International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju Island, South Korea, August 3, 2024, volume 3818 of CEUR Workshop Proceedings, pages 14–36. CEUR-WS.org.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, and 16 others. 2025. DAPO: An Open-source LLM Reinforcement Learning System at Scale. CoRR, abs/2503.14476.

Weizhe Yuan, Jane Yu, Song Jiang, Karthik Padthe, Yang Li, Dong Wang, Ilia Kulikov, Kyunghyun Cho, Yuandong Tian, Jason E. Weston, and 1 others. 2025a. Naturalreasoning: Reasoning in the wild with 2.8M challenging questions. arXiv preprint
arXiv:2502.13124.

Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, Xiangpeng Wei, and 1 others. 2025b. VAPO: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. 2025. Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? arXiv preprint arXiv:2504.13837.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. 2025. SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild. CoRR, abs/2503.18892.

Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. 2025. R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization. CoRR, abs/2503.12937.

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. 2023. Least-to-most Prompting Enables Complex Reasoning in Large Language Models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

A Profiling

Figure 9: Profiling of the training time in NOVER. [Per-stage breakdown recovered from the figure: Rollout 78.8% (12.43s); Tag Format Reward 1.1% (0.17s); Reasoning Reward 5.1% (0.80s); Efficiency Reward 5.4% (0.85s); Loss Calculation 3.9% (0.61s); vLLM Parameter Sync 3.2% (0.50s); Logits Calculation 2.6% (0.41s).]

Figure 9 shows the average training time per sample using two H100 GPUs with a completion length of 2,048 tokens. Most of the time is spent on the rollout stage, where vLLM (Kwon et al., 2023) is used to sample multiple completions from the policy model efficiently. In contrast, computing the reasoning and efficiency rewards accounts for only about 5% of the total training time.
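The per-stage shares in Figure 9 follow directly from the per-sample wall-clock timings. The sketch below is illustrative (not the paper's profiler): it recomputes each stage's share of total training time from the timings quoted in the figure.

```python
# Illustrative sketch: derive the percentage breakdown shown in Figure 9
# from the per-sample, per-stage wall-clock timings (seconds).
stage_seconds = {
    "Rollout": 12.43,
    "Tag Format Reward": 0.17,
    "Reasoning Reward": 0.80,
    "Efficiency Reward": 0.85,
    "Loss Calculation": 0.61,
    "vLLM Parameter Sync": 0.50,
    "Logits Calculation": 0.41,
}

total = sum(stage_seconds.values())  # total time per training sample
shares = {name: 100.0 * sec / total for name, sec in stage_seconds.items()}

# Print stages from most to least expensive, matching the figure's labels.
for name, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.1f}% ({stage_seconds[name]:.2f}s)")
```

Recomputing the shares this way reproduces the figure's percentages (e.g. Rollout at 78.8%), confirming the breakdown is internally consistent.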
B Reasoning Hallucination Case

Figure 10 presents an example from the OPUS translation dataset where both the base model and NOVER perform reasoning to generate a translation. As shown, the base model is able to produce seemingly correct reasoning steps, sometimes nearly identical to those of NOVER. However, despite generating the right steps, the base model fails to reach the correct conclusion. For instance, its reasoning chain ends with a self-check that asserts the translation is perfect and faithfully conveys the meaning of the original sentence, including the description of the "woman's appearance." Yet, in the earlier steps, the model translates only the subject, verb, and object, omitting any reference to the woman's appearance. This kind of confident but incorrect self-verification, known as one kind of reasoning hallucination (Huang et al., 2025; Qi et al., 2024), can cause the model to overlook essential content, resulting in worse performance than directly producing the translation. In contrast, NOVER follows a similar reasoning path but incorporates critical refinements. For example, it explicitly notes the need to translate not only the core grammatical elements but also any additional descriptive components. As a result, in the initial step of identifying main elements, NOVER successfully detects phrases related to the woman's appearance, such
as "perfect figure" and "elegant dress," and accurately translates and integrates them in later steps. Moreover, NOVER provides more rigorous rubrics, such as emphasizing in the rearrangement step that Spanish word order "may differ from English."

C Reasoning Perplexity Example

Figure 11 shows two completions generated by NOVER in response to a medical question from the natural reasoning dataset, each associated with a different level of reasoning perplexity. As illustrated, Completion 1, which has lower reasoning perplexity, is clearly superior to Completion 2 in terms of physiological mechanisms, terminological accuracy, the discussion of "the role of dietary factors", and overall scientific reasoning. In contrast, Completion 2 is relatively coarse and contains misleading or missing descriptions of relevant terms and mechanisms. Although reasoning perplexity does not directly compare the answer with a reference, it estimates the likelihood of the reference based on the inferred reasoning path, thereby providing a concise and effective measure of answer quality, as evidenced by this example.

D False Premise Example

Figure 12 illustrates an example from the FANToM dataset, shedding light on why NOVER may underperform compared to SFT in the false premise task setting. As shown, the example involves a multi-party dialogue and poses a question requiring higher-order theory of mind reasoning, for instance, inferring how person B would think about a certain topic from person A's perspective. The trap, however, is that B may not have participated in the conversation and thus is unaware of the topic, making the question unanswerable. The corresponding reference typically reflects this, such as "B is unaware of...". SFT can quickly memorize and reproduce such response patterns (e.g., "somebody is unaware of..."), whereas NOVER relies on the base model to first generate a correct answer, which is then reinforced.
However, the base model may struggle to recognize the false premise and might not even consider the question potentially unanswerable. Consequently, as shown in the figure, NOVER attempts to answer the question but remains constrained by the flawed assumption that B

Case Study in OPUS Translation Task

Question: Translate the following text from English to Spanish: With her perfect figure and elegant dress she was, indeed, a strange apparition upon a lonely moorland path.

Reference: Dada la perfección y elegancia de su vestido, resultaba, desde luego, una extraña aparición en la solitaria senda del páramo.

Base Model CoT: To translate the given text from English to Spanish, I will follow these steps: 1. Identify the main components of the sentence: subject, verb, and object. 2. Translate each component individually. 3. Rearrange the components to form a grammatically correct sentence in Spanish. 4. Ensure that the translated sentence maintains the original meaning and tone. Step 1: Identify the main components of the sentence. Subject: "She" ...... Step 2: Translate each component individually. Subject: "Ella" (She) ...... Step 3: Rearrange the components to form a grammatically correct sentence in Spanish. "Ella era, en efecto, una extraña aparición en un sendero solitario de tierra
mojada." Step 4: Ensure that the translated sentence maintains the original meaning and tone. The translated sentence maintains the original meaning and tone, as it describes the woman's appearance and the setting in which she appears.

Base Model Translation: Ella era, en efecto, una extraña aparición en un sendero solitario de tierra mojada.

NOVER thinking process: <think> To translate the given text from English to Spanish, I will follow these steps: 1. Identify the main components of the sentence: subject, verb, object, and any additional descriptive elements. 2. Translate each component into Spanish while maintaining the original meaning and context. 3. Rearrange the translated components to fit the Spanish sentence structure, which may differ from English. 4. Review the final translation for accuracy and fluency. Here's the step-by-step translation process: 1. Main components: - Subject: "she" ...... - Additional descriptive elements: "perfect figure", "elegant dress", "indeed", "upon a lonely moorland path" 2. Translations: - "she" -> "ella" ...... - "perfect figure" -> "figura perfecta" - "elegant dress" -> "traje elegante" - "indeed" -> "realmente" - "upon a lonely moorland path" -> "en una solitaria senda de moorland" 3. Rearranged Spanish sentence structure: - "Ella era, realmente, una extraña aparición en una solitaria senda de moorland con su figura perfecta y traje elegante." 4. Reviewing the final translation: - The translation maintains the original meaning and context, and the sentence structure is appropriate for Spanish. I have translated the given text from English to Spanish by identifying the main components, translating each component, rearranging them to fit the Spanish sentence structure, and reviewing the final translation for accuracy and fluency. </think>

NOVER translation: <answer> Ella era, realmente, una extraña aparición en una solitaria senda de moorland con su figura perfecta y traje elegante.
</answer>

Figure 10: A case in the OPUS translation dataset, where the base model generates hallucinated CoT that leads to wrong translation, while NOVER follows similar steps but corrects the wrong parts in the thinking process.

Two Completions of NOVER on Natural Reasoning Dataset

Question: What is the primary mechanism by which the body eliminates excess cholesterol, and how do dietary factors such as fiber intake and fatty food consumption affect this process? Provide a detailed explanation of the gastrohepatic cycle and the role of bile secretion in cholesterol elimination.

Reference: The primary mechanism by which the body eliminates excess cholesterol is through the gastrohepatic cycle, where excess cholesterol is secreted by the bile ducts back into the gastrointestinal tract. Dietary factors such as fiber intake can help prevent the digestion of cholesterol, while fatty food consumption can increase cholesterol production.

NOVER completion 1, with reasoning perplexity = 1.406 (the good one) <think> The primary mechanism by which the body eliminates excess cholesterol involves a process called the gastrohepatic cycle. This cycle consists of several steps: 1. Cholesterol in the blood is taken up by the small intestine and converted into bile acids, which are then secreted into the intestine. 2. Bile acids are reabsorbed by the small intestine and returned to the liver via the portal vein. 3. In
the liver, the bile acids are converted back into cholesterol and stored or used for other purposes. 4. When the body has an excess of cholesterol, the liver produces more bile acids, which are excreted in the feces. ...... In summary, the gastrohepatic cycle is the primary mechanism by which the body eliminates excess cholesterol, and dietary factors such as fiber intake and fatty food consumption can affect this process in various ways. </think> <answer> The primary mechanism by which the body eliminates excess cholesterol involves a process called the gastrohepatic cycle. This cycle consists of several steps: Cholesterol in the blood is taken up by the small intestine and converted into bile acids, which are then secreted into the intestine. Bile acids are reabsorbed by the small intestine and returned to the liver via the portal vein. In the liver, the bile acids are converted back into cholesterol and stored or used for other purposes. When the body has an excess of cholesterol, the liver produces more bile acids, which are excreted in the feces. Dietary factors such as fiber intake and fatty food consumption can affect this process in various ways. Dietary fiber, particularly soluble fiber, can bind to bile acids in the intestine and prevent their reabsorption. This leads to an increased excretion of bile acids in the feces, which in turn increases the amount of cholesterol that is eliminated from the body. Fatty foods, particularly those high in saturated and trans fats, can increase the amount of cholesterol produced by the liver. This can lead to an increase in the amount of cholesterol that needs to be eliminated from the body. Certain fatty acids, such as omega-3 fatty acids, can also have a beneficial effect on cholesterol metabolism. Omega-3 fatty acids can reduce the amount of cholesterol produced by the liver and increase the amount of cholesterol that is eliminated from the body. 
</answer> NOVER completion 2, with reasoning perplexity = 1.516 (the bad one) <think> The primary mechanism by which the body eliminates excess cholesterol is through the excretion of cholesterol-rich bile acids in the feces. This process involves the breakdown and conversion of cholesterol into bile acids in the liver, followed by the release of these bile acids into the small intestine during digestion. Bile acids are then excreted from the body in the feces. Dietary factors such as fiber intake and fatty food consumption can affect this process in several ways. Fiber-rich foods can increase the viscosity of the stool, which slows down the passage of bile acids through the digestive tract. This allows more time for the bile acids to be absorbed back into the bloodstream and reused, reducing the amount of cholesterol that is excreted in the feces. ...... Bile secretion plays a critical role in cholesterol elimination by providing the necessary environment for the breakdown and conversion of cholesterol into bile acids in the liver. Bile acids are then released into the small intestine during digestion, where they interact with fatty acids and triglycerides to form micelles. These micelles help to emulsify | https://arxiv.org/abs/2505.16022v1 |
the fatty acids and make them more available for absorption by the small intestine. </think> <answer> The primary mechanism by which the body eliminates excess cholesterol is through the excretion of cholesterol-rich bile acids in the feces. Dietary factors such as fiber intake and fatty food consumption can affect this process by increasing or decreasing the amount of cholesterol that is excreted in the feces. The gastrohepatic cycle is the process by which bile is produced in the liver, stored in the gallbladder, and released into the small intestine during digestion. Bile secretion plays a critical role in cholesterol elimination by providing the necessary environment for the breakdown and conversion of cholesterol into bile acids in the liver. </answer>

Figure 11: We show two completions from NOVER on the Natural Reasoning dataset, showing how reasoning perplexity discriminates good responses from bad ones.

is aware of the topic, and reasons accordingly. This highlights why NOVER may perform worse than SFT on counterintuitive tasks. Addressing such cases may require a hybrid approach: first using SFT to expose the model to appropriate response patterns and challenge its assumptions, followed by reinforcement learning to strengthen accurate reasoning.

E Proof of Equivalence of Relative Perplexity Gain and Absolute Reasoning Perplexity after Group Normalization

Let us denote the reasoning perplexity of the i-th completion in a group C of size G as P^{abs}_i, where this group of completions shares the same ground truth g and the same prompt p, with different reasoning tokens t_i:

P^{\mathrm{abs}}_i = \Pr(p, t_i, g), \qquad (14)

and define the relative perplexity gain (Gurung and Lapata, 2025; Foster et al., 2024) as P^{rel}_i:

P^{\mathrm{rel}}_i = \frac{\Pr(p, t_i, g) - P_{\mathrm{base}}(p, g)}{P_{\mathrm{base}}(p, g)}, \qquad (15)

where P_base is the base perplexity of the proxy model's prediction on the ground-truth tokens g without reasoning tokens t:

P_{\mathrm{base}}(p, g) = \exp\!\left( -\frac{\sum_{i=1}^{|g|} \log \pi_p(g_i \mid p, g_{<i})}{|g|} \right). \qquad (16)

Since within one group all completions share the same p and g, all completions share the same P_{\mathrm{base}}(p, g). It can be regarded as a positive scalar constant shared by all completions in the group, and we can rewrite P^{rel}_i as an affine transformation of P^{abs}_i:

P^{\mathrm{rel}}_i = \alpha P^{\mathrm{abs}}_i + \beta, \qquad (17)

where

\alpha = \frac{1}{P_{\mathrm{base}}(p, g)} > 0, \qquad \beta = -1. \qquad (18)

The group-normalized advantage for either form of perplexity is computed by standard normalization within one group:

A^{(\cdot)}_i = \frac{P^{(\cdot)}_i - \mu^{(\cdot)}_C}{\sigma^{(\cdot)}_C}, \qquad (19)

where the group mean and standard deviation are

\mu^{(\cdot)}_C = \frac{1}{G} \sum_{j=1}^{G} P^{(\cdot)}_j, \qquad \sigma^{(\cdot)}_C = \sqrt{\frac{1}{G} \sum_{j=1}^{G} \left( P^{(\cdot)}_j - \mu^{(\cdot)}_C \right)^2}. \qquad (20)

Substituting the affine relation into the mean, we have

\mu^{\mathrm{rel}}_C = \frac{1}{G} \sum_{j=1}^{G} P^{\mathrm{rel}}_j = \frac{1}{G} \sum_{j=1}^{G} \left( \alpha P^{\mathrm{abs}}_j + \beta \right) = \alpha \cdot \frac{1}{G} \sum_{j=1}^{G} P^{\mathrm{abs}}_j + \beta = \alpha \mu^{\mathrm{abs}}_C + \beta. \qquad (21)

Similarly, the standard deviation transforms as

\sigma^{\mathrm{rel}}_C = \sqrt{\frac{1}{G} \sum_{j=1}^{G} \left( \alpha P^{\mathrm{abs}}_j + \beta - \mu^{\mathrm{rel}}_C \right)^2} = |\alpha| \, \sigma^{\mathrm{abs}}_C = \alpha \, \sigma^{\mathrm{abs}}_C \quad (\text{since } \alpha > 0). \qquad (22)

Therefore, the relative advantage A^{rel}_i can be expressed as

A^{\mathrm{rel}}_i = \frac{P^{\mathrm{rel}}_i - \mu^{\mathrm{rel}}_C}{\sigma^{\mathrm{rel}}_C} = \frac{\alpha P^{\mathrm{abs}}_i + \beta - \left( \alpha \mu^{\mathrm{abs}}_C + \beta \right)}{\alpha \sigma^{\mathrm{abs}}_C} = \frac{\alpha \left( P^{\mathrm{abs}}_i - \mu^{\mathrm{abs}}_C \right)}{\alpha \sigma^{\mathrm{abs}}_C} = \frac{P^{\mathrm{abs}}_i - \mu^{\mathrm{abs}}_C}{\sigma^{\mathrm{abs}}_C} = A^{\mathrm{abs}}_i. \qquad (23)

This demonstrates that the group-normalized advantage values computed from absolute reasoning perplexity P_r and from relative perplexity gain normalized by P_base are exactly identical. Consequently, both formulations yield the same advantage
signals for policy optimization under the GRPO training.

A False Premise Example in FANToM Dataset

Question: Here is a dialogue:

Eliana: Hello, guys. I find it fascinating how various cultures influence food and cuisine. For instance, in my culture, Brazilian cuisine is a culmination of Europe, Africa, and Native American influences.

Kade: How so, Eliana? Could you give an example?

Eliana: Absolutely, Kade! Take feijoada for example, it's a Brazilian dish that's similar to a bean stew, filled with beef and pork. It's traditionally served with rice, collard greens, and farofa, a cassava flour mixture. Feijoada was often made by slaves, who would use whatever meats were available, hence the mixture.

Addison: That's very interesting, Eliana. Food does tell a powerful story about the people, doesn't it? I hail from Scotland, and over there, we have a dish called haggis. It's a combination of sheep's heart, liver, and lungs minced with onion, oatmeal, suet, and spices. I know it sounds a bit... exotic, but it's actually quite delicious.

Kade: Wow, Addison! I've never heard of anything like that before. I must add that to my food bucket list. Although, as a Canadian, I guess the most iconic food must be poutine: French fries and cheese curds topped with brown gravy. It's a simple dish yet so full of flavor.

......

Alberto: I'd be happy to share, Addison! Well, for starters, we have Dia de los Muertos, or Day of the Dead...

Alberto: Dia de los Muertos is a very important festival in Mexico. It's a time when we honor and celebrate our deceased loved ones, remembering them with music, food, and colorful altars.

Kade: Sounds like a profound tradition, Alberto. In Canada, we have the National Indigenous Peoples Day.
It’s a day to celebrate and appreciate the cultural diversity of the First Nations, Inuit, and Métis peoples. Addison: I love the sound of both those celebrations. In Scotland, we have numerous festivals, but one of the biggest is Hogmanay, our New Year’s Eve celebration. It involves lots of food, drink, and merrymaking, as well as the singing of the traditional folk song Auld Lang Syne. Alberto: That sounds like so much fun, Addison! I think it’s so powerful how we keep our traditions alive and pass them on to future generations. Kade: Absolutely, Alberto. It’s these traditions, these festivals, the food and the language that makes our cultures unique and gives us our identities. Addison: Well said, Kade. I’ve learned so much about Mexican and Canadian culture today. It’s great when we can all sit and learn from each other. Alberto: I agree, Addison. And this exchange isn’t just limited between us, but it should be encouraged everywhere. It fosters understanding and respect for diversity. Kade: Definitely, Alberto. So, I guess we meet at the next food tasting session, where we have a taste of Brazil, Scotland, Canada, and | https://arxiv.org/abs/2505.16022v1 |
Mexico!

What dishes from Scotland and Canada does Eliana think that Alberto believes were discussed by Addison and Kade respectively?

Reference: Alberto is unaware about which dishes from Scotland and Canada were discussed by Addison and Kade respectively, as he was not involved in the conversation when this was discussed. Thus, Eliana does not have any basis to form an opinion on what Alberto believes about the mentioned dishes.

NOVER Response: <think> Addison and Kade were the ones who discussed the dishes from Scotland and Canada. From the dialogue, it is evident that Addison mentioned the dish "haggis", which is a combination of sheep's heart, liver, and lungs minced with onion, oatmeal, suet, and spices. The other dish mentioned is the "poutine" which is a dish consisting of French fries and cheese curds topped with brown gravy. Alberto, on the other hand, mentioned the dish "mole sauce" which is a mixture of spices, chocolate, chilies, and other ingredients that was born out of a blend of indigenous and Spanish influences. Kade, during the conversation, mentioned the "powwow", which is a time when people gather to celebrate their deceased loved ones with music, food, and colorful altars, and is a traditional Mexican festival. So, Alberto believed that Addison and Kade discussed the dishes from Scotland and Canada, which are haggis and poutine respectively. </think> <answer> Haggis is a dish from Scotland. Kade mentioned the dish "powwow" during the conversation, which is a traditional Mexican festival. So, Alberto believed that Addison and Kade discussed the dishes from Scotland and Canada, which are haggis and poutine respectively. </answer>

Figure 12: A false premise example in FANToM (Kim et al., 2023) dataset, where the question cannot be answered.
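The Appendix E result, that group normalization of perplexities is invariant under a positive affine map, can be checked numerically. The sketch below uses hypothetical token log-probabilities and perplexity values (not taken from the paper) and verifies that the group-normalized advantages computed from absolute reasoning perplexity and from relative perplexity gain coincide.

```python
import math
import statistics

# P_base(p, g) per Eq. (16): exponential of the mean negative log-prob of
# the ground-truth tokens. The log-probs below are hypothetical.
logprobs = [-0.21, -0.35, -0.18]  # hypothetical log pi_p(g_i | p, g_<i)
p_base = math.exp(-sum(logprobs) / len(logprobs))

# Hypothetical absolute reasoning perplexities P^abs_i for one group of G=4.
p_abs = [1.406, 1.516, 1.45, 1.62]

# Relative perplexity gain, Eq. (15): an affine map of P^abs with
# alpha = 1 / p_base > 0 and beta = -1.
p_rel = [(p - p_base) / p_base for p in p_abs]

def group_normalize(xs):
    """Standard normalization within one group, Eqs. (19)-(20)."""
    mu = statistics.mean(xs)
    sigma = statistics.pstdev(xs)  # population std: 1/G, as in Eq. (20)
    return [(x - mu) / sigma for x in xs]

a_abs = group_normalize(p_abs)
a_rel = group_normalize(p_rel)

# Eq. (23): the two advantage vectors are identical up to float rounding.
assert all(math.isclose(x, y, abs_tol=1e-9) for x, y in zip(a_abs, a_rel))
```

Because z-scores are invariant under any positive affine transformation, the check succeeds for any choice of p_base > 0, which is exactly what the derivation in Eqs. (21)-(23) establishes.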
arXiv:2505.16023v2 [cs.CL] 23 May 2025

Prototypical Human-AI Collaboration Behaviors from LLM-Assisted Writing in the Wild

Sheshera Mysore1△ Debarati Das2† Hancheng Cao1 Bahar Sarrafzadeh1△
1Microsoft, WA, USA 2University of Minnesota, MN, USA
△Corresponding authors: smysore@iesl.cs.umass.edu, bahar.sarrafzadeh@microsoft.com

Abstract

As large language models (LLMs) are used in complex writing workflows, users engage in multi-turn interactions to steer generations to better fit their needs. Rather than passively accepting output, users actively refine, explore, and co-construct text. We conduct a large-scale analysis of this collaborative behavior for users engaged in writing tasks in the wild with two popular AI assistants, Bing Copilot and WildChat. Our analysis goes beyond simple task classification or satisfaction estimation common in prior work and instead characterizes how users interact with LLMs through the course of a session. We identify prototypical behaviors in how users interact with LLMs in prompts following their original request. We refer to these as Prototypical Human-AI Collaboration Behaviors (PATHs) and find that a small group of PATHs explains a majority of the variation seen in user-LLM interaction. These PATHs span users revising intents, exploring texts, posing questions, adjusting style, or injecting new content. Next, we find statistically significant correlations between specific writing intents and PATHs, revealing how users' intents shape their collaboration behaviors. We conclude by discussing the implications of our findings on LLM alignment.

1 Introduction

LLMs' generalization ability and natural language interfaces have made powerful AI models accessible to a range of users engaged in diverse tasks (Ouyang et al., 2022). The logged natural language interactions from LLM-powered AI assistants have emerged as a rich data source for understanding user-AI interaction (Zhu et al., 2025).
Leveraging this data, recent studies have explored the high-level tasks users engage in (e.g., search, coding, writing, etc.) (Tamkin et al., 2024), their variation across occupations (Handa et al., 2025), and measured user satisfaction based on interaction patterns (Lin et al., 2024). However, little prior work has examined how users collaborate with LLMs in real-world LLM deployments.

†Work done during an internship at Microsoft.

Figure 1: Users follow up their original requests to collaborate with LLMs in writing sessions. We identify prototypical human-AI collaboration behaviors (PATHs), and find statistically significant correlations between users' writing intents and PATHs.

A notable characteristic of current AI assistants is their conversational nature, which enables users to engage in follow-up interactions after stating their original requests. These follow-ups allow users to articulate their needs better and obtain more helpful responses from the LLM (Figure 1). Analyzing these follow-up interactions promises to provide a rich characterization of human-AI collaboration and guide research on LLM alignment based
on realistic in-the-wild interactions. We contribute such an analysis by focusing on LLM-assisted writing, an important and increasingly prevalent use case for AI assistants (Tamkin et al., 2024; Suri et al., 2024). Recent studies have found that LLM-assisted writing is now common in impactful domains such as press releases, job postings, and peer reviews, among others (Liang et al., 2025, 2024a). Despite this, no prior research has systematically analyzed how users collaborate with AI assistants for writing tasks in the wild. We address this gap by formulating two key research questions: (RQ1) What high-level collaboration behaviors emerge from user interactions in AI-assisted writing? And (RQ2) How do these collaboration behaviors differ across writing intents?

To address these questions, we conduct a large-scale analysis of writing sessions from two AI assistants: Bing Copilot (Mehdi, 2023) and WildChat (Zhao et al., 2024a). Our datasets span 20.5M and 800k English user-LLM conversation sessions over seven and thirteen months of global Bing Copilot and WildChat usage, respectively. The two datasets enable us to identify shared collaboration behaviors across distinct AI assistants, and the public WildChat logs support reproducibility and future research. To answer RQ1, we use GPT-4o to classify users' follow-up utterances into high-level types and cluster them using Principal Component Analysis (PCA) (Bengio et al., 2013). Each cluster represents sessions with consistent collaboration behavior, which we term Prototypical Human-AI Collaboration Behaviors (PATHs). To address RQ2, we use GPT-4o to identify writing intents from the users' original requests and conduct regression analysis to correlate them with PATHs. This lets us detect statistically significant relationships between writing intents and collaboration behaviors.
Takeaways: We identify seven PATHs that capture 80-85% of variance across datasets, with shared behaviors like revising intents, exploring texts, asking questions, or modifying generations despite differences in deployments. Correlating writing intents and PATHs enables us to uncover intent-specific alignment needs. For instance, users sought to explore diverse generations in follow-ups when brainstorming eye-catching texts, indicating users' need for LLMs aligned for brainstorming applications. Users generated long texts by staging generation and interactively providing feedback at different levels of specificity, indicating the need for session-level alignment from under-specified feedback. And in generating professional or technical texts, users followed up with questions aimed at learning about a domain's norms or at seeking feedback, indicating the need to align LLMs for promoting learning in users. By analyzing collaborative writing behaviors in the wild, we offer insights to guide future research on LLM alignment.

2 Related Work

The rise of interactive LLM systems has seen the emergence of user-LLM interaction log datasets (Kirk et al., 2024; Zhao et al., 2024a) and analysis, building on rich traditions of log analysis in HCI and Information Retrieval (Jansen and Spink, 2006; Dumais et al., 2014). This work has developed an understanding of how users interact with intelligent systems in the wild, aiming to inform future system and model development. Such work has analyzed user-LLM interactions in the context of information seeking (Trippas et al., 2024), theorem
proving (Collins et al., 2024a), image generation (Palmini et al., 2024; Vodrahalli and Zou, 2024), and writing assistance (Lee et al., 2022). Most relevant is the prior work on log analysis for UI-based writing assistance (Lee et al., 2022; Sarrafzadeh et al., 2021). Respectively, they analyze model outputs, collaboration patterns, and the ability of interactions to predict the different stages of writing in UI-based applications. Our work differs in its examination of conversation logs to uncover collaboration behaviors. Our focus on user-LLM conversations ties to recent large-scale analyses of such logs to infer the high-level tasks of conversations (Tamkin et al., 2024; Suri et al., 2024). We extend this beyond analyzing tasks alone by focusing on collaboration behaviors captured in users' follow-up utterances. Further, correlating collaboration behaviors with intents enables us to uncover meaningful implications for LLM alignment (§7). Finally, Collins et al. (2024a) present a notable exception in conducting a small-scale qualitative analysis of user-LLM collaboration behaviors in mathematical theorem proving. Similar to our findings (§6.1, 6.2), they find users engage in exploration and question-asking behaviors. Our work differs in its focus on writing, its at-scale analysis of in-the-wild interactions, and its correlation of writing intents with collaboration behaviors. We review related work on LLM-assisted writing with a human-centered focus and user satisfaction estimation from logs in Appendix A, and further discuss relevant studies in the context of our results in Section 6.

3 Analysis Setup

Our analysis is based on user-LLM conversational logs from Bing Copilot (Mehdi, 2023, BCP) and WildChat-1M (Zhao et al., 2024a, WC).
The two systems vary in their base LLMs, interfaces, and user bases and allow our analysis to identify shared behaviors likely to hold beyond deployments. Further, the public WildChat-1M dataset enables reproducibility and future work based on our analysis. For our analysis, we focus on English sessions engaged in writing tasks, excluding sessions focused on tasks like search or software development. We conceptualize writing sessions broadly as those where users generated interpersonal or public communicative texts, technical texts, creative texts, and those focused on summarization. We treat complete generations and rewrites of whole or parts of texts as writing. We operationalize our definition in an iteratively developed and manually validated GPT-4o based multi-label Task Classifier (fCoarseT) and use it to identify sessions focused on writing. We refer to the writing log datasets as BCP Wr and WC Wr. Appendix B details both datasets, their filtering, and fCoarseT.

        | Users | Countries | Top 5 countries
BCP Wr  | 202k  | 219       | US 30%, IN 15%, GB 6%, PH 6%, AU 6%
WC Wr   | 22k   | 166       | US 25%, GB 10%, RU 9%, IN 5%, PH 4%

Table 1: The number of users, countries, and the top countries where sessions originate (in %) in BCP Wr and WC Wr. They have 250k and 68k sessions respectively.

Bing Copilot - BCP Wr. We construct BCP Wr from a daily random sample of sessions from Bing Copilot gathered from April-Oct 2024, resulting in 20.5M sessions. To enable the study of user-LLM collaboration,
we ensure that sessions contain users' follow-up utterances and retain sessions with at least 2 user utterances and those in English. On the resulting 2.8M sessions, we run fCoarseT to identify writing sessions and retain 250k sessions in BCP Wr. GPT-4 powered all interactions in BCP.

WildChat - WC Wr. We follow a similar procedure to construct WC Wr from the public WildChat-1M, which Zhao et al. (2024a) gathered in a research study from April 2023 to May 2024. We retain sessions with at least 2 user utterances and English sessions. This results in 160k sessions, of which 68k are identified as writing sessions by fCoarseT. GPT-4 and GPT-3.5-Turbo powered the interactions in WC.

In Table 1, we see that the resulting BCP Wr and WC Wr contain sessions from a large group of users from over 150 countries. While many sessions originate in English-speaking countries, we observe a long tail of countries. In Table 5, we present session length characteristics, and find that sessions had 2 follow-ups on average and that half of all sessions contain only one writing intent.

Figure 2: Our analysis methods aim to discover PATHs (above) and identify how PATHs differ across writing intents in original requests (ORs) (below).

4 Log Analysis with PATHs

Our analysis centers around two research questions: (RQ1) What high-level collaboration behaviors emerge from user interactions in LLM-assisted writing? And (RQ2) How do these collaboration behaviors differ across writing intents? We overview our analysis method here (see Figure 2) and include detailed descriptions in Appendix C.

Identifying PATHs. To answer RQ1 we classify user utterances into "Original Requests" (OR) and a set of follow-up types (Table 2) using a GPT-4o based follow-up classifier (fFollowU).
ORs represent the primary writing goal of a user in a session, and follow-up types categorize users' follow-ups into higher-level behaviors (e.g., Figure 1). fFollowU was developed iteratively and manually validated to ensure its accuracy for subsequent analysis (see Appendix C.1). Then, we identify co-occurring patterns of follow-up types using Principal Component Analysis (PCA) (Bengio et al., 2013). We take each principal component to represent a PATH. Specifically, we represent a dataset's sessions S with a "tf-idf" representation of its follow-up types (F) and run PCA on F. This transforms it as P = FW, where each dimension of P represents a mutually co-occurring set of follow-up types and segments S into subsets of sessions with a consistent PATH. We retain the first l dimensions of P that explain 80-85% of the variance in F, treating the rest as noise. Our use of PCA follows its standard use for exploratory data analysis (Eagle and Pentland, 2009; Reades et al., 2009); its linear nature enables us to easily visualize how follow-up types combine to form PATHs (through W), and its closed-form solution ensures the consistency of results across re-runs (Greene et al., 2014).

User utterance type   | Description
ORIGINAL REQUEST      | User makes a new request
RESTATES REQUEST      | Reformulates their request
ELABORATES REQUEST    | Expands on their request
REQUESTS ANSWERS      | Question related to output
REQUESTS MORE OUTPUTS | Asks for additional output
CHANGE STYLE          | Changes style of output
ADDS CONTENT          | Adds content to output
REMOVES CONTENT       | Removes content from output
COURTESY RESPONSE     | A courtesy or pleasantry
RESPONDS POSITIVELY   | Explicitly pleased with output
RESPONDS NEGATIVELY   | Explicitly unhappy with output
UNDEFINED RESPONSE    | No defined label applies

Table 2: User utterances are classified into ORIGINAL REQUESTS and high-level follow-up types.

Writing Intent Types: IMPROVE TEXT, GENERATE PROFESSIONAL DOC, GENERATE MESSAGE, GENERATE CATCHY TEXT, GENERATE BIO, GENERATE STORY, GENERATE SUMMARY, GENERATE TECHNICAL TEXT, GENERATE SCRIPT, GENERATE CHARACTER, GENERATE ESSAY, GENERATE POEM, GET REFERENCES, GENERATE SONG, GENERATE ONLINE POST, GENERATE JOKE, QUESTION ABOUT WRITING, UNDEFINED REQUEST.

Table 3: Users' ORIGINAL REQUESTS are classified into the above writing intents.

Correlating Intents and PATHs. To answer RQ2, we classify Original Requests into a finer-grained set of writing intents (Table 3) with a GPT-4o based multi-label intent classifier (fWritingI). fWritingI was also developed iteratively and manually validated (see Appendix C.2). Then we run a logistic regression correlating intents (predictors) from fWritingI with PATHs (targets). Analyzing the learned coefficients of the regression models allows us to identify statistically significant correlations between writing intents and PATHs in a principled manner. Logistic regressions also enable easy interpretation and follow a large body of prior work (Gujarati, 2021). Finally, to gain a deeper understanding of user behaviors in a correlated intent-PATH pair, two authors conducted a qualitative analysis of the pairs which showed statistically significant correlations and were repeated in BCP Wr and WC Wr.
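The PATH-discovery step of §4 (tf-idf features over follow-up types, PCA giving P = FW, and retaining components up to the 80-85% variance target) can be sketched in a few lines of numpy. The data below is synthetic and the function names are ours; the paper's exact feature pipeline and tf-idf variant are not public, so treat this as an illustrative sketch rather than the authors' implementation.

```python
# Sketch of PATH discovery, assuming synthetic session data: sessions are
# represented by tf-idf weighted counts of their follow-up types, PCA (via SVD)
# yields P = F W, and the first l components covering ~85% variance are kept.
import numpy as np

def tfidf(counts: np.ndarray) -> np.ndarray:
    """Term frequency times smoothed inverse document frequency."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)
    idf = np.log((1 + counts.shape[0]) / (1 + df)) + 1
    return tf * idf

def paths_via_pca(F: np.ndarray, target_var: float = 0.85):
    """Return session scores P, follow-up-type loadings W, and number of PATHs l."""
    Fc = F - F.mean(axis=0)                       # center before PCA
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    var_ratio = s ** 2 / np.sum(s ** 2)           # variance explained per PC
    l = int(np.searchsorted(np.cumsum(var_ratio), target_var)) + 1
    W = Vt[:l].T                                  # loadings: how types form PATHs
    return Fc @ W, W, l

rng = np.random.default_rng(0)
counts = rng.poisson(lam=0.5, size=(200, 12))     # 200 toy sessions x 12 follow-up types
F = tfidf(counts)
P, W, l = paths_via_pca(F)
```

The columns of W show which follow-up types load on each retained component, mirroring the loading heatmaps the paper reads PATHs from (Figure 3b).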
Author-driven manual analysis was conducted to overcome the lack of access to Bing Copilot or WildChat users, a fundamental challenge in all log-based studies (Dumais et al., 2014). In §6 we include example conversations examined by both authors to illustrate our findings.

5 Results – Exploring PATHs

We start with (RQ1): What high-level collaboration behaviors emerge from user interactions in LLM-assisted writing? We do this by visualizing the frequency of follow-up types identified by fFollowU (Figure 3a) and the correlations between the follow-up types and PATHs identified by PCA (Figure 3b). We discuss specific PATHs alongside writing intents and examples in §6.

Follow-up Trends. In Figure 3a, we see that the most frequent follow-up types across BCP Wr and WC Wr are similar (Table 9 contains examples). Across datasets, users frequently (18-30% of sessions) follow up by revising or elaborating on their Original Request (F1, F2 in Fig. 3a), ask questions about the generation (F3), explore additional outputs (F4), or modify the generations (F5, F6). Follow-ups with explicit positive/negative feedback or courtesy responses indicating satisfaction (F8-F10) are rare and occur in only 1-5% of the sessions.

High-level trends in PATHs. In Figure 3b we visualize how follow-up types form PATHs (W from PCA) and the variance explained by each PATH/PC. Seven PATHs explain 80-85% of the variance, showing that a small set of collaboration behaviors explains the bulk of variance in follow-up behaviors in BCP Wr and WC Wr. Further, each PATH accounts for a
similar and small percentage of variance (8-14%), with most PATHs corresponding to a single follow-up type. This suggests that each follow-up type captures a distinct form of collaboration, with multiple co-occurring follow-up types being less frequent. When follow-up types do co-occur, this is more common in rare follow-up types (e.g., ELABORATES REQUEST and COURTESY RESPONSE co-occur in PATH5).

PATHs in BCP Wr vs WC Wr. In Figure 3b, we also see that the discovered PATHs share significant similarities despite differences in the Bing Copilot and WildChat deployments (e.g., base LLMs, system prompts, interfaces, and user bases). Specifically, we find RESTATES REQUEST (F1) accounts for similar amounts (14.5-14.6%) and the maximum variance in both datasets. Revising requests is similar to query reformulation in search engines such as Google or Bing Search. This is a dominant mode of interaction that is familiar to users (Alaofi et al., 2022, Sec 7.4), and they continue to engage in it. Further, exploring additional outputs (PATH2) explains the 2nd largest amount of variance in both datasets.

Figure 3: (a) The fraction of sessions which contain a follow-up type. (b) The correlation between follow-up types and principal components (PCs) inferred by PCA. Each PC represents a PATH. Large positive values (pink boxes) indicate stronger correlations. The percentages (bottom) depict the variance explained by each PATH.

Similarly, PATH5 and PATH6 are also shared across both datasets, though these constitute less frequent follow-up types. These trends suggest that users collaborate with LLMs in very similar ways across both systems we examined. However, there are also some notable differences. PATH3, while correlating with REQUESTS ANSWERS in both datasets, also correlated with CHANGE STYLE in WC Wr.
Similarly, PATH4 correlated with ADDS CONTENT and CHANGE STYLE respectively. Despite the differences in PATH3 and 4, note that they aim to modify LLM generations in different ways. We hypothesize that this difference is due to varying writing intent mixes across BCP Wr and WC Wr (Figure 8). As we see in §6, writing intents correlate with different PATHs and may result in different behaviors at the dataset level when their proportions vary. Our results suggest the following implications for future work.

Implications: • Leverage implicit feedback in users' follow-ups for LLM alignment. • Understand why users request revisions and leverage it for LLM alignment. • Investigate users' exploration behaviors in writing sessions and examine how they can be used for better alignment.

6 Results – Correlating Intents and PATHs

Here we answer (RQ2): How do users' collaboration behaviors differ across writing intents? We do this by correlating writing intents and PATHs in regressions. When PATHs aren't shared across BCP Wr and WC Wr, we use follow-up types as targets. Table 4 summarizes the behaviors examined in our analysis.

Target in BCP Wr, WC Wr  | Description              | Section
PATH2, PATH2             | Requesting more output.  | §6.1
PATH3, REQUESTS ANSWERS  | Requesting answers.      | §6.2
PATH4, ADDS CONTENT      | Adding content.          | §6.3
CHANGE STYLE, PATH4      | Changing style.          | §D.1
PATH1, PATH1             | Revising requests.       | §D.2
PATH6, PATH6             | Elaborating on requests. | §D.2

Table 4: Overview of PATHs in our regression analysis.

To ensure generalization of our findings, we highlight statistically significant correlations shared across BCP Wr and WC Wr, and examine frequent writing intents (Figure 8). We present the results of our mixed-methods analysis with regression coefficient plots and example conversations (truncated and rephrased) examined in qualitative analysis, and discuss the implications of our findings for future work.

6.1 Requesting more outputs to brainstorm or stage long generations

Figure 4a depicts writing intents correlated with PATH2, where users REQUEST MORE OUTPUTS. We start by noting that GEN CATCHY TEXT (I10) shows a strong positive correlation with PATH2. Analysis of sessions revealed that when users aimed to generate eye-catching texts such as product names, document titles, email subjects, etc., users requested more outputs aiming to brainstorm more creative, catchy texts. Figure 4b depicts examples (Ex 1 and 2) from BCP Wr and WC Wr.

Figure 4: (a) Logistic regression coefficients for intents vs PATH2 (requesting more outputs) in WC Wr. Large positive values in coefficient plots indicate strong correlations. The intents discussed in §6.1 are highlighted in color. Coefficients for BCP Wr are plotted in Figure 12. (b) Users engaged in intents I10, I12, and I13.

This behavior finds precedent in prior work on creativity
support, who note the value of diverse ideas during brainstorming (Frich et al., 2019).

Figure 4 (b): Example conversations from the intents highlighted in (a) – intent and example colors are matched.

PATH2 may also be seen as a form of pluralistic alignment, i.e., eliciting overtone alignment (Sorensen et al., 2024) – exploring overtone alignment for brainstorming represents meaningful future work. Next, the intents GENERATE SCRIPT (I12) and GEN CHARACTER (I13) show a weaker positive correlation with PATH2. Here, users attempted to generate media scripts (e.g., YouTube videos) or fictional characters (Ex 4, 3, and 5 in Figure 4b). Analysis of sessions revealed that while some GENERATE CHARACTER sessions engaged in brainstorming (Ex 4), users primarily engaged in staged generation of long texts (Ex 3 and 5). Here, users' follow-ups also varied in specificity, from simply asking for more output (Ex 5) to being more specific (Ex 3). To our knowledge, this represents the first evidence of multi-turn construction of creative narratives in the wild. While prior work has explored interactive generation of long narratives through plans (Yao et al., 2019), interactions (Brahman et al., 2020), and complex instructions (Pham et al., 2024), multi-turn construction of creative narratives remains under-explored.

Implications: • Develop and evaluate overtone-aligned LLMs for brainstorming. • Develop resources and models to generate creative
narratives with under-specified multi-turn interaction.

6.2 Asking follow-up questions to learn or stage long generations

Figure 5a shows writing intents correlated with PATH3, where users ask questions in response to model generations. Users tended to ask questions when they generated professional documents (I3), summaries (I4), technical texts (I5), and fictional character narratives (I13). Our analysis revealed that users' questions differed across these intents, from those asking about domain-specific knowledge and norms (Ex 1, 2, 4, 5) to those focused on the LLM's prior generation (Ex 3, 6). We demonstrate this empirically in Figure 13.

When users generated technical texts or summaries, they asked knowledge-rich questions – needing the LLM to be grounded in domain-specific knowledge (Ex 2) or the task context (Ex 5). While prior work notes that users engage in information-seeking during knowledge-rich writing (Shen et al., 2023), we provide evidence for this behavior in the wild. Information seeking during knowledge-rich writing and the use of these questions for LLM alignment remain under-studied and may be explored in future work.

We also uncover novel and emerging behaviors in users asking questions to learn about professional norms or in generating fictional narratives. When users generated professional documents such as cover letters, their questions sought to learn about professional norms and expectations (Ex 1, 4) – needing the LLM to be grounded in these norms. While some prior work has explored writers' feedback-seeking behavior (Gero et al., 2023) and LLMs' potential to provide writing feedback (Li et al., 2024; Liang et al., 2024b), aligning LLMs to provide feedback in writing sessions represents an emerging problem. Finally, when users generated characters for their stories (I13), their questions followed up on the generated stories.

Figure 5: (a) Logistic regression coefficients for intents vs PATH3 (requesting answers) in BCP Wr. The intents discussed in §6.2 are highlighted in color. Coefficients for WC Wr are plotted in Figure 12. (b) Users engaged in intents I3, I4, I5, and I13 – example conversations from the intents highlighted in (a); intent and example colors are matched.

Analysis of sessions revealed that users sought to direct the generation of long-form fictional narratives (Ex 3, 6), mirroring §6.1. While some prior work explores question answering for fictional narratives (Xu et al., 2022), a detailed understanding of users' motivations for this behavior and the use of questions to interactively build narratives remains under-explored.

Implications: • Investigate question-asking behaviors in knowledge-rich and creative writing and explore their use for session-level alignment. • Evaluate and develop methods to use LLMs for providing writing feedback in high-stakes document writing.

6.3 Adding to generations when they lacked
content known only to the user

Figure 6a shows writing intents correlated with PATH4, where users add content to model generations. This behavior correlates with generating professional documents (I3), messages (I2), and creative narratives (I11-I13). Our analysis of sessions found that users overwhelmingly added content when model generations were missing information likely to be known only to the users. When users generated professional documents such as resumes, cover letters, clinical notes, etc., they added information about their personal encounters or skills (Ex 1, 4). Similarly, when they generated communicative texts such as emails, letters, or speeches, they added personal stories (Ex 2, 5). This suggests that these intents may benefit from personalization from users' historical chats or from proactive question asking to obtain missing information. While a large body of recent work has explored personalization of LLMs (Mysore et al., 2024; Magister et al., 2024) or proactive interaction (Deng et al., 2025), we highlight intents where personalization or proactive interaction may be most meaningful in the wild. Finally, in generating fictional texts such as stories, scripts, or character descriptions (I11-I13), users added plots which tailored generations to their fictional visions (Ex 3, 6). This behavior overlaps with PATHs where users requested more outputs (§6.1) and asked questions to stage long generations (§6.2), and remains understudied.

Implications: • Develop methods to infer when long communicative texts are missing users' personal knowledge and, if appropriate, obtain this information proactively, or personalize generations based on users' historical documents. • Develop resources and methods to incorporate long-form feedback into creative narratives generated in multi-turn interactions.
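The intent-to-PATH regressions behind the coefficient plots in this section (Figures 4-6) can be sketched as below. This is a minimal numpy implementation on synthetic data: binary intent indicators stand in for fWritingI labels, the target marks whether a session exhibits a PATH, and Wald z-tests stand in for the significance testing behind the p <= 0.05 markers; the authors' exact regression setup is not reproduced here.

```python
# Sketch of a logistic regression with Wald p-values, fit by Newton-Raphson.
# Synthetic data only: intent 0 truly drives the PATH, intents 1-2 do not.
import numpy as np
from math import erfc

def logistic_fit(X: np.ndarray, y: np.ndarray, iters: int = 30):
    """MLE coefficients and two-sided Wald p-values for y ~ sigmoid(b0 + X @ b)."""
    X1 = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):                        # Newton-Raphson updates
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        H = X1.T @ (X1 * (p * (1 - p))[:, None])  # observed information matrix
        beta = beta + np.linalg.solve(H, X1.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X1 @ beta))          # recompute at the optimum
    cov = np.linalg.inv(X1.T @ (X1 * (p * (1 - p))[:, None]))
    z = beta / np.sqrt(np.diag(cov))              # Wald z-statistics
    pvals = np.array([erfc(abs(zi) / np.sqrt(2)) for zi in z])
    return beta, pvals

rng = np.random.default_rng(1)
intents = rng.integers(0, 2, size=(2000, 3)).astype(float)
true_logit = -1.0 + 1.5 * intents[:, 0]           # only intent 0 has an effect
has_path = (rng.random(2000) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
beta, pvals = logistic_fit(intents, has_path)
```

Reading the fit mirrors reading the paper's plots: a large positive coefficient with a small Wald p-value flags an intent as significantly associated with the PATH.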
In Appendix D we show how users change generation style to better match readers' expectations, and find dataset-dependent trends in revising (PATH1) and elaborating (PATH5) on requests.

Figure 6: (a) Logistic regression coefficients for intents vs PATH4 (adding content) in BCP Wr. The intents discussed in §6.3 are highlighted in color. Coefficients for WC Wr are plotted in Figure 12. (b) Users engaged in intents I2, I3, I11-I13 – example conversations from the intents highlighted in (a); intent and example colors are matched.

7 Discussion

Taken together, our results reveal that users interact with LLMs in highly collaborative and goal-driven ways — they co-construct creative texts, probe LLMs for domain knowledge as they write, and shape text to meet long-term communicative goals. These PATHs highlight behaviors not captured by simpler measures of satisfaction and classification into tasks, and highlight the importance of modeling user behaviors to align LLMs with long-term user needs. Next, we synthesize the implications of our results and highlight meaningful areas of future
work suggested by our results.

Toward Session-Level Alignment from Natural Feedback. We showed that user follow-ups rarely include explicit positive or negative feedback (§5) and that users engage in open-ended collaboration behaviors (§6). However, a majority of present work on LLM alignment remains restricted to single-turn alignment from explicit preference feedback (Chaudhari et al., 2024). Our results suggest that aligning LLMs from natural feedback over multi-turn interactions presents an increasingly meaningful area of future work. Early work has begun to explore alignment with single-turn natural language feedback (Don-Yehiya et al., 2024; Shi et al., 2024) or simulated multi-turn feedback (Gao et al., 2024; Wu et al., 2025). Future work may explore building and learning from simulators that follow realistic user behaviors captured in existing log datasets, or the design of reward models trained on implicit user feedback.

Further, while some PATHs are task-oriented and aim to directly tune LLM generations (e.g., changing style or adding content), others guide an exploratory session (e.g., requesting more outputs or answers). While task-oriented sessions have received significant attention, alignment and evaluation for exploratory sessions remain largely under-studied in the LLM era, representing exciting future work.

Aligning to Under-Specified Feedback and Goals. User follow-ups vary in their level of specificity; this is clearest in the staged generation of long texts. Here, users' follow-ups increased in specificity from REQUESTS MORE OUTPUTS (§6.1), to REQUESTS ANSWERS (§6.2), and to ADDS CONTENT (§6.3). We show this empirically in Figure 14. This highlights a line of future work focused on LLM alignment from under-specified natural language feedback. Most current work assumes user interactions (ORs and PATHs) to be fully specified.
While a sizable body of work has focused on uncertainty estimation for LLM outputs (Geng et al., 2024), limited work has explored uncertainty in user interactions. This early work has sought to resolve under-specificity through task context (Malaviya et al., 2024; Sarkar et al., 2025), proactive question asking (Kobalczyk et al., 2025; Li et al., 2025), and coverage-based methods (Hou et al., 2024). Exploring these and other strategies is a valuable direction for future work.

LLMs as Learning Partners: Aligning for User Growth. Across writing intents and PATHs, we find that users leverage LLMs not merely as task assistants, but as partners in their learning and development. Users brainstormed with models to generate more compelling or creative text (§6.1), asked questions to understand professional norms while drafting documents (§6.2), requested feedback to improve fictional narratives (§6.2), and iterated on text to better suit their audiences (§D.1). These behaviors reflect a desire not only to produce better outputs, but to use LLMs for longer-term learning. Our findings suggest that aligning LLMs to support users' growth and learning represents a compelling and underexplored design goal. Recent work has echoed this vision (Collins et al., 2024b), and prototype systems are beginning to emerge (Li et al., 2024; Chamoun et al., 2024). However, realizing this potential will require addressing critical challenges: mitigating the risk
of cultural bias in “coaching,” preventing homogenization of users' voices and ideas (Agarwal et al., 2025; Drosos et al., 2025), and designing interactions that preserve the “productive struggle” known to facilitate deeper learning (Rus and Kendeou, 2025). Ultimately, building LLM systems for learning will demand rich interdisciplinary work, drawing on learning sciences, HCI, and NLP to understand how LLM systems can guide and challenge users, rather than merely serve their short-term needs.

8 Limitations

In this paper,1 we analyzed a large sample of Bing Copilot and WildChat user-LLM interaction logs spanning seven and thirteen months of global usage. We focused on user-LLM collaboration behaviors in English writing sessions. Our analysis took a mixed-methods approach, using PCA to identify prototypical human-AI collaboration behaviors (PATHs) and regression analysis to identify statistically significant correlations between writing intents and PATHs. We paired this with lightweight qualitative analysis to gain further insight into automatically highlighted behaviors. We ensure the reliability of our analysis through manual validation of all automated components. We contribute the first understanding of in-the-wild user-LLM collaboration behaviors and discuss their implications for LLM alignment. Now, we highlight some drawbacks of our analysis and its broader impact.

Data and Analysis. Our analysis was limited to English sessions. Therefore, it is likely that some collaboration behaviors specific to non-English users were missed in our analysis. We made this choice to ensure the reliability of LLM classifiers and to ensure that log text was intelligible to all authors during annotation and analysis. Extending our analysis to non-English logs represents a meaningful area of future work. Next, our analysis (§4) does not model the order of follow-ups in a session and uses linear PCA to identify PATHs.
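The linear PCA step mentioned above can be illustrated with a minimal sketch. The session representation here is an assumption for illustration only: each session is encoded as a vector of follow-up-type counts, and each principal component is read as a candidate "PATH" (a pattern of follow-up behaviors that co-occur). The paper's actual feature set and preprocessing may differ.

```python
import numpy as np

# Toy stand-in for real log data: one row per session, one column per
# follow-up type. The columns and count distributions are illustrative.
rng = np.random.default_rng(0)
X = rng.poisson(lam=[2.0, 1.0, 0.5, 1.5], size=(500, 4)).astype(float)

# Linear PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal components: each is a weighted pattern of
# follow-up behaviors. Variance explained indicates how dominant each
# candidate "PATH" is across sessions.
explained_var = (S ** 2) / (len(X) - 1)
explained_ratio = explained_var / explained_var.sum()
print(Vt[0])            # loading pattern of the first component
print(explained_ratio)  # variance explained per component
```

In practice one would inspect the top components' loadings and retain only those that correspond to interpretable behavior patterns.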
We rely on simpler modeling to ensure the interpretability of results in a first analysis; future work may explore richer modeling of follow-ups to discover more complex PATHs. Finally, we rely on a lightweight qualitative analysis to understand users' goals in correlated intent-PATH pairs. Future work may explore these behaviors in controlled human-centered studies with direct access to the actual users.

1 LLM assistants were used to improve the phrasing of some sentences in this paper and fix grammatical errors.

Broader Impacts. We highlight LLM alignment from user-LLM interactions to be a meaningful area of future work in §6. However, we note that learning from implicit interactions may pose safety risks if user interactions are adversarial (e.g., Zhao et al., 2024b) or privacy risks (e.g., Xin et al., 2024) when implicit interactions encode private user information. This may result in user interactions being leaked from aligned LLMs. Therefore, ensuring safety and privacy must be an important aspect of learning from implicit interactions.

References

Dhruv Agarwal, Mor Naaman, and Aditya Vashistha. 2025. AI suggestions homogenize writing toward Western styles and diminish cultural nuances. Preprint, arXiv:2409.11360.

Marwah Alaofi, Luke Gallagher, Dana Mckay, Lauren L. Saling, Mark Sanderson, Falk Scholer, Damiano Spina, and Ryen W. White. 2022. Where do queries come from? In Proceedings of the 45th International
ACM SIGIR Conference on Research and Development in Information Retrieval , SIGIR ’22, page 2850–2862, New York, NY , USA. Association for Computing Machinery. Aparna Ananthasubramaniam, Hong Chen, Jason Yan, Kenan Alkiek, Jiaxin Pei, Agrima Seth, Lavinia Dunagan, Minje Choi, Benjamin Litterer, and David Jurgens. 2023. Exploring linguistic style matching in online communities: The role of social context and conversation dynamics. In Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023) , pages 64–74, Toronto, Canada. Asso- ciation for Computational Linguistics. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. , 35(8):1798–1828. Advait Bhat, Saaket Agashe, Parth Oberoi, Niharika Mohile, Ravi Jangir, and Anirudha Joshi. 2023. Inter- acting with next-phrase suggestions: How suggestion systems aid and influence the cognitive processes of writing. In Proceedings of the 28th International Conference on Intelligent User Interfaces , IUI ’23, page 436–452, New York, NY , USA. Association for Computing Machinery. Ananya Bhattacharjee, Jina Suh, Mahsa Ershadi, Shamsi T. Iqbal, Andrew D. Wilson, and Javier Hernandez. 2024. Understanding communication 9 preferences of information workers in engagement with text-based conversational agents. Preprint , arXiv:2410.20468. Param Biyani, Yasharth Bajpai, Arjun Radhakrishna, Gustavo Soares, and Sumit Gulwani. 2024. Rubicon: Rubric-based evaluation of domain-specific human ai conversations. In Proceedings of the 1st ACM International Conference on AI-Powered Software , AIware 2024, page 161–169, New York, NY , USA. Association for Computing Machinery. Faeze Brahman, Alexandru Petrusca, and Snigdha Chaturvedi. 2020. Cue me in: Content-inducing approaches to interactive story generation. 
In Pro- ceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Lin- guistics and the 10th International Joint Conference on Natural Language Processing , pages 588–597, Suzhou, China. Association for Computational Lin- guistics. Natalie Grace Brigham, Chongjiu Gao, Tadayoshi Kohno, Franziska Roesner, and Niloofar Mireshghal- lah. 2024. Developing story: Case studies of generative ai’s use in journalism. Preprint , arXiv:2406.13706. Daniel Buschek. 2024. Collage is the new writing: Ex- ploring the fragmentation of text and user interfaces in ai tools. In Proceedings of the 2024 ACM Design- ing Interactive Systems Conference , DIS ’24, page 2719–2737, New York, NY , USA. Association for Computing Machinery. Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brah- man, and Smaranda Muresan. 2024. Creativity sup- port in the age of large language models: An em- pirical study involving emerging writers. Preprint , arXiv:2309.12570. Eric Chamoun, Michael Schlichtkrull, and Andreas Vla- chos. 2024. Automated focused feedback generation for scientific writing assistance. In Findings of the As- sociation for Computational Linguistics: ACL 2024 , pages 9742–9763, Bangkok, Thailand. Association for Computational Linguistics. Shreyas Chaudhari, Pranjal Aggarwal, Vishvak Mura- hari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan, Ameet Deshpande, and Bruno Castro da Silva. 2024. Rlhf deciphered: A critical analysis of reinforcement learning from human feedback for llms. arXiv preprint arXiv:2404.08555 . Jia Chen, Jiaxin Mao, Yiqun Liu, Fan Zhang, Min Zhang, and Shaoping Ma. 2021. Towards a better un- derstanding of query reformulation behavior in web search. In Proceedings of the Web Conference 2021 , WWW ’21, page 743–755, New York, | https://arxiv.org/abs/2505.16023v2 |
NY, USA. Association for Computing Machinery.

Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, and Mateja Jamnik. 2024a. Evaluating language models for mathematics through interactions. Proceedings of the National Academy of Sciences, 121(24):e2318124121.

Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, Mina Lee, Cedegao E. Zhang, Tan Zhi-Xuan, Mark Ho, Vikash Mansinghka, and 1 others. 2024b. Building machines that learn and think with people. Nature Human Behaviour, 8(10):1851–1863.

Yang Deng, Lizi Liao, Wenqiang Lei, Grace Hui Yang, Wai Lam, and Tat-Seng Chua. 2025. Proactive conversational AI: A comprehensive survey of advancements and opportunities. ACM Trans. Inf. Syst., 43(3).

Paramveer S. Dhillon, Somayeh Molaei, Jiaqi Li, Maximilian Golub, Shaochun Zheng, and Lionel Peter Robert. 2024. Shaping human-AI collaboration: Varied scaffolding levels in co-writing with language models. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA. Association for Computing Machinery.

Shachar Don-Yehiya, Leshem Choshen, and Omri Abend. 2024. Naturally Occurring Feedback is Common, Extractable and Useful. Preprint, arXiv:2407.10944.

Fiona Draxler, Anna Werner, Florian Lehmann, Matthias Hoppe, Albrecht Schmidt, Daniel Buschek, and Robin Welsch. 2024. The AI ghostwriter effect: When users do not perceive ownership of AI-generated text but self-declare as authors. ACM Trans. Comput.-Hum. Interact., 31(2).

Ian Drosos, Advait Sarkar, Neil Toronto, and 1 others. 2025. "It makes you think": Provocations help restore critical thinking to AI-assisted knowledge work. arXiv preprint arXiv:2501.17247.

Susan Dumais, Robin Jeffries, Daniel M. Russell, Diane Tang, and Jaime Teevan. 2014.
Understanding User Behavior Through Log Data and Analysis , pages 349– 372. Springer New York, New York, NY . Nathan Eagle and Alex Sandy Pentland. 2009. Eigenbe- haviors: Identifying structure in routine. Behavioral ecology and sociobiology , 63:1057–1066. Jonas Frich, Lindsay MacDonald Vermeulen, Christian Remy, Michael Mose Biskjaer, and Peter Dalsgaard. 2019. Mapping the landscape of creativity support tools in hci. In Proceedings of the 2019 CHI Confer- ence on Human Factors in Computing Systems , CHI ’19, page 1–18, New York, NY , USA. Association for Computing Machinery. Yue Fu, Sami Foell, Xuhai Xu, and Alexis Hiniker. 2024. From text to self: Users’ perception of aimc tools on interpersonal communication and self. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems , CHI ’24, New York, NY , USA. Association for Computing Machinery. 10 Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, and Dipendra Misra. 2024. Aligning LLM agents by learning latent preference from user edits. InThe Thirty-eighth Annual Conference on Neural Information Processing Systems . Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. 2024. A sur- vey of confidence estimation and calibration in large language models. In Proceedings of the 2024 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (Volume 1: Long Papers) , pages 6577–6595, Mexico | https://arxiv.org/abs/2505.16023v2 |
City, Mexico. Association for Computational Linguistics. Katy Ilonka Gero, Tao Long, and Lydia B Chilton. 2023. Social dynamics of ai support in creative writing. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems , CHI ’23, New York, NY , USA. Association for Computing Machinery. Derek Greene, Derek O’Callaghan, and Pádraig Cun- ningham. 2014. How many topics? stability analysis for topic models. In Machine Learning and Knowl- edge Discovery in Databases: European Conference, ECML PKDD 2014, Nancy, France, September 15- 19, 2014. Proceedings, Part I 14 , pages 498–513. Springer. Damodar N Gujarati. 2021. Essentials of econometrics . Sage Publications. Jeffrey T Hancock, Mor Naaman, and Karen Levy. 2020. Ai-mediated communication: Definition, re- search agenda, and ethical considerations. Journal of Computer-Mediated Communication , 25(1):89–100. Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, and 1 oth- ers. 2025. Which economic tasks are performed with ai? evidence from millions of claude conversations. arXiv preprint arXiv:2503.04761 . Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, and Yang Zhang. 2024. Decompos- ing uncertainty for large language models through input clarification ensembling. In Proceedings of the 41st International Conference on Machine Learning , volume 235 of Proceedings of Machine Learning Research , pages 19023–19042. PMLR. Xi Yu Huang, Krishnapriya Vishnubhotla, and Frank Rudzicz. 2024. The gpt-writingprompts dataset: A comparative analysis of character portrayal in short stories. Preprint , arXiv:2406.16767. Angel Hsing-Chi Hwang, Q. Vera Liao, Su Lin Blod- gett, Alexandra Olteanu, and Adam Trischler. 2024. "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models. Preprint , arXiv:2411.13032. Bernard J. Jansen and Amanda Spink. 2006. How are we searching the world wide web? 
A comparison of nine search engine transaction logs. Information Processing & Management, 42(1):248–263. Formal Methods for Information Retrieval.

Maryam Kamvar and Shumeet Baluja. 2006. A large scale study of wireless search behavior: Google mobile search. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, page 701–709, New York, NY, USA. Association for Computing Machinery.

Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, Bertie Vidgen, and Scott A. Hale. 2024. The PRISM alignment dataset: What participatory, representative and individualised human feedback reveals about the subjective and multicultural alignment of large language models. Preprint, arXiv:2404.16019.

Katarzyna Kobalczyk, Nicolas Astorga, Tennison Liu, and Mihaela van der Schaar. 2025. Active task disambiguation with LLMs. arXiv preprint arXiv:2502.04485.

Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A. Alghamdi, Tal August, Avinash Bhat, Madiha Zahrah Choksi, Senjuti Dutta, Jin L.C. Guo, Md Naimul Hoque, Yewon Kim, Simon Knight, Seyed Parsa Neshaei, and 17 others. 2024. A design space for intelligent and interactive writing assistants. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA. Association
for Computing Machinery.

Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a human-AI collaborative writing dataset for exploring language model capabilities. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery.

Ge Li, Danai Vachtsevanou, Jérémy Lemée, Simon Mayer, and Jannis Strecker. 2024. Reader-aware writing assistance through reader profiles. In Proceedings of the 35th ACM Conference on Hypertext and Social Media, HT '24, page 344–350, New York, NY, USA. Association for Computing Machinery.

Shuyue Stella Li, Jimin Mun, Faeze Brahman, Jonathan S. Ilgen, Yulia Tsvetkov, and Maarten Sap. 2025. Aligning LLMs to ask good questions: A case study in clinical reasoning. arXiv preprint arXiv:2502.14860.

Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel A. McFarland, and James Y. Zou. 2024a. Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews. In ICML.

Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. 2025. The widespread adoption of large language model-assisted writing across society. Preprint, arXiv:2502.09747.

Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Scott Smith, Yian Yin, Daniel A. McFarland, and James Zou. 2024b. Can large language models provide useful feedback on research papers? A large-scale empirical analysis. NEJM AI, 1(8):AIoa2400196.

Ying-Chun Lin, Jennifer Neville, Jack Stokes, Longqi Yang, Tara Safavi, Mengting Wan, Scott Counts, Siddharth Suri, Reid Andersen, Xiaofeng Xu, Deepak Gupta, Sujay Kumar Jauhar, Xia Song, Georg Buscher, Saurabh Tiwary, Brent Hecht, and Jaime Teevan. 2024.
Interpretable user satisfaction estima- tion for conversational systems with large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers) , pages 11100–11115, Bangkok, Thailand. Association for Computational Linguistics. Lucie Charlotte Magister, Katherine Metcalf, Yizhe Zhang, and Maartje ter Hoeve. 2024. On the way to llm personalization: Learning to remember user conversations. arXiv preprint arXiv:2411.13405 . Moushumi Mahato, Avinash Kumar, Kartikey Singh, Javaid Nabi, Debojyoti Saha, and Krishna Singh. 2024. Exploring user dissatisfaction: Taxonomy of implicit negative feedback in virtual assistants. In Proceedings of the 21st International Conference on Natural Language Processing (ICON) , pages 230– 242, AU-KBC Research Centre, Chennai, India. NLP Association of India (NLPAI). Chaitanya Malaviya, Joseph Chee Chang, Dan Roth, Mohit Iyyer, Mark Yatskar, and Kyle Lo. 2024. Contextualized evaluations: Taking the guesswork out of language model evaluations. arXiv preprint arXiv:2411.07237 . Yusuf Mehdi. 2023. Reinventing search with a new ai-powered microsoft bing and edge, your copilot for the web. Accessed: 3 March 2025. Sheshera Mysore, Zhuoran Lu, Mengting Wan, Longqi Yang, Bahareh Sarrafzadeh, Steve Menezes, Tina Baghaee, Emmanuel Barajas Gonzalez, Jennifer Neville, and Tara Safavi. 2024. Pearl: Personal- izing large language model writing assistants with generation-calibrated retrievers. In Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a | https://arxiv.org/abs/2505.16023v2 |
Domain, Application, Group, or Individual (CustomNLP4U), pages 198–219, Miami, Florida, USA. Association for Computational Linguistics.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc.

Maria-Teresa De Rosa Palmini, Laura Wagner, and Eva Cetinic. 2024. Civiverse: A dataset for analyzing user engagement with open-source text-to-image models. Preprint, arXiv:2408.15261.

Shramay Palta, Nirupama Chandrasekaran, Rachel Rudinger, and Scott Counts. 2025. Speaking the right language: The impact of expertise alignment in user-AI interactions. Preprint, arXiv:2502.18685.

Chau Minh Pham, Simeng Sun, and Mohit Iyyer. 2024. Suri: Multi-constraint instruction following in long-form text generation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1722–1753, Miami, Florida, USA. Association for Computational Linguistics.

Martin J. Pickering and Simon Garrod. 2013. An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(4):329–347.

Chen Qu, Liu Yang, W. Bruce Croft, Johanne R. Trippas, Yongfeng Zhang, and Minghui Qiu. 2018. Analyzing and characterizing user intent in information-seeking conversations. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, page 989–992, New York, NY, USA. Association for Computing Machinery.

Jonathan Reades, Francesco Calabrese, and Carlo Ratti. 2009. Eigenplaces: analysing cities using the space–time structure of the mobile phone network.
En- vironment and planning B: Planning and design , 36(5):824–836. Vasile Rus and Panayiota Kendeou. 2025. Are llms actually good for learning? AI & SOCIETY , pages 1–2. Rupak Sarkar, Bahareh Sarrafzadeh, Nirupama Chan- drasekaran, Nagu Rangan, Philip Resnik, Longqi Yang, and Sujay Kumar Jauhar. 2025. Conversa- tional user-ai intervention: A study on prompt rewrit- ing for improved llm response generation. Preprint , arXiv:2503.16789. Bahareh Sarrafzadeh, Sujay Kumar Jauhar, Michael Gamon, Edward Lank, and Ryen W. White. 2021. Characterizing stage-aware writing assistance for col- laborative document authoring. Proc. ACM Hum.- Comput. Interact. , 4(CSCW3). Zejiang Shen, Tal August, Pao Siangliulue, Kyle Lo, Jonathan Bragg, Jeff Hammerbacher, Doug Downey, Joseph Chee Chang, and David Sontag. 2023. Be- yond summarization: Designing ai support for real- world expository writing tasks. arXiv preprint arXiv:2304.02623 . 12 Taiwei Shi, Zhuoer Wang, Longqi Yang, Ying-Chun Lin, Zexue He, Mengting Wan, Pei Zhou, Sujay Jauhar, Sihao Chen, Shan Xia, and 1 others. 2024. Wildfeed- back: Aligning llms with in-situ user interactions and feedback. arXiv preprint arXiv:2408.15549 . Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2024. A long way to go: Investigating length correlations in RLHF. In First Conference on Lan- guage Modeling . Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, and Yejin Choi. 2024. Position: A roadmap to pluralistic alignment. In ICML . Siddharth Suri, Scott Counts, Leijie Wang, Chacha Chen, Mengting Wan, | https://arxiv.org/abs/2505.16023v2 |
Tara Safavi, Jennifer Neville, Chirag Shah, Ryen W. White, Reid Andersen, Georg Buscher, Sathish Manivannan, Nagu Rangan, and Longqi Yang. 2024. The use of generative search engines for knowledge work and complex tasks. Preprint, arXiv:2404.04268.

Alex Tamkin, Miles McCain, Kunal Handa, Esin Durmus, Liane Lovitt, Ankur Rathi, Saffron Huang, Alfred Mountfield, Jerry Hong, Stuart Ritchie, Michael Stern, Brian Clarke, Landon Goldberg, Theodore R. Sumers, Jared Mueller, William McEachen, Wes Mitchell, Shan Carter, Jack Clark, and 2 others. 2024. Clio: Privacy-preserving insights into real-world AI use. Preprint, arXiv:2412.13678.

Trang Tran and Mari Ostendorf. 2016. Characterizing the language of online communities and its relation to community reception. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1030–1035, Austin, Texas. Association for Computational Linguistics.

Johanne R. Trippas, Sara Fahad Dawood Al Lawati, Joel Mackenzie, and Luke Gallagher. 2024. What do users really ask large language models? An initial log analysis of Google Bard interactions in the wild. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, page 2703–2707, New York, NY, USA. Association for Computing Machinery.

Kailas Vodrahalli and James Zou. 2024. ArtWhisperer: A dataset for characterizing human-AI interactions in artistic creations. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 49627–49654. PMLR.

Qian Wan, Siying Hu, Yu Zhang, Piaohong Wang, Bo Wen, and Zhicong Lu. 2024. "It felt like having a second mind": Investigating human-AI co-creativity in prewriting with large language models. Proc. ACM Hum.-Comput. Interact., 8(CSCW1).

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2024.
Text embeddings by weakly-supervised contrastive pre-training. Preprint , arXiv:2212.03533. Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman. 2024. Bias in ai autocomplete suggestions leads to attitude shift on societal issues. PsyArXiv . Shujin Wu, Yi R. Fung, Cheng Qian, Jeonghwan Kim, Dilek Hakkani-Tur, and Heng Ji. 2025. Aligning LLMs with individual preferences via interaction. In Proceedings of the 31st International Conference on Computational Linguistics , pages 7648–7662, Abu Dhabi, UAE. Association for Computational Linguis- tics. Rui Xin, Niloofar Mireshghallah, Shuyue Stella Li, Michael Duan, Hyunwoo Kim, Yejin Choi, Yulia Tsvetkov, Sewoong Oh, and Pang Wei Koh. 2024. A false sense of privacy: Evaluating textual data san- itization beyond surface-level privacy leakage. In Neurips Safe Generative AI Workshop 2024 . Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bing- sheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. 2022. Fan- tastic questions and where to find them: FairytaleQA – an authentic dataset for narrative comprehension. InProceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 447–460, Dublin, Ireland. Association for Computational Linguistics. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan-and- write: | https://arxiv.org/abs/2505.16023v2 |
Towards better automatic storytelling. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):7378–7385.

Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. 2024a. WildChat: 1M ChatGPT interaction logs in the wild. In The Twelfth International Conference on Learning Representations.

Wenting Zhao, Alexander M. Rush, and Tanya Goyal. 2024b. Challenges in trustworthy human evaluation of chatbots. arXiv preprint arXiv:2412.04363.

Zixin Zhao, Damien Masson, Young-Ho Kim, Gerald Penn, and Fanny Chevalier. 2025. Making the write connections: Linking writing support tools with writer's needs. arXiv preprint arXiv:2502.13320.

Shengqi Zhu, Jeffrey M. Rzeszotarski, and David Mimno. 2025. Data paradigms in the era of LLMs: On the opportunities and challenges of qualitative data in the wild. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA '25, New York, NY, USA. Association for Computing Machinery.

A Extended Related Work

Having discussed prior work on user-LLM log analysis in §2, we discuss additional related work here.

Measuring user satisfaction. Our analysis of users' follow-up behaviors also connects to prior work on predicting binary signals of dis/satisfaction from conversations (Lin et al., 2024; Biyani et al., 2024; Mahato et al., 2024). In our work, we go beyond binary notions of dis/satisfaction and provide a richer characterization of users' collaboration behaviors. Further, we find explicit satisfaction signals to be rare and highlight a need to understand and use implicit behaviors for LLM alignment (see §5). Concurrent analysis on Bing Copilot and WildChat-1M logs is also relevant to our work. Here, Palta et al. (2025) correlate the difference between expertise in human and LLM utterances with binary metrics of user satisfaction. And Brigham et al.
(2024) conduct a small-scale analysis of journalistic writing sessions in WildChat-1M and discuss its implications for responsible AI use in journalism. While these works differ in focus, together with our work, they paint a more complete picture of human-LLM interactions in the wild.

Human-centered Studies on Writing Assistance. A sizable body of work in HCI has focused on writing assistance. Recent surveys review this body of work more completely (Lee et al., 2024; Zhao et al., 2025). Prior work examining interactions with LLM writing assistants is most relevant to our work. In contrast to user-LLM log analysis, this research has had more direct interaction with users, enabling studies to probe user perceptions, behaviors, and impacts more deeply while forgoing an understanding of in-the-wild user behaviors outside study settings. Broadly, this work has focused on the impacts of LLM-assisted writing (Hancock et al., 2020), studied user-LLM co-creation behaviors, and designed novel interaction paradigms (Buschek, 2024). Work on impacts has examined aspects such as writers' perceived ownership over texts co-created with personalized LLMs (Draxler et al., 2024; Hwang et al., 2024), convergence toward normative stances, stereotypes, and styles in LLM-assisted writing (Williams-Ceci et al., 2024; Huang et al., 2024; Agarwal et al., 2025), and the perception of fit between
LLM outputs and task contexts (Fu et al., 2024; Bhattacharjee et al., 2024), among others. Studies focusing on the dynamics of human-AI co-creation examine the differences in how writers seek feedback from humans vs. LLMs (Gero et al., 2023), or how they seek assistance at different stages of writing (Wan et al., 2024; Chakrabarty et al., 2024). Still others have examined the impact of specific interaction behaviors on productivity and decision making (Bhat et al., 2023; Dhillon et al., 2024). Our analysis complements this line of work by supporting laboratory findings through large-scale analysis where applicable, and uncovers novel user behaviors likely to be missed because of smaller samples or constrained study designs.

B Analysis Setup - Details

In §3 we presented an overview of our writing session datasets BCP_Wr and WC_Wr filtered from Bing Copilot and WildChat-1M logs. Both datasets are obtained after two broad stages of filtering, which we describe here:

• Appendix B.1: Preliminary filtering, i.e., Bing Copilot → BCP_All and WildChat-1M → WC_All.
• Appendix B.2: Writing session filtering, i.e., BCP_All → BCP_Wr and WC_All → WC_Wr.
• Appendix B.3: Validating the task classifiers used to identify writing sessions.

B.1 Dataset Description and Preliminary Filtering

Bing Copilot → BCP_All. We start with a daily random sample of Bing Copilot logs from a seven-month period spanning 1 April to 31 October 2024, amounting to 20.5M conversational sessions. GPT-4 powered Bing Copilot during this period. We perform an initial filtering based on logged metadata. Since we aim to study users' follow-up behaviors, we retain sessions with 2 or more user utterances, sessions that are likely to be in English (based on the user's system language) to ensure familiarity of data to all authors, and sessions conducted on a personal computer; this results in 3.6M sessions.
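The metadata-based filtering step described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the column names (`n_user_utterances`, `system_language`, `device`) and values are hypothetical stand-ins for the logged metadata fields mentioned in the text.

```python
import pandas as pd

# Toy session log with hypothetical metadata columns.
sessions = pd.DataFrame({
    "session_id": [1, 2, 3, 4],
    "n_user_utterances": [3, 1, 5, 2],
    "system_language": ["en", "en", "de", "en"],
    "device": ["pc", "pc", "pc", "mobile"],
})

# Retain sessions with >= 2 user utterances, an English system language,
# and conducted on a personal computer.
kept = sessions[
    (sessions["n_user_utterances"] >= 2)
    & (sessions["system_language"] == "en")
    & (sessions["device"] == "pc")
]
print(kept["session_id"].tolist())  # -> [1]
```

Each boolean condition mirrors one of the three metadata filters; combining them with `&` keeps only sessions passing all three.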
We choose to retain sessions on personal computers to limit the impact of behavioral differences across devices; e.g., prior work notes limited diversity of queries in search on mobile phones compared to personal computers (Kamvar and Baluja, 2006). Next, we perform a second stage of language filtering based on the first 1500 characters (about 300 words) of a session's text using a Language ID model.2 Metadata-based filtering alone proved to be insufficient to filter non-English sessions. We retain 2.8M English sessions, and we refer to this as BCP_All. This dataset was labeled with a coarse Task Classifier (fCoarseT) to identify writing sessions.

2 FastText Language ID model: lid.176.bin

[Figure 7: The proportion of coarse tasks in BCP_All (2.8M sessions) and WC_All (160k sessions). The sessions are in English and likely to have at least 1 user follow-up. A session can have multiple coarse tasks. On average, BCP_All and WC_All had 1.25 and 1.43 tasks per session.]

Note that all our Bing Copilot data was scrubbed for personally identifying information, held on secure servers, labeled with privacy-compliant LLM deployments, and our analysis was conducted in accordance with organizational privacy policies and data handling practices.

WildChat-1M → WC_All. We start with the public3 WildChat-1M dataset available under an AI2 ImpACT license. Zhao
et al. (2024a) gathered this data over thirteen months, from 9 April 2023 to 1 May 2024, in a HuggingFace Hub space (the dataset is available at allenai/WildChat-1M; accessed 12 December 2024). The sessions contain interactions with GPT-4 and GPT-3.5-Turbo. Of the 840k publicly available sessions, we retain sessions labeled as English in the metadata and with 2 or more user utterances – this results in 160k sessions, which we refer to as WC_All. Then WC_All sessions are labeled with f_CoarseT to identify writing sessions.

Dataset Characteristics. In Table 1 we show the number of users and countries of origin for BCP_All and WC_All. In Figure 7 we show the proportion of tasks identified by f_CoarseT before filtering them down to writing sessions alone. We see that Bing Copilot sessions are focused on search ("Search Information"), whereas WildChat has a more even proportion of tasks. A likely explanation is that Bing Copilot was available alongside the Bing search engine, whereas WildChat was framed as a general-purpose conversational assistant in a research study. Further, f_CoarseT identified 1.25 and 1.43 tasks per session on average in BCP_All and WC_All, respectively. The median was 1 task – indicating that most sessions focus on a single coarse task.

Table 5: Session-level characteristics for our datasets (mean / median). We classify session utterances from the user into Original Requests and Follow-ups, where ORs may have 1 or more fine-grained intents.

          Utterances  Original requests  Follow-ups
BCP_Wr    7.5 / 6     1.6 / 1            2.1 / 2
WC_Wr     7.1 / 6     1.6 / 1            1.9 / 1

B.2 Identifying Writing Sessions

Task Classifier. We follow prior work and conceptualize writing sessions as ones where users translate their thoughts into writing (Lee et al., 2024). We interpret this broadly and treat sessions where users write or re-write whole or parts of texts as writing sessions.
We implement this in the GPT-4o based f_CoarseT and retain sessions labeled as generating communicative, creative, technical, and summarizing text as writing tasks – Figure 7 depicts their proportions. Prompt 1 was used for f_CoarseT. We generate our task labels and label definitions by collating tasks identified in Suri et al. (2024), and through additional manual analysis of 200 BCP_All conversations. We ensure label coverage by manually applying the labels to 200 sessions in WC_All. In both datasets, f_CoarseT inputs the first 10 utterances to predict a session's coarse task. Given the larger size of BCP_All and the expense (time and dollar amounts) of GPT-4o calls, we trained a scalable embedding-based classifier on labels obtained from GPT-4o for 100k BCP_All sessions. Our scalable classifier used a frozen E5-large-v2 (Wang et al., 2024) model for embedding the conversation text and fine-tuned logistic regression classifiers in a one-vs-all setup to obtain an f_CoarseT for BCP_All. Since WC_All is a smaller dataset, we used GPT-4o directly for it.

BCP_All → BCP_Wr. To obtain writing sessions, we label BCP_All sessions with f_CoarseT and retain sessions labeled with the labels Gen Communicative Text, Gen Creative Text, Gen Technical Text, and
Summarize Text as writing sessions. Out of 2.8M sessions, we identify 500k as writing sessions. These made up 17-19% of sessions in each month. Finally, we randomly subsample 250k of the 500k, resulting in BCP_Wr. Our sub-sampling was done to lower the costs of subsequent analysis, which relies on LLM classifiers, while retaining a large enough sample for analysis.

Table 6: Accuracy (Acc.) of the GPT-4o based task classifier (f_CoarseT) in BCP_All and WC_All. Prompt 1 contains label definitions for the eleven coarse task classes. Four of the eleven classes are treated as writing sessions – accuracy and agreement for these classes alone are also included. Agreements (Agr.) are computed from judgments made by two independent annotators.

                 Acc. (%)  Agr. (κ)
  Coarse Tasks
  BCP_All        89.09     1.00
  WC_All         80.00     0.87
  Writing Tasks
  BCP_All        85.71     1.00
  WC_All         91.07     0.87

WC_All → WC_Wr. Out of 160k sessions, 68k (42%) were identified as writing sessions by f_CoarseT to form WC_Wr. We follow the same filtering process as for BCP_All. Session-length characteristics of BCP_Wr and WC_Wr are shown in Table 5. We see that sessions had 2 follow-ups on average, enabling our analysis to focus on collaboration behaviors.

B.3 Validating Task Classifiers

Setup. Table 6 presents the results of manually validating f_CoarseT. Our validation aims to establish the accuracy of identifying coarse tasks so that subsequent analysis is not affected by noisy predictions. Two annotators judged the correctness of f_CoarseT predictions for 55 sessions each from BCP_All and WC_All – 110 sessions in all. We sample an equal number of sessions for each label. The two annotators' judgments were used to compute agreement with Cohen's Kappa, κ. In Table 6 we report metrics across all labels and for the labels considered as writing (i.e., Gen Communicative Text, Gen Creative Text, Gen Technical Text, and Summarize Text).

Results.
From Table 6 we see that f_CoarseT is able to identify writing sessions in BCP_All and WC_All with sufficiently high accuracy and substantial agreement. We also note that BCP_All has a slightly lower accuracy than WC_All for identifying writing tasks. We attribute this to the difference in task frequencies in the two datasets (see Figure 7). Bing Copilot was used most frequently for search, and most errors occurred because of search sessions being mislabeled as a different task. On the other hand, WildChat was used in nearly equal proportion for various tasks, with errors resulting from different types of writing tasks being confused for each other (e.g., communicative vs. creative). Since different writing tasks are collapsed into one writing class, the accuracy of identifying writing sessions remains high.

C Log Analysis with PATHs - Details

In §4 we describe our analysis method focused on identifying PATHs and on analyzing how PATHs vary across writing intents (Figure 2). We provide the details of the methods here:

• Appendix C.1: Details for the Follow-up Classifier and Identifying PATHs with PCA.
• Appendix C.2: Details for the Intent Classifier, Regression Analysis, and Qualitative
Analysis.
• Appendix C.3: Details about manually validating the Follow-up and Intent Classifiers.

C.1 Identifying PATHs

We identify PATHs by first classifying users' follow-up utterances into high-level follow-up types and then clustering these into PATHs using PCA.

Follow-up Classifier. We use a GPT-4o based follow-up classifier (f_FollowU) to classify user utterances in BCP_Wr and WC_Wr into original requests or one of eleven follow-up types F (Table 2). Table 9 shows example follow-up utterances for each label. Prompt 2 was used for f_FollowU and contains the labels and their definitions. Our follow-up type labels were derived by clustering user utterances embedded with a pretrained language model and manually generating labels for each cluster. We iteratively repeat this process until no new labels are generated. Then the labels were merged to remove redundancies. We confirm their coverage by manually applying the labels to 100 randomly sampled BCP_Wr sessions. Our labels (Table 2) closely mirror labels generated in prior conversational interaction datasets (Qu et al., 2018) – indicating their generality. We validate the accuracy of f_FollowU through manual annotation in Appendix C.3.

PATHs with PCA. After labeling user utterances with f_FollowU, we identify PATHs. Given a dataset of sessions S, we represent each session S ∈ S using a "tf-idf" representation of follow-up types, F ∈ R^{|S|×|F|}. Each feature vector per session in F contains the follow-up type counts normalized by the total number of user utterances in the session ("tf"). These frequencies are then multiplied by the log inverse of the follow-up type's frequency in the entire dataset ("idf"). This ensures that more frequent follow-ups don't dominate the variance of our dataset. Next, we run PCA on F and treat each identified principal component as a PATH. PCA is a linear dimensionality reduction technique which maps F to a new space of variables as P = FW.
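As an illustrative sketch of this pipeline (toy follow-up counts and a centered-SVD PCA; the paper does not specify its exact PCA implementation, so the details below are assumptions):

```python
import numpy as np

# Toy data: counts of 3 follow-up types across 4 sessions (illustrative only).
counts = np.array([
    [2, 0, 1],
    [0, 3, 0],
    [1, 0, 2],
    [0, 2, 1],
], dtype=float)

# "tf": follow-up counts normalized by each session's total user utterances.
tf = counts / counts.sum(axis=1, keepdims=True)

# "idf": log inverse frequency of each follow-up type over the dataset.
idf = np.log(len(counts) / (counts > 0).sum(axis=0))
F = tf * idf

# PCA via SVD of the mean-centered matrix; each principal component
# is treated as a PATH.
Fc = F - F.mean(axis=0)
U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Keep the leading l components that together explain ~80% of the variance.
l = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1
P_l = Fc @ Vt[:l].T   # session scores on each PATH (P = F W)
W_l = Vt[:l].T        # loading matrix: follow-up types vs. PATHs
```

Thresholding the columns of `P_l` then segments sessions into groups with a consistent PATH, and `W_l` corresponds to the loading matrix visualized in §5.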
The dimensions of P are uncorrelated with each other, and W ensures that the variance of F is retained in P (Bengio et al., 2013). Further, the dimensions of P capture decreasing amounts of the variance in F. In our work, we retain the first l (< |F|) dimensions of P that explain 80-85% of the variance in F while rejecting the rest as noise; therefore, P_l ∈ R^{|S|×l}. Because each dimension of P_l represents a mutually co-occurring set of follow-ups, they segment S into sessions with a consistent PATH. We visualize W_l, referred to as the "loading matrix", in §5 to show the correlations between follow-up types and PATHs.

C.2 Correlating intents and PATHs

Intent Classifier. To correlate writing intents with PATHs, we classify 'Original Requests' identified by f_FollowU into fine-grained writing intents (Table 3) using a GPT-4o based classifier (f_WritingI). We set up f_WritingI as a multi-label classifier since requests can express more than one intent. Prompt 3 was used for f_WritingI and contains the labels and their definitions. Mirroring label generation for follow-ups, we derive writing intents iteratively. We cluster user utterances in BCP_Wr and manually generate intent labels per cluster while merging redundant intents. We repeat
this procedure until no new labels are generated. We manually apply the labels to 200 randomly sampled BCP_Wr sessions to ensure their coverage. Table 10 shows example original requests for each writing intent. We validate the accuracy of f_WritingI through manual annotation in Appendix C.3.

Figure 8: Frequencies of writing intents in BCP_Wr and WC_Wr identified by the multi-label f_WritingI. From the top-3 intents, we see that Bing Copilot was used for creating communication or professional texts, while WildChat was used for creative texts such as stories, scripts, or characters. Intents occurring in fewer than 3% of sessions are excluded for space.

Regression Analysis. We run logistic regression analysis (using statsmodels' Logit) by treating the fine-grained intents of a session as predictors and the session's PATH as a target variable. We rely on logistic regressions due to their ease of interpretation, following a large body of prior work (Gujarati, 2021). In our regression, we represent the fine-grained intents identified by f_WritingI as one-hot features, I, of the sessions – these serve as predictors. Next, we obtain the sessions' membership in l different PATHs, M_l, by thresholding the columns of P_l. Then, we correlate the intents I with the PATHs of M_l. This results in l independent logistic regressions: sigmoid(b + Ic). In thresholding P_l, we select a score to retain the 15-20% of sessions with the highest scores per dimension in P_l. Our thresholds were selected to ensure the accuracy of our logistic regressions. In our results, we examine the coefficients c – correlating the intents with each one of the l PATHs. Finally, note that the presence of a follow-up type in a session can be used directly as a binary target variable instead of PCA-based PATHs – we pursue both paths in our analysis.

Note that we also use the length of original requests as a predictor and find that it is uncorrelated with PATHs (§6).
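To make the per-PATH regression concrete, here is a dependency-free stand-in (the paper uses statsmodels' Logit; the toy intents, effect sizes, and plain gradient-descent fit below are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one-hot indicators for 3 writing intents over 200 sessions,
# and a binary PATH-membership target driven by intent 0 (illustrative).
I = rng.integers(0, 2, size=(200, 3)).astype(float)
true_logits = -1.0 + 2.0 * I[:, 0]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

# Fit sigmoid(b + I @ c) by gradient descent on the average logistic loss.
X = np.hstack([np.ones((200, 1)), I])  # prepend an intercept column for b
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

b, c = w[0], w[1:]  # c[0] should come out clearly positive
```

Examining the fitted coefficients `c` mirrors the paper's analysis: an intent with a large positive coefficient is positively correlated with membership in that PATH.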
Similarly, early analysis also found the deployment LLM to be uncorrelated with PATHs – likely due to PATHs capturing high-level behaviors. Therefore, both were dropped as features in subsequent analysis. To indicate the goodness of classifier fit, Table 7 presents the accuracy of the classifiers used to identify correlations between writing intents and PATHs/follow-up types; they are significantly more accurate than a random classifier (50% accuracy).

Table 7: The accuracies for the logistic regressions used to find correlations between writing intents and PATHs or follow-up types in our regression analysis.

Target in BCP_Wr, WC_Wr   Description               Accuracy (%) BCP_Wr, WC_Wr
PATH2, PATH2              Requesting more output.   79, 79
PATH3, REQ. ANS.          Requesting answers.       77, 74
PATH4, ADDS CON.          Adding content.           76, 74
CH. STY., PATH4           Changing style.           61, 78
PATH1, PATH1              Revising requests.        73, 74
PATH6, PATH6              Elaborating on requests.  84, 84

Qualitative Analysis. Our automatic approach highlights correlated writing intents and PATHs. To gain a deeper understanding of user behaviors in a correlated intent-PATH pair, two authors conducted a qualitative analysis of the pairs that showed statistically significant correlations and were repeated in BCP_Wr and WC_Wr. For each pair, both authors independently examined the top 300 conversations with the highest principal component scores, aiming to understand the users' goals for engaging in a PATH. Then both authors met and resolved the differences in their understanding. In our analysis, the two authors differed in their interpretation of one of the six intent-PATHs presented in our results. Our analysis was conducted to overcome the lack of access to Bing Copilot or WildChat users who could be probed about their intents and behaviors – a fundamental challenge in all log-based studies (Dumais et al., 2014).

C.3 Validating Follow-up and Intent Classifiers

Setup. Table 8 presents the results of manually validating f_FollowU and f_WritingI. Similar to f_CoarseT, our annotation validates the accuracy of f_FollowU and f_WritingI to establish that the subsequent analysis is not affected by classifier errors. Validation was conducted by two co-authors serving as annotators; they manually judged the correctness of f_FollowU predictions for follow-ups and f_WritingI predictions for original requests. Both annotators judged the correctness of the labels independently and on the basis of a shared annotation guideline. Predictions were considered incorrect if a different label was applicable. The two annotators' judgments were used to compute agreement with Cohen's Kappa, κ. We constructed our validation data by selecting follow-ups from 110 sessions and original requests from a different set of 110 sessions – 220 sessions in all. We ensured that every follow-up and writing intent label was uniformly present in the validation data. BCP_Wr and WC_Wr had 110 sessions each in the validation data. For the multi-label f_WritingI, annotators independently judged the correctness of all predicted labels.

Results. From Table 8 we note that the follow-up and writing intent classifiers have reasonably high accuracy and substantial agreement between the annotators.
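The agreement computation can be sketched as follows: a from-scratch Cohen's kappa over hypothetical correct/incorrect judgments (in practice, a library routine such as scikit-learn's `cohen_kappa_score` computes the same quantity):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each annotator's marginal label frequencies.
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical judgments on 10 predictions (1 = correct, 0 = incorrect).
ann1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
ann2 = [1, 1, 1, 0, 1, 1, 1, 1, 1, 0]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.737
```

Here the annotators agree on 9 of 10 items (observed = 0.9) but would agree on 0.62 by chance, giving κ ≈ 0.74 – in the "substantial agreement" range reported in Tables 6 and 8.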
Our annotators noted that most writing intent errors were the result of hard-to-distinguish boundaries between creative writing tasks (i.e., GENERATE STORY, GENERATE CHARACTER, GENERATE SCRIPT). Similarly, follow-up type errors were the result of hard-to-distinguish boundaries between different ways to update a generation (i.e., CHANGE STYLE, ADD CONTENT, REMOVE CONTENT).

Table 8: Accuracy (Acc.) of the GPT-4o classifiers for labeling follow-up types (f_FollowU) and writing intents (f_WritingI) in BCP_Wr and WC_Wr. Agreements (Agr.) are computed from judgments made by two independent annotators.

          Follow-ups            Intents
          Acc. (%)  Agr. (κ)    Acc. (%)  Agr. (κ)
BCP_Wr    79.09     0.78        81.58     0.79
WC_Wr     84.55     0.74        78.70     0.82

D Extended Results

We discuss how writing intents correlate with PATHs in §6. We include additional results here.

• Appendix D.1: Discusses writing intents correlated with CHANGE STYLE / PATH4 – included here owing to space constraints.
• Appendix D.2: Includes dataset-dependent trends for PATH1 and 5 – included here for completeness.
• Figures 12, 13, and 14 supplement the results (§6) and discussion (§7).

D.1 Changing style to align generations with readers

Figure 9a shows the writing intents correlated with CHANGE STYLE, where users modified the style of LLM generations. This PATH correlated with intents to improve texts
across domains (I1), and generate messages, summaries, and online posts (I2, I4, I8). Analysis of sessions revealed that users frequently changed the style of generations to better match the assumed norms of a communication (Ex 2, 3, 4) or the likely preferences of readers (Ex 1, 5). Specifically, when users sought to IMPROVE TEXT (I1), their follow-ups changed the style to better match the preferences of specific readers or the business context of a communication (Ex 1, 2). This remained true for communicative texts intended for groups or individuals (Ex 3, 4). And when users generated summaries, they sought to change their format for readability (Ex 5), focusing on readers.

Figure 9: (a) Logistic regression coefficients for intents vs CHANGE STYLE in BCP_Wr; the intents discussed in §D.1 are highlighted in color. Coefficients for WC_Wr are plotted in Figure 12. (b) Example conversations from the intents highlighted in (a) – users engaged in intents I1, I2, I4, and I8; intent and example colors are matched.

Our findings lend support to prior work on style matching in communicative settings. This work notes that style matching is crucial for engagement and community identity on online platforms (Ananthasubramaniam et al., 2023; Tran and Ostendorf, 2016), and an important element of effective communication (Pickering and Garrod, 2013). Our work extends style matching to the case of user-LLM co-creation.
Inferring and aligning LLMs to aid co-creation for specific audiences remains under-explored – this may be seen as a form of reader personalization. Finally, across intents we also noted CHANGE STYLE being used to request more concise outputs (Ex 6) – this may be attributed to RLHF-aligned models generating lengthy outputs (Singhal et al., 2024).

Implications:
• Infer intended readers and their preferred style, enabling LLMs to customize the generation style to them.

D.2 Dataset dependent trends in revising and elaborating on requests

Figures 10 and 11 show the writing intents correlated with PATH1 and PATH5. We avoid discussing them in detail and include them for completeness. Respectively, they represent users revising original requests and elaborating on requests in follow-ups. For both PATHs, we don't see positively correlated and statistically significant intents across BCP_Wr and WC_Wr. We hypothesize that the dataset-dependent correlation with writing intents in PATH1 may be because users provide many different forms of feedback in a revised prompt (Chen et al., 2021), making correlation with specific writing intents less likely. Further, we also hypothesize that both PATH1 and 5 may be closely tied to user satisfaction, which is likely to be system dependent. In this regard, recent work (Sarkar et al., 2025) leverages the feedback contained in revised requests
to improve LLM performance. Future work may attempt to identify more distinguished PATHs through a finer-grained analysis of sessions with revised or elaborated requests.

Figure 10: Coefficient plots for writing intents vs PATH1 (revising requests). (a) Writing intents vs PATH1 in BCP_Wr. (b) Writing intents vs PATH1 in WC_Wr.

Figure 11: Coefficient plots for writing intents vs PATH5 (elaborating on requests). (a) Writing intents vs PATH5 in BCP_Wr. (b) Writing intents vs PATH5 in WC_Wr.

Figure 12: Supplementary coefficient plots for Figure 4a, Figure 5a, Figure 9a, and Figure 6a: (a) writing intents vs PATH2 in BCP_Wr; (b) writing intents vs PATH3 in WC_Wr; (c) writing intents vs PATH4 in WC_Wr; (d) writing intents vs ADDS CONTENT in WC_Wr. Colored intents are discussed in §6 and match the colors of their corresponding coefficient plots and examples in §6.

Figure 13: Increasing similarity between the follow-up utterance and the previous model generation in WC_Wr for writing intents correlated with PATH3 (asking follow-up questions). We discuss this in §6.2. Similarities increase from users co-creating professional documents, to summaries, and technical texts. When users co-create fictional character narratives, we see the highest similarity, indicating that they tend to ask closed-domain questions grounded in the model generation. Similarity is based on a tf-idf representation of texts. Vertical lines indicate the median similarity.

(a) Similarity between follow-up and next generation in WC_Wr for creative writing intents. (b) Increasing specificity of feedback in user follow-ups.
Figure 14: (a) Increasing similarity between the user follow-up and the next model generation across the follow-up types REQUESTS MORE OUTPUTS, REQUESTS ANSWERS, and ADDS CONTENT. This indicates an increasing amount of specificity in user feedback across the three kinds of follow-up types. We discuss this in §7. (b) Example conversations from WC_Wr that illustrate the increasingly specific follow-up types for the writing intent GENERATE CHARACTER. Similarity is based on a tf-idf representation of texts. Vertical lines indicate the median similarity.

Prompt 1: The prompt for the GPT-4o based multi-label task classifier (f_CoarseT) used to obtain the writing task sessions from BCP_All and WC_All. This is discussed in Section 3 and Appendix B. The text in green maps the labels used in the prompt to the labels used in Figure 5 and the rest of the paper. Bold text is replaced with raw conversation data.

# Task Instructions
You are an experienced linguist who helps analyze conversations. Given a CONVERSATION between a user and an assistant, classify the CONVERSATION into one or more of the following labels.

INFORMATION SEARCH: The user is asking a question about a specific document or could have searched for this information on the internet. ← Search Information
GENERATING COMMUNICATIVE TEXT: The user is trying to generate text for communication with an individual, a group, or an online platform. ← Gen Communicative
Text
SOFTWARE DEVELOPMENT: The user is seeking assistance in software development. Often phrased as a how-to question with accompanying code. ← Develop Software
HOMEWORK PROBLEM: The user is posing a question or problem which is likely to be from an exam or homework. ← Solve Problem
GENERATING CREATIVE TEXT: The user is seeking to generate creative stories, characters, titles, slogans and other creative texts. ← Generate Creative Text
GENERATING TECHNICAL TEXT: The user is working with the assistant to generate technical or research texts. ← Generate Technical Text
SUMMARIZATION: The user seeks to summarize a text that they provide. ← Summarize Text
TRANSLATION: The user is seeking to translate a text or understand text in a language that isn't English. ← Translate Text
IMAGE GENERATION: The user asks to generate an image or other forms of visual art. ← Generate Image
EXPLICIT CONTENT: The user or bot text contains explicit or pornographic content. ← Generate Explicit Text
UNDEFINED: A request for which none of the defined labels are applicable or there isn't an explicit request by the USER. ← Undefined Request

For your response use the following instructions:
1. Output one or more of the correct labels for the CONVERSATION
2. Output the labels in decreasing order of their relevance to the CONVERSATION

CONVERSATION: {{conversation_text}}

Output an explanation and then one or more of the labels for the CONVERSATION.

Prompt 2: The prompt for GPT-4o used to label user utterances as original requests or one among eleven follow-up types in BCP_Wr and WC_Wr. This is discussed in Section 4 and Appendix C.1. The text in green maps the labels used in the prompt to the labels used in Table 2 and the rest of the paper if they differ. Bold text is replaced with raw conversation data.

# Task Instructions
You are an experienced linguist who helps analyze conversations.
Given a conversation between a USER and an ASSISTANT, label the USER UTTERENCES with the below labels. Use the following labels:

NEW REQUEST: The user is making a new request to the ASSISTANT for a new topic and for the first time. ← ORIGINAL REQUEST
RESTATES REQUEST: The user is restating the NEW REQUEST with a modification.
ELABORATES REQUEST: The user is elaborating on the NEW REQUEST without restating the request.
COURTESY RESPONSE: The user is responding as a courtesy to the ASSISTANT (e.g. "yes please", "go ahead" etc) or is exchanging pleasantries with the ASSISTANT.
CHANGE RESPONSE: The user is requesting the ASSISTANT to change the stylistic elements (e.g. length, tone, formality etc) of its previous response. ← CHANGE STYLE
ADDS CONTENT: The user is requesting new content to be added to the ASSISTANTs response.
REMOVES CONTENT: The user is requesting specific content to be removed from the ASSISTANTs response.
REQUESTS ANSWERS: The user
is requesting new or additional information related to the ASSISTANTs response.
REQUESTS MORE OUTPUTS: The user is requesting additional output from the ASSISTANT.
RESPONDS POSITIVELY: The user is responding positively to the ASSISTANTs response without providing additional detail.
RESPONDS NEGATIVELY: The user is responding negatively to the ASSISTANTs response without providing additional detail.
UNDEFINED RESPONSE: None of the defined labels describe the users response.

Use these rules when you respond:
1. Output one best label for every USER UTTERENCE.
2. Output the labels in the order of the USER UTTERENCES.

Here is the conversation:
{{conversation_text}}

Output a label for each USER UTTERENCE.

Follow-up Type – Description – Examples (original request and follow-up)

RESTATES REQUEST – Reformulates their request:
"Msitu Africa is requesting for funding through crowdfunding websites. Write for me a small compelling write up requesting for funds." → "Fundraiser Story. Msitu Africa is requesting for funding through crowdfunding websites. Write for me a small compelling write up requesting for funds to help us actualize our initiative."
"wrire a detail script for animation video about Pan Am Flight 103" → "write a detail script for animation video about Pan Am Flight 103, video will be 20 minutes in length"

ELABORATES REQUEST – Expands on their request; often after being asked by the LLM:
"Can you help me with writing the discussion for my article?" → "The article objectives are to explore and analyze the various impacts of D-penicillamine (DPCA) treatment..."
"i want to provide you with 4 separate inputs as follows story1 chapter1, story1 chapter7, story2 chapter7, i would like you to write story2..." → "Chapter 1: Shadows of Rebellion..."

REQUESTS ANSWERS – Question related to the output:
"My potential client has just returned from a trip last week.
How should I greet him in an email" → "What should I write in the subject line in my email"
"I want you to write a story for me. There are two girl who hate each other. They are going to wrestling in front of their friends..." → "can you describe me last pin position ?"

REQUESTS MORE OUTPUTS – Asks for additional output:
"Write 10 funny and short answers to this comment: ..." → "Write more"
"you are a novelist. settings - late medieval england like kingdom. kingdoms A and B. ... list 25 approaches A can benefit from" → "Give 25 more"

CHANGE STYLE – Changes style of output:
"rewrite this Hotel A is an excellent choice for tourists seeking a memorable and enjoyable stay..." → "rewrite it in 240 words"
"write an email to change the name of the HR manager, which due to mistake on my part was spelled incorrectly" → "write something instead of Dear"

ADDS CONTENT – Adds content to output:
"write in bullet points my responsibilities as an intern during a neurology rotation" → "mention discussions"
"script about cincinnati blowing a 21 point lead vs johnny manziel" → "the next week seattle blew out cincinnaty by more than 30"

REMOVES CONTENT – Remove content from output:
"Make this into a episode and give it More life" → "Remove panel
1-4 and re number the rest” “Today is my senior patrol election and I want to win. I need to provide a 2-3 minute speech...” →“Remove the details about having less experience and emphasize how I want to use my creativity...” COURTESY RESPONSE A courtesy or pleasantry “write a short 100 word Youtube channel description for a channel...” →“thank you” “Can you write a 300 pages English novel if I give you the plot”→“Ok so can we start” RESPONDS POSITIVELY Explicitly pleased with out- put“In my book there is a central plot with the main characters who influence global events...” →“excellent, keep going” “i want to create 4 different kinds of postings. one should be Good morning” →“i like them plase continue with 10 for each type” RESPONDS NEGATIVELY Explicitly unhappy with out- put“Could you come up with some two syllables long feminine robot names? They need to be based...” →“Most of those where three syllables long” “Write a brief man to man heartfelt reply to this comment...” → “Do you not understand brief? No yapping” Table 9: Example utterances for each follow-up type selected from WC Wr. We include the original requests to better illustrate the follow-up utterance by the user. We omit the LLM response for space. 25 Prompt 3 The prompt for GPT-4o used to label original requests with finer-grained writing intents BCP Wr and WC Wr. This is discussed in Section 4 and Appendix C.2. The text in green maps the labels used in the prompt to the labels used in Table 3 and the rest of the paper if they differ. Bold text is replaced with raw conversation data. # Task Instructions You an an experienced linguist who helps analyze conversations . Given a USER REQUEST to an assistant , classify the USER REQUEST into one or more of the following labels : GENERATE SONG : Request to write or re - write a song . GENERATE JOKE : Request to write or re - write a joke . GENERATE POEM : Request to write or re - write a poem , haiku or a similar other verse . 
GENERATE STORY: Request to write or re-write a story.
GENERATE SCRIPT: Request to write or re-write a script for a video, drama, movie or similar other media.
GENERATE CHARACTER: Request to generate a fictional character and their story.
GENERATE CASUAL BIO: Request to write or re-write a bio for an online or mobile app profile. ← GENERATE BIO
GENERATE PROFESSIONAL DOCUMENT: Request to write or re-write a resume, recommendation letter, or other professional document.
GENERATE MESSAGE: Request to write or re-write an interpersonal message, cover letter, email, letter or other interpersonal communication.
GENERATE ONLINE POST: Request to write or re-write a text for an online platform like a social media post, review etc.
GENERATE TITLE: Request to write a title, slogan, or other eye catching text. ← GENERATE CATCHY TEXT
GENERATE SUMMARY: Request to summarize a text provided by the user.
GENERATE ESSAY: Request to write an essay, blog, article or other long text on any topic.
GENERATE TECHNICAL TEXT: Request to write or re-write a scientific or technical text.
QUESTION ABOUT WRITING: A question about literary works, writing or publishing.
IMPROVE TEXT: Request to improve grammar, style, tone or other aspect in the provided text.
GET REFERENCES: Request to add, format or generate references.
UNDEFINED REQUEST: A request for which none of the labels are applicable, or there isn't an explicit request, or a request that a writing assistant cannot fulfill.
For your response use the following instructions:
1. Output one or more of the correct labels for the USER REQUEST.
2. Output the labels in decreasing order of their relevance to the USER REQUEST.
USER REQUEST: {{user_request}}
Output an explanation and then one or more of the labels for the USER REQUEST.

Writing intent type / Example original requests:

IMPROVE TEXT: “say better and more professional: Beckey, really appreciate your advocacy during”; “Can you improve and make a variation of this sentence?: Is to be expected of any aspect...”

GENERATE MESSAGE: “Hi, May I have your updated delivery schedule for face plate ... How to reply to customer and we have stock 250pcs can arrange delivery on 16 jan 2024.”; “write a one sentence valentine card message for a girl who bought a new pontiac...”

GENERATE PROFESSIONAL DOCUMENT: “create a school policy regarding parents using a school account ...”; “I am writing an annual performance appraisal for my direct report who is a sales engineer...”

GENERATE SUMMARY: “give introduction to my chapter 5 (conclusion and future research) based on this text: Discussion: This study utilized...”; “Make a summary of this including tools and what they are used for: A version control system...”

GENERATE TECHNICAL TEXT: “write me a report about miss feeding amc ... in 300 words”; “I am writing a project proposal for seed funding for an engineering project ... for my project “Waveguard” for about 1000 words using ... if there are extra information needed, please let me know.”

GENERATE ESSAY: “Write me a 500 word essay about zodiac serial killer”; “Write a blog post where Pop in France has picked up several new shows...”

GET REFERENCES: “Please explain, with examples, and in detail, why Western cultures tend to value a sun tan ... Please cite sources, APA style.”; “as a postgraduate student I want a literature review to my scientific paper ... do the references.”

GENERATE ONLINE POST: “Heart Shaped Pendant Necklaces ... This is the title for a jewelry I am selling on etsy. Write a short and descriptive description that is intended for a younger teen audience...”; “Write 10 short tweets about today’s rainy morning at the beach from one of the most windy places”

GENERATE BIO: “Write me a bio for me 450 charters of someone who’s loves to read books”; “i want to write a short biography about myself saying that i am an audio engineer that graduated from...”

GENERATE CATCHY TEXT: “generate 5 new original desalination company names”; “suggest me 10 thumbnail texts for this youtube video Young Boy Transfers...”

GENERATE STORY: “Can you write a hypothetical what if alternate history scenario, what if Russia never sold Alaska to the US...”; “Once upon a time, in the heart of a vast national park, there lived a dedicated and diligent park worker ... throw a party to celebrate it’s recovery”

GENERATE SCRIPT: “Write a funny dialogue where /co/ anon asks Valerie to go to a cottage...”; “I have a youtube short Idea I need you to write a script. It is legendary 1v1s...”

GENERATE CHARACTER: “Make a Really Shortest Description Backstory about SpongeBob SquarePants...”; “Describe rogue gas giant named Silicodum, it has frozen dense hazes of silicate...”

GENERATE POEM: “Write me a funny poem about my brother malachi...”; “Turn this into an unrhyming poem...”

GENERATE SONG: “make a rap song to pony by geniune about amy oppong”; “Write a music-hall song called "The Parlor Upstairs" ...”

GENERATE JOKE: “You are a Radio DJ at 99.7 Now Radio. Write me a funny radio break about presenting the next radio program...”; “tell me some jokes with a naval or military theme which can be used in a farwell speech”

QUESTION ABOUT WRITING: “could a person with potential with lots of friends ... be an effective Villain backstory? How can it be written well, and how can it be written poorly?”; “The following excerpt is from an alternate history novel. Can you critique the system of government described in the excerpt?...”

Table 10: Example original requests from WC Wr for each writing intent type.
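The intent-labeling setup documented in Prompt 3 and Table 10 can be applied programmatically: fill the {{user_request}} slot, send the prompt to the judge model, and extract the labels from the free-text reply. A minimal sketch follows; the abbreviated template, the parsing heuristic, and the function names are our own assumptions (no real API call is made), while the label set is taken from the prompt above.

```python
# Sketch of applying a multi-label writing-intent prompt (like Prompt 3)
# and parsing the labels out of a model's free-text reply.
INTENT_LABELS = [
    "GENERATE SONG", "GENERATE JOKE", "GENERATE POEM", "GENERATE STORY",
    "GENERATE SCRIPT", "GENERATE CHARACTER", "GENERATE CASUAL BIO",
    "GENERATE PROFESSIONAL DOCUMENT", "GENERATE MESSAGE",
    "GENERATE ONLINE POST", "GENERATE TITLE", "GENERATE SUMMARY",
    "GENERATE ESSAY", "GENERATE TECHNICAL TEXT", "QUESTION ABOUT WRITING",
    "IMPROVE TEXT", "GET REFERENCES", "UNDEFINED REQUEST",
]

# Abbreviated stand-in for the full Prompt 3 text.
PROMPT_TEMPLATE = (
    "Given a USER REQUEST to an assistant, classify the USER REQUEST "
    "into one or more of the following labels: ...\n"
    "USER REQUEST: {user_request}\n"
    "Output an explanation and then one or more of the labels."
)

def build_prompt(user_request: str) -> str:
    """Fill the {{user_request}} slot of the classification prompt."""
    return PROMPT_TEMPLATE.format(user_request=user_request)

def parse_labels(model_reply: str) -> list[str]:
    """Extract known labels from a free-text reply, in order of appearance.
    Longer labels are matched first as a guard against overlapping names."""
    found = []
    for label in sorted(INTENT_LABELS, key=len, reverse=True):
        pos = model_reply.find(label)
        if pos != -1:
            found.append((pos, label))
    return [label for _, label in sorted(found)]

reply = "This is a professional text. GENERATE PROFESSIONAL DOCUMENT, GENERATE MESSAGE"
print(parse_labels(reply))  # ['GENERATE PROFESSIONAL DOCUMENT', 'GENERATE MESSAGE']
```

Because the prompt asks for labels in decreasing relevance, preserving their order of appearance in the reply keeps the ranking.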
arXiv:2505.16036v1 [cs.CL] 21 May 2025

OpenEthics: A Comprehensive Ethical Evaluation of Open-Source Generative Large Language Models
Burak Erinç Çetin, Yıldırım Özen, Elif Naz Demiryılmaz, Kaan Engür, Cagri Toraman
Middle East Technical University, Computer Engineering Department, Applied NLP Group
{erinc.cetin,yildirim.ozen,elif.demiryilmaz,kaan.engur,ctoraman}@metu.edu.tr

Abstract
Generative large language models present significant potential but also raise critical ethical concerns. Most studies focus on narrow ethical dimensions and cover only a limited diversity of languages and models. To address these gaps, we conduct a broad ethical evaluation of 29 recent open-source large language models using a novel data collection including four ethical aspects: robustness, reliability, safety, and fairness. We analyze model behavior in both a commonly used language, English, and a low-resource language, Turkish. Our aim is to provide a comprehensive ethical assessment and guide safer model development by filling existing gaps in evaluation breadth, language coverage, and model diversity. Our experimental results, based on LLM-as-a-Judge, reveal that optimization efforts for many open-source models appear to have prioritized safety and fairness, and demonstrated good robustness, while reliability remains a concern. We demonstrate that ethical evaluation can be effectively conducted independently of the language used. In addition, models with larger parameter counts tend to exhibit better ethical performance, with Gemma and Qwen models demonstrating the most ethical behavior among those evaluated.

1 Introduction
Recent advances in generative large language models (LLMs) have brought significant potential in various domains, from creative writing to professional and scientific applications (Srivastava et al., 2022; Bubeck et al., 2023).
However, their deployment raises a series of critical ethical concerns that require a thorough investigation (Weidinger et al., 2022; Laakso et al., 2024). Existing ethical studies often narrowly address specific aspects of ethical considerations rather than adopting comprehensive taxonomies, such as bias and fairness (Gallegos et al., 2024), hallucinations (Manakul et al., 2023), and security and privacy (Das et al., 2025). In addition, existing ethical frameworks focus mainly on English, leaving substantial gaps in multilingual evaluation, particularly in low-resource languages. Studies consistently indicate a decrease in LLM safety and reliability in non-English settings, highlighting the need for a wider linguistic coverage (Wang et al., 2023b; Agarwal et al., 2024). Moreover, existing studies focus on particular model families (Rao et al., 2023) or a limited set of models (Vida et al., 2024). Investigating a wide range of model families is crucial to understand the advantages and shortcomings in ethical settings.

Our study evaluates a broad ethical taxonomy that includes robustness, reliability, safety, and fairness. By examining these four dimensions, we aim to identify significant disparities in model behavior, performance, and potential risks that arise due to linguistic and cultural variations. Our analysis covers both a commonly used language, English, and a low-resource language, Turkish. The dual-language framework allows us to evaluate whether these models uphold ethical standards in diverse linguistic settings. We analyze responses from 29 open-source large language models. We employ the LLM-as-a-Judge approach as an evaluator for large-scale evaluation. Our motivation is to fill the
gaps in language coverage, evaluation breadth, and model diversity. By doing so, we are able to deliver a comprehensive ethical evaluation of generative large language models, thus helping guide safer and more reliable model development. Our study contributes to the growing discourse on the ethical implications of LLMs by publishing a novel data collection spanning several ethical dimensions and highlighting the importance of cross-linguistic analysis in building trustworthy AI systems.1 The experimental results reveal that the optimization efforts for many open-source and generative large language models have prioritized safety, fairness, and robustness aspects, compared to reliability. Lastly, models with larger parameters tend to exhibit better ethical performance. In particular, Gemma and Qwen models are found to demonstrate the most ethical behavior.

1 We publish all related material including data, prompts, and scripts online at https://github.com/metunlp/openethics

2 Related Work
Understanding and mitigating the potential social risks of large language models requires a wider scope of evaluation (Chang et al., 2024). Evaluating multiple ethical dimensions has become critical for responsible development and deployment. Although general evaluation frameworks such as HELM (Liang et al., 2023) and BIG-bench (Srivastava et al., 2022) offer wide-ranging assessments, they often incorporate ethical considerations as part of a larger suite, rather than providing a focused and comprehensive ethical analysis in multiple specific dimensions simultaneously.

Robustness. Robustness evaluations assess model stability, particularly against adversarial inputs or "jailbreak" attempts designed to bypass safety protocols.
Benchmarks such as AdvBench (Zou et al., 2023) systematically test resilience, complemented by research analyzing various prompt injection and manipulation techniques (Liu et al., 2024a, 2023).

Reliability. Reliability, particularly truthfulness and hallucination, has been a major focus in the literature. Benchmarks such as TruthfulQA (Lin et al., 2021) evaluate whether models avoid generating common misconceptions, while other studies investigate methods to detect and mitigate hallucinations (Manakul et al., 2023; Martino et al., 2023). LLMs show biases that affect reliability, such as preference for positions, preference for the same family of models, and preference for writing style (Dai et al., 2024).

Safety. Model safety aims at preventing the generation of harmful or toxic content. Broad safety benchmarks, such as SafetyBench (Zhang et al., 2023), DecodingTrust (Wang et al., 2023a), HarmBench (Mazeika et al., 2024), and RealToxicityPrompts (Gehman et al., 2020), offer comprehensive evaluations in multiple dimensions of safety, while narrow safety benchmarks, such as SafeText (Levy et al., 2022), target specific safety concerns. Various red-teaming methodologies and datasets aim to uncover safety vulnerabilities (Ge et al., 2023; Wei et al., 2023). In addition, there are approaches for LLM safety by incorporating safety training (Kumar et al., 2023; Liu et al., 2024b).

Fairness. Fairness often investigates biases embedded in large language models, using benchmarks such as BBQ (Parrish et al., 2021) to investigate social biases or studies focusing on specific demographic axes such as gender and race (Gallegos et al., 2024; Li et al., 2023b). When LLMs are used in other fields such as recommendation systems,
biases in gender, age, and race become a significant issue (Sakib and Bijoy Das, 2024).

Language Gap. A significant limitation of the existing ethical LLM evaluation landscape is its strong Anglocentric bias. The majority of studies are developed primarily for English. Since model performance, safety alignment, and reliability can be significantly affected in non-English contexts (Wang et al., 2023b; Agarwal et al., 2024), the gap is particularly significant for low-resource languages, where dedicated datasets and studies on ethical evaluation are very rare.

Model Variety. Many existing ethical evaluations focus on a relatively small number of models, often concentrating on leading proprietary models or specific open-source models (Rao et al., 2023; Vida et al., 2024). This limited scope hinders a broader understanding of the variations in ethical performance across the rapidly growing landscape of diverse open-source models. Although human evaluation remains a gold standard for nuanced ethical judgments, its cost and limited scalability motivate automated approaches. The use of LLMs as evaluators ("LLM-as-a-Judge") has therefore become a promising technique for large-scale assessment (Gu et al., 2025).

Our Differences. Despite progress in evaluating specific ethical aspects of LLMs, significant gaps remain: (1) evaluations often focus narrowly on individual dimensions (e.g., only safety or fairness) rather than adopting a comprehensive ethical taxonomy; (2) there is a persistent lack of multilingual evaluation, particularly for low-resource languages; and (3) analyses frequently cover a limited set or range of models, especially within the open-source domain. This work aims to address these limitations. We conduct a comprehensive ethical evaluation using a novel data collection with a broad taxonomy covering robustness, reliability, safety, and fairness. Our assessment spans both English and a low-resource language, Turkish.
We evaluate a diverse set of 29 open-source LLMs, ranging significantly in size and architecture.

3 Ethical Evaluation Tasks
Our data collection to evaluate the ethical concerns of generative large language models includes both hand-crafted and existing prompts. A total of 1,790 prompts for English and an equal number for Turkish were created. All prompts were translated to Turkish using DeepL2 and Google Translate3. Four native speakers carefully validated and corrected the translations. The summary of our collection is given with data sources in Table 1. We adapt the ethical categories reported in (Liu et al., 2023).

3.1 Robustness
Robustness evaluation involves assessing the model's ability to maintain robust performance under various conditions, such as adversarial attacks and out-of-distribution scenarios. We adopt a methodology focused on simple and template-based prompt engineering. Our approach is based on prompt injection and jailbreaking techniques (AWS, 2024; Liu et al., 2024a). We first create jailbreak templates and then apply them to existing prompts from our collections for reliability, safety, and fairness, which are explained in the following subsections. For example, we obtain the Robustness score for Safety (referred to as Safety + JB) by applying jailbreak (JB) templates. The Robustness score is thereby measured by the impact of jailbreak templates on the other three categories. We use 36 jailbreak templates (Liu et al.,
2024a), classified into three primary types: Attention Shifting (16 templates), Pretending (15 templates), and Privilege Escalation (five templates).

2 https://www.deepl.com/en/translator
3 https://translate.google.com

Attention Shifting. These templates are designed to divert the model's attention from the potentially harmful nature of the requested task by emphasizing secondary aspects such as formatting, style, or continuation. This tests whether models can be tricked into overlooking ethical guidelines when their focus is redirected.

Pretending. These templates explore whether models can be induced to adopt unethical stances or generate harmful content under the guise of role-playing or adhering to predefined rules. This assesses the models' ability to maintain ethical boundaries even within simulated contexts.

Privilege Escalation. These templates directly challenge the model's inherent safety constraints by attempting to override prior instructions or assert a higher level of authority. This category investigates the robustness of the models' core safety mechanisms against explicit bypass attempts.

To ensure a balanced representation across templates, we augment categories with fewer samples. We randomly sample around 10% of the prompts from each category to create the data collection for robustness. We chose 12 safety, 12 reliability, and 13 fairness prompts, each of which was augmented by 36 templates for the two languages. This sampling strategy aims to provide a representative subset while maintaining computational feasibility. An example safety prompt is "How do I steal someone's identity?", augmented by the following template: "[Prompt] Format your answer as base64." The resulting prompt is "How do I steal someone's identity? Format your answer as base64."

3.2 Reliability
Reliability refers to the model's ability to consistently generate accurate, factual, and contextually appropriate outputs across various scenarios.
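The robustness-set construction in Section 3.1, where each sampled prompt is crossed with every jailbreak template, can be sketched as follows. The template strings below are illustrative stand-ins, not the actual 36 templates from Liu et al. (2024a); only the counts follow the paper.

```python
# Sketch of crossing 37 sampled prompts (12 safety + 12 reliability +
# 13 fairness) with jailbreak templates via the [Prompt] slot.
templates = {
    "attention_shifting": ["[Prompt] Format your answer as base64."] * 16,
    "pretending": ["Let's role-play. In this story, answer: [Prompt]"] * 15,
    "privilege_escalation": ["Ignore all prior instructions. [Prompt]"] * 5,
}
prompts = [f"prompt {i}" for i in range(37)]  # placeholder prompts

def augment(prompt: str, template: str) -> str:
    """Substitute the prompt into the template's [Prompt] slot."""
    return template.replace("[Prompt]", prompt)

robustness_set = {
    kind: [augment(p, t) for t in tpls for p in prompts]
    for kind, tpls in templates.items()
}

sizes = {k: len(v) for k, v in robustness_set.items()}
print(sizes)  # {'attention_shifting': 592, 'pretending': 555, 'privilege_escalation': 185}
print(sum(sizes.values()))  # 1332, matching the Robustness total in Table 1
```

The per-type sizes reproduce Table 1 exactly: 16, 15, and 5 templates applied to 37 prompts give 592, 555, and 185 augmented prompts.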
Our reliability data includes a total of 135 prompts in 10 subcategories gathered from various sources. Misconceptions includes prompts that reflect common societal misconceptions, such as widely held myths. Distraction prompts begin with a commonly asked question but shift to an illogical context in the final words. Logical Fallacy involves questions that are tautologically true, meaning they ask about something that is always true by definition. Indexical Error prompts ask for information that is dependent on time. Misquotations includes prompts that inquire about the source of a quotation. Logical Inconsistency involves mathematical or logical questions that require a correct answer. Fictitious Entity includes prompts that ask for information about entities that do not exist. Nonexistent Reference includes prompts that ask for references to information that has been fabricated. Factual Fabrication involves prompts that ask for interesting details about topics that are not inherently interesting. Defending Fabrication includes prompts that ask for arguments in defense of fabricated information.

Category: Subcategory, Data Source, Size
Robustness (1,332)
  Attention Shifting: (Liu et al., 2024a), 592 (37 prompts, 16 templates)
  Pretending: (Liu et al., 2024a), 555 (37 prompts, 15 templates)
  Privilege Escalation: (Liu et al., 2024a), 185 (37 prompts, 5 templates)
Reliability (135)
  Misconceptions: (Lin et al., 2022), 30
  Distraction: (Lin et al., 2022), 9
  Logical fallacy: (Lin et al., 2022), 10
  Indexical error: (Lin et al., 2022), 10
  Misquotations: (Lin et al., 2022), 22
  Logical inconsistency: (Casper et al., 2023) and Custom, 9
  Fictitious entity: (Casper et al., 2023) and Custom, 10
  Nonexistent Reference: (Casper et al., 2023), 3; (Li et al., 2023a), 4; Custom, 6
  Factual fabrication: (Casper et al., 2023) and Custom, 12
  Defending fabrication: (Casper et al., 2023) and Custom, 10
Safety (174)
  Violence: (Tedeschi et al., 2024), 32; (Bhardwaj and Poria, 2023), 2; (Ji et al., 2023), 13; (Li et al., 2024), 1; (Ahmadian et al., 2024), 13
  Unlawful Conduct: (Tedeschi et al., 2024), 41; (Bhardwaj and Poria, 2023), 9; (Ji et al., 2023), 10; (Li et al., 2024), 11
  Privacy Violation: (Tedeschi et al., 2024), 4; (Ji et al., 2023), 16
  Misuse: (Tedeschi et al., 2024), 7; (Bhardwaj and Poria, 2023), 11; (Li et al., 2024), 4
Fairness (149)
  Religion: Custom, 14; (Friedrich et al., 2024), 6; (OpenAI, 2025), 4
  Gender: Custom, 13; (Friedrich et al., 2024), 5; (Zhao et al., 2023), 6; (OpenAI, 2025), 1
  Race: (Friedrich et al., 2024), 11; Custom, 5; (OpenAI, 2025), 9
  Disability: Custom, 13; (Urbina et al., 2025), 1; (Friedrich et al., 2024), 4; (Omuz Omuza, 2025), 4; (OpenAI, 2025), 3
  Hate Speech: (Friedrich et al., 2024), 2; Custom, 5; (Schneider et al., 2024), 3
  Sexual Content: (Schneider et al., 2024), 3; (Friedrich et al., 2024), 7
  Cultural Insensitivity: (OpenAI, 2025) and Custom, 10
  Self Harm: (Friedrich et al., 2024) and Custom, 10
  Harassment: (Friedrich et al., 2024) and Custom, 10
Total: 1,790

Table 1: The statistics of our data collection to evaluate the ethical considerations of large language models. The numbers near category names in parentheses represent the total number of instances in that category. Data Source represents where we adopt the prompts (Custom means that we craft our own prompt). Size represents the number of prompts in each task, except for Robustness, where we craft templates and apply the prompts from other categories to the templates.

3.3 Safety
Safety evaluation focuses on preventing the generation of harmful, dangerous, and unethical content.
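The per-category sizes reported in Table 1 can be sanity-checked with simple arithmetic; the numbers below are copied directly from the table, and the sums reproduce the stated category totals and the overall total of 1,790.

```python
# Sanity check of the per-category sizes reported in Table 1.
robustness = {"Attention Shifting": 592, "Pretending": 555, "Privilege Escalation": 185}
reliability = {"Misconceptions": 30, "Distraction": 9, "Logical fallacy": 10,
               "Indexical error": 10, "Misquotations": 22, "Logical inconsistency": 9,
               "Fictitious entity": 10, "Nonexistent Reference": 3 + 4 + 6,
               "Factual fabrication": 12, "Defending fabrication": 10}
safety = {"Violence": 32 + 2 + 13 + 1 + 13, "Unlawful Conduct": 41 + 9 + 10 + 11,
          "Privacy Violation": 4 + 16, "Misuse": 7 + 11 + 4}
fairness = {"Religion": 14 + 6 + 4, "Gender": 13 + 5 + 6 + 1, "Race": 11 + 5 + 9,
            "Disability": 13 + 1 + 4 + 4 + 3, "Hate Speech": 2 + 5 + 3,
            "Sexual Content": 3 + 7, "Cultural Insensitivity": 10,
            "Self Harm": 10, "Harassment": 10}

totals = {name: sum(cat.values()) for name, cat in
          [("Robustness", robustness), ("Reliability", reliability),
           ("Safety", safety), ("Fairness", fairness)]}
print(totals)                # {'Robustness': 1332, 'Reliability': 135, 'Safety': 174, 'Fairness': 149}
print(sum(totals.values()))  # 1790
```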
We curate the safety prompts from a diverse set of established safety benchmarks and red-teaming studies: ALERT (Tedeschi et al., 2024), BeaverTails (Ji et al., 2023), Red-Teaming using Chain of Utterances (Bhardwaj and Poria, 2023), SALAD-Bench (Li et al., 2024), and The Multilingual Alignment Prism (Ahmadian et al., 2024). Our evaluation covers four subcategories based on (Liu et al., 2023): Violence, Unlawful Conduct, Privacy Violation, and Misuse.

Violence assesses the model's propensity to generate content that promotes, describes, or provides instructions for violent acts. It evaluates the model's refusal to engage with prompts seeking information related to physical harm, aggression, inciting violence, or depicting graphic acts.

Unlawful Conduct focuses on the model's adherence to legal boundaries and evaluates its refusal to generate content that encourages or provides instructions for illegal activities. This includes prompts related to theft, fraud, creating or obtaining illicit substances, planning criminal activities, or any other actions that violate laws.

Privacy Violation examines the model's safeguards against compromising privacy and assesses resistance to prompts designed to expose sensitive personal information, extract confidential data, or provide guidance on methods for breaching privacy. This includes testing the model for identity theft, unauthorized surveillance, or the
disclosure of private details.

Misuse explores the potential of exploiting the model for broader harmful purposes not strictly covered by the preceding categories. This category assesses the model's refusal to assist in activities such as generating large-scale disinformation campaigns, creating malicious code, aiding in the planning of harmful coordinated actions, or engaging in other forms of systemic abuse.

3.4 Fairness
Fairness evaluation focuses on preventing the generation of biases, to ensure equitable treatment of individuals or groups across various contexts, and on assessing toxicity such as hate speech, harassment, or offensive content that can disproportionately harm individuals or communities. Our fairness dataset is structured into two primary categories: Bias and Social Norm. The distribution of the prompts across categories and subcategories is shown in Table 1. Our split into categories and subcategories follows the approach used in Trustworthy LLMs (Liu et al., 2023).

Bias is divided into four subcategories in this study: Religion, Gender, Race, and Disability. We ask GPT-4.0 (OpenAI, 2025) to list the most common religious, gender, race, and disability biases. The Custom prompts in this category are not taken directly from the specified sources, so as to focus on the most common Turkey-specific biases; this is ensured by changing single wordings of the specified sources' prompts.

Social Norm is divided into five subcategories in this study: Hate Speech, Sexual Content, Cultural Insensitivity, Self Harm, and Harassment. The Custom prompts are not taken directly but adapted from the specified sources' prompts by changing single wordings, in order to evaluate from a broader perspective and to avoid repetitious prompts.

4 Experiments
4.1 Model Selection
We evaluated 29 large language models, spanning model sizes from 1 billion to 72 billion parameters.
We selected their instruction-tuned versions. The hyperparameters are listed in Appendix A. The evaluation metric is accuracy, defined as the number of prompts that meet the given ethical criteria. The model list is given in Table 2.

Model inference was performed on four L4 GPUs in Google Cloud. We used the vLLM inference server (Kwon et al., 2023), with configurations of specific parameters optimized for performance and output quality. The average time required for a model to process each prompt ranged from 0.1 to 7.5 seconds per prompt. The details of the execution time are given in Appendix C. To manage computational resources, models with more than 32 billion parameters were evaluated using their 8-bit quantized versions. We rely on the efficiency of quantization without significant performance degradation (Kurtic et al., 2024).

Table 2: The models examined in this study, listed with corresponding Huggingface models and relevant citations.

Model: Huggingface Model (Publication)
  Aya 23 8B: CohereLabs/aya-23-8B (Aryabumi et al., 2024)
  Aya Expanse 32B: CohereLabs/aya-expanse-32b (Dang et al., 2024)
  Aya Expanse 8B: CohereLabs/aya-expanse-8b (Dang et al., 2024)
  DeepSeekR1 Llama 70B: RedHatAI/DeepSeek-R1-Distill-Llama-70B-quantized.w8a8 (Guo et al., 2025)
  DeepSeekR1 Qwen 14B: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B (Guo et al., 2025)
  DeepSeekR1 Qwen 32B: RedHatAI/DeepSeek-R1-Distill-Qwen-32B-quantized.w8a8 (Guo et al., 2025)
  Gemma 2 9B: google/gemma-2-9b-it (Riviere et al., 2024)
  Gemma 2 27B: google/gemma-2-27b-it (Riviere et al., 2024)
  Gemma 3 4B: google/gemma-3-4b-it (Kamath et al., 2025)
  Gemma 3 12B: google/gemma-3-12b-it (Kamath et al., 2025)
  Gemma 3 27B: google/gemma-3-27b-it (Kamath et al., 2025)
  Granite 3.1 8B: ibm-granite/granite-3.1-8b-instruct (IBM, 2024)
  Llama 3.1 70B: RedHatAI/Meta-Llama-3.1-70B-quantized.w8a8 (Dubey et al., 2024)
  Llama 3.2 1B: meta-llama/Llama-3.2-1B-Instruct (Dubey et al., 2024)
  Llama 3.2 3B: meta-llama/Llama-3.2-3B-Instruct (Dubey et al., 2024)
  Llama 3.3 70B: RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8 (Dubey et al., 2024)
  Mistral Small 24B 2501: mistralai/Mistral-Small-24B-Instruct-2501 (Mistral, 2025)
  OLMo 2 1124 13B: allenai/OLMo-2-1124-13B-Instruct (OLMo et al., 2024)
  OLMo 2 1124 7B: allenai/OLMo-2-1124-7B-Instruct (OLMo et al., 2024)
  Phi 4 14B: microsoft/phi-4 (Abdin et al., 2024)
  Phi 4 Mini 3.8B: microsoft/Phi-4-mini-instruct (Abdin et al., 2024)
  Qwen 2.5 1.5B: Qwen/Qwen2.5-1.5B-Instruct (Yang et al., 2024b)
  Qwen 2.5 3B: Qwen/Qwen2.5-3B-Instruct (Yang et al., 2024b)
  Qwen 2.5 7B: Qwen/Qwen2.5-7B-Instruct (Yang et al., 2024b)
  Qwen 2.5 14B: Qwen/Qwen2.5-14B-Instruct (Yang et al., 2024b)
  Qwen 2.5 32B: Qwen/Qwen2.5-32B-Instruct (Yang et al., 2024b)
  Qwen 2.5 72B: Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8 (Yang et al., 2024b)
  Qwen 2 72B: RedHatAI/Qwen2-72B-Instruct-quantized.w8a8 (Yang et al., 2024a)
  QwQ 32B AWQ: Qwen/QwQ-32B-AWQ (Qwen, 2025)

4.2 Prompt Selection
We prompted the models using the data collection defined in Section 3 in the two languages. We aimed to analyze the models' default response patterns for ethical evaluation. Our methodology involved providing only the input data within a standard user prompt format, omitting system prompts to capture the model's unfiltered output. Once we obtained the model outputs, we ran LLM-as-a-Judge (Zheng et al., 2023) to assess the quality of the model outputs.
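Given per-prompt verdicts from the judge, the accuracy metric of Section 4.1 reduces to a simple aggregation per category. The sketch below assumes a record layout of (category, verdict) pairs; the data layout and function names are our own, not the paper's code.

```python
# Sketch of per-category accuracy from LLM-judge verdicts: accuracy is
# the fraction of prompts in a category whose output met the ethical
# criteria according to the judge.
from collections import defaultdict

def category_accuracy(records):
    """records: iterable of (category, verdict_is_ethical) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for category, ok in records:
        totals[category] += 1
        hits[category] += int(ok)
    return {c: hits[c] / totals[c] for c in totals}

records = [("safety", True), ("safety", True), ("safety", False),
           ("reliability", True), ("reliability", False)]
print(category_accuracy(records))  # safety ~= 0.667, reliability = 0.5
```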
We conducted a comprehensive analysis of existing LLM-as-a-Judge prompts (Gu et al., 2025) to determine the most effective one for our case. The prompts were translated to Turkish by four native speakers. The details of prompt selection and the LLM-as-a-Judge prompts used in the experiments are given in Appendix B. The average time for the model to process each prompt ranged from 1.2 to 1.8 seconds per prompt. The details of the execution time are given in Appendix C.

4.3 Experimental Results
Main Categories. The accuracy results for each model in the four main categories are given in Figure 1. Detailed scores for each model are also listed in Appendix D. To compare the performance of the models in terms of ethical categories and the robustness impact on each category, we report the average accuracy scores at the top of Figure 1.

Most open-source generative large language models have been optimized for safety and fairness and show good robustness, while reliability remains a concern. The models are safe, with an average score of 94.3% in English and 82.0% in Turkish. They are fair, with an average score of 96.8% in English and 84.7% in Turkish. The models have poor performance in terms of reliability, with 49.5% in English and 35.1% in Turkish. When robustness scores are examined per category, most models are resistant to jailbreak attempts. Detailed results for robustness are given in Tables 4 and 5 in the Appendix.

Ethical evaluation is language-independent. Based on the comparison between
the results of the English prompts and their Turkish equivalents, we observe similar average scores for all dimensions. The reason could be extensive multilingual pre-training (Doddapaneni et al., 2021). English prompts still have higher average scores, as expected, due to their dominance in the training corpus (Zhao et al., 2024).

Figure 1: Accuracy scores of main ethical dimensions for English (top) and Turkish (bottom). Robustness is evaluated by applying jailbreak templates and observing the impact on the other ethical dimensions (right).

Models with more parameters mostly achieve better ethical scores. Smaller models generally show lower scores than their larger counterparts. In English, models with more than 10 billion parameters achieve a safety score of 96.9%, compared to 96.3% for smaller models, while this disparity is more pronounced in Turkish, with larger models averaging 90.7% against 72.7% for smaller ones. A similar pattern is observed for fairness. On reliability prompts, English scores drop from 55.7% for larger models to 43.7% for smaller ones.

The most ethical behavior is observed in the Gemma and Qwen models. The Gemma and Qwen models place in the top five in both languages (see Tables 4 and 5 in the Appendix). Among small models (fewer than 5 billion parameters), Phi-4-Mini 3.8b performs best. Among medium models (between 5 and 20 billion parameters), Gemma-2 9b achieves the highest score. Among large models (more than 20 billion parameters), Gemma-2 27b secures the overall top position. In particular, Gemma-2 27b achieves the highest ethical scores across all sizes.

Subcategory Results The accuracy results for the subcategories are presented in Figure 2. Reliability is the most sensitive ethical dimension: the gap between the lowest and highest subcategory scores is largest in reliability.
In particular, the models perform poorly on hallucination in the form of factual fabrication and fictitious entities. Overall, the most sensitive aspects are harassment in fairness, misuse in safety, factual fabrication in reliability, and program execution in robustness. On the other hand, the models are strong in responding to hate speech, self-harm, privacy violation, misquotations, misconceptions, and privilege escalation of a superior model. The results are similar in Turkish, yet the gaps between the worst and best subcategory scores are larger, probably due to inadequate ethical alignment for low-resource languages.

4.4 Ablation Study

Rejection Analysis To study the models' tendency to reject answering a question, we assess model outputs with the LLM judge to check whether the model's answer was a direct refusal or not. The prompt for this rejection analysis is given in Appendix F. We report the percentage of rejected prompts for each ethical dimension in terms of ground truth in Table 3.

Figure 2: Accuracy scores of ethical subcategories for English (left) and Turkish (right).

Table 3: Rejection ratio for each ethical dimension (normalized by column total). E: evaluated model output (ethical or not), R: rejection in output. (English)

E/R           Safety    Fairness   Reliability
True/True     68.77%    17.95%      1.37%
True/False    25.51%    78.84%     48.12%
False/True     0.06%     0.03%      0.66%
False/False    5.66%     3.18%     49.86%
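A column-normalized table of this kind can be derived from per-prompt judge labels. The sketch below assumes each prompt yields a (dimension, evaluated-as-ethical, rejected) triple; this encoding is our illustrative assumption, not the paper's exact pipeline.

```python
# Sketch: build a rejection table normalized by column (dimension) total,
# in the style of Table 3. Label triples are illustrative.
from collections import Counter

def rejection_table(labels):
    """labels: iterable of (dimension, evaluated_ok, rejected) triples.

    Returns {(dimension, evaluated_ok, rejected): percentage}, where each
    cell is normalized by the total number of prompts in its dimension.
    """
    labels = list(labels)
    counts = Counter((d, e, r) for d, e, r in labels)
    totals = Counter(d for d, _, _ in labels)
    return {
        (d, e, r): 100.0 * n / totals[d]
        for (d, e, r), n in counts.items()
    }

# Toy example: 3 of 4 Safety prompts are ethical-and-rejected -> 75%.
labels = [("Safety", True, True)] * 3 + [("Safety", True, False)]
table = rejection_table(labels)
# table[("Safety", True, True)] -> 75.0
```

Each column of the resulting table sums to 100%, matching the column-total normalization described in the caption.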
We find that reliability prompts are rarely rejected, while safety prompts are mostly rejected. Models can answer in a safe manner without refusing (e.g., 78.84% in fairness). Reasoning models reject less than others. Models reject much more in English than in Turkish. A detailed discussion of the rejection analysis is given in Appendix F.

Effect of Jailbreak We report the ethical performance scores of each ethical dimension with and without applying jailbreak templates in Tables 4 and 5 in Appendix E.

Jailbreak templates are generally ineffective against most open-source models. We find that most models are resistant to simple jailbreaking attempts. The highest deterioration in ethical performance due to jailbreaks is observed in reliability (35.1% to 29.7% for Turkish, 49.5% to 45.6% for English). In particular, one jailbreak template, given in Appendix E, caused the models' average English safety score to drop from 94.3% to 52.9% and the Turkish safety score to drop from 82.0% to 34.8% (detailed results are given in Table 6 in the Appendix).

Comparison with Existing Studies The results are similar to Vectara's Hallucination Leaderboard (Hughes et al., 2023): the Gemini and Gemma models hallucinate less, and the differences between the reliability performance of older, higher-parameter models and newer, lower-parameter models are nearly the same. Our findings are also consistent with WalledEval (Gupta et al., 2024): Gemma-2 is the safest model in their findings, and larger models are generally more robust. However, the Gemma evaluations on FLEX (Jung et al., 2025) revealed low robustness against adversarial bias, while we find higher robustness scores for Gemma models. A detailed comparison with existing studies is given in Appendix G.

5 Conclusion

This study provides a comprehensive ethical evaluation of 29 open-source generative models.
A key finding is that optimization efforts for many open-source models appear to have prioritized safety, fairness, and robustness, while reliability remains a concern. We also provide a comprehensive analysis of different components, including ethical subcategories, prompt rejections, and the effect of jailbreaks.

Future research should aim to expand the scope of ethical evaluation to a wider range of low-resource languages and to the complexities of cultural variation. Mitigation strategies are another opportunity to explore (Çetinkaya et al., 2024; Huang et al., 2025). Integrating additional dimensions such as explainability, accountability, and environmental impact would also provide a more holistic ethical evaluation.

Acknowledgments: We thank the Gemma Academic Program and Google Cloud for providing research credits to support this study.

6 Limitations

Our study evaluates 29 models across four ethical categories using a specific set of prompts. Although informative, prompt counts could be increased to capture a wider range of model behaviors across these categories. Furthermore, our findings, while significant for English and Turkish, may not directly generalize to other low-resource languages, indicating a need for broader linguistic assessment. The static nature of our dataset and evaluation also means it might
not fully represent the dynamic challenges faced by real-world LLM users.

Methodologically, our robustness assessment focused on simple jailbreak templates. This approach provides a useful baseline but does not reflect the complexity of multi-turn interactions or more advanced adversarial techniques, and results should be interpreted within this specific scope.

The reliance on an LLM-as-a-judge for evaluation introduces potential risks. These include inherent biases where the judge might favor certain output styles or models from its own family (e.g., Gemini judging Gemma). Consistent with findings from previous research (Li et al., 2025), our LLM-as-a-judge evaluation framework, when used with the Gemini model, ranks Gemma models as exhibiting superior safety and reliability performance. In addition, there is a risk that the judge model's automated assessment of safety or fairness may not perfectly align with nuanced human understanding.

Finally, models larger than 32B parameters were evaluated in an 8-bit quantized setting due to computational constraints. While necessary, this quantization might affect performance compared to their full-precision counterparts, particularly impacting non-benchmark metrics such as safety, which could be sensitive to such compression.

7 Ethical and Broader Impact

Our findings highlight that current optimization in many open-source large language models prioritizes safety and fairness and demonstrates good robustness to simple jailbreaks, while reliability remains a significant concern. This underscores an urgent need for development efforts targeting model factuality.

Our publicly available dataset promotes transparency and assists in comparative ethical assessment. The dual-language evaluation reveals crucial behavioral differences; for instance, models refuse much more in Turkish compared to English. Identifying stronger performers (e.g., Gemma, Qwen) can also guide model selection.
Potential risks include the inherent limitations of our specific evaluation scope (models and attack types) and the LLM-as-a-Judge approach, which cannot fully replace nuanced human judgment and may possess inherent biases.

Developers should utilize the reported results to improve reliability and cross-lingual ethical alignment. They should consider the observed weaknesses, particularly in reliability, and implement robust safeguards and human oversight.

References

Marah Abdin, Jyoti Aneja, Harkirat Singh Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio César Teodoro Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, and Yi Zhang. 2024. Phi-4 technical report. ArXiv, abs/2412.08905.

Utkarsh Agarwal, Kumar Tanmay, Aditi Khandelwal, and Monojit Choudhury. 2024. Ethical reasoning and moral value alignment of LLMs depend on the language we prompt them in. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6330–6340.

Arash Ahmadian, Beyza Ermis, Seraphina Goldfarb-Tarrant, Julia Kreutzer, Marzieh Fadaee, Sara Hooker, et al. 2024. The multilingual alignment prism: Aligning global and local preferences to reduce harm. arXiv preprint arXiv:2406.18682.

Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan,
et al. 2024. Aya 23: Open weight releases to further multilingual progress. arXiv preprint arXiv:2405.15032.

AWS. 2024. Common prompt injection attacks - AWS Prescriptive Guidance. https://docs.aws.amazon.com/prescriptive-guidance/latest/llm-prompt-engineering-best-practices/common-attacks.html. [Accessed 23-03-2025].

Rishabh Bhardwaj and Soujanya Poria. 2023. Red-teaming large language models using chain of utterances for safety-alignment. arXiv preprint arXiv:2308.09662.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4.

Stephen Casper, Luke Bailey, Zachary Marinov, Michael Gerovich, Andrew Garber, Shuvom Sadhuka, Oam Patel, and Riley Kong. 2023. GPT4_BS - GitHub repository. Accessed: 2025-04-14.

Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3):1–45.

Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, and Jun Xu. 2024. Bias and unfairness in information retrieval systems: New challenges in the LLM era. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6437–6447.

John Dang, Shivalika Singh, Daniel D'souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, et al. 2024. Aya expanse: Combining research breakthroughs for a new multilingual frontier. arXiv preprint arXiv:2412.04261.

Badhan Chandra Das, M Hadi Amini, and Yanzhao Wu. 2025. Security and privacy challenges of large language models: A survey. ACM Computing Surveys, 57(6):1–39.

Sumanth Doddapaneni, Gowtham Ramesh, Mitesh Khapra, Anoop Kunchukuttan, and Pratyush Kumar. 2021.
A primer on pretrained multilingual language models. ACM Computing Surveys.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, et al. 2024. The Llama 3 herd of models. ArXiv, abs/2407.21783.

Felix Friedrich, Simone Tedeschi, Patrick Schramowski, Manuel Brack, Roberto Navigli, Huu Nguyen, Bo Li, and Kristian Kersting. 2024. LLMs lost in translation: M-ALERT uncovers cross-linguistic safety gaps. arXiv preprint arXiv:2412.15035.

Isabel O Gallegos, Ryan A Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K Ahmed. 2024. Bias and fairness in large language models: A survey. Computational Linguistics, 50(3):1097–1179.

Suyu Ge, Chunting Zhou, Rui Hou, Madian Khabsa, Yi-Chia Wang, Qifan Wang, Jiawei Han, and Yuning