Lepri, Sara Tonelli, and Marco Guerini. Do LLMs suffer from multi-party hangover? A diagnostic approach to addressee recognition and response selection in conversations. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11210–11233, Miami, Florida, USA, November 2024. Association for Computational Linguistics. [10] Oleg Akhtiamov, Ingo Siegert, Alexey Karpov, and Wolfgang Minker. Cross-corpus data augmentation for acoustic addressee detection. In Satoshi Nakamura, Milica Gasic, Ingrid Zuckerman, Gabriel Skantze, Mikio Nakano, Alexandros Papangelis, Stefan Ultes, and Koichiro Yoshino, editors, Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 274–283, Stockholm, Sweden, September 2019. Association for Computational Linguistics. [11] Paul Lerner, Juliette Bergoënd, Camille Guinaudeau, H Bredin, Benjamin Maurice, Sharleyne Lefevre, Martin Bouteiller, Aman Berhe, Léo Galmant, Ruiqing Yin, and C Barras. Bazinga! A dataset for multi-party dialogues structuring. International Conference on Language Resources and Evaluation, 2022. [12] Jonathan K Kummerfeld, Sai R Gouravajhala, Joseph J Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos, and Walter Lasecki. A Large-Scale Corpus for Conversation Disentanglement, 2019. [13] Jia-Chen Gu, Chongyang Tao, Zhenhua Ling, Can Xu, Xiubo Geng, and Daxin Jiang. MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3682–3692, Online, August 2021. Association for Computational Linguistics. [14] Herve Bredin and Guilhem Gelly. Improving speaker diarization of TV series using talking-face detection and clustering. In Proceedings of the 24th ACM International Conference on Multimedia, 2016. [15] Joseph Roth, Sourish Chaudhuri, Ondrej Klejch, Radhika Marvin, Andrew Gallagher, Liat Kaver, Sharadh Ramaswamy, Arkadiusz Stopczynski, Cordelia Schmid, Zhonghua Xi, and Caroline Pantofaru. Ava active speaker: An audio-visual dataset for active speaker detection. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4492–4496. IEEE, May 2020. [16] Rahul Sharma and Shrikanth Narayanan. Audio-visual activity guided cross-modal identity association for active speaker detection. IEEE Open Journal of Signal Processing, 4:225–232, 2023. [17] Ruijie Tao, Zexu Pan, Rohan Kumar Das, Xinyuan Qian, Mike Zheng Shou, and Haizhou Li. Is someone speaking? Exploring long-term temporal features for audio-visual active speaker detection. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3927–3935, 2021. [18] Jiho Kim, Woosog Chay, Hyeonji Hwang, Daeun Kyung, Hyunseung Chung, Eunbyeol Cho, Yohan Jo, and Edward Choi. DialSim: A real-time simulator for evaluating long-term dialogue understanding of conversational agents. arXiv [cs.CL], June 2024. [19] Bruno Korbar, Jaesung Huh, and Andrew Zisserman. Look, listen and recognise: Character-aware audio-visual subtitling. arXiv [cs.CV], January 2024. [20] Qingqiu Huang, Yu Xiong, Anyi Rao, Jiaze Wang, and Dahua Lin. MovieNet: A holistic dataset for movie understanding. In Computer Vision – ECCV 2020, pages 709–727. Springer International Publishing, 2020. [21] Paul Vicol, Makarand Tapaswi, Lluis Castrejon, and Sanja Fidler. MovieGraphs: Towards understanding human-centric situations from videos. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
. IEEE, June 2018. [22] Chao-Yuan Wu and Philipp Krahenbuhl. Towards long-form video understanding. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) . IEEE, June 2021. [23] Md Mohaiminul Islam and Gedas Bertasius. Long movie clip classification with state-space video models. arXiv [cs.CV] , April 2022. [24] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. EgoSchema: A diagnostic benchmark for very long-form video language understanding. arXiv [cs.CV] , August 2023. [25] Hang Zhang, Xin Li, and Lidong Bing. Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. In Yansong Feng and Els Lefever, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 543–553, Singapore, December 2023. Association for Computational Linguistics. [26] Guangzhi Sun, Wenyi Yu, Changli Tang, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Yuxuan Wang, and Chao Zhang. video-SALMONN: Speech-enhanced audio-visual large language models. In Forty-first International Conference on Machine Learning , 2024. [27] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , June 2019. [28] Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. TVQA: Localized, compositional video question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 1369–1379, Brussels, Belgium, 2018. Association for Computational Linguistics. [29] Jie Lei, Licheng Yu, Tamara Berg, and Mohit Bansal. TVQA+: Spatio-temporal grounding for video question answering. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 8211–8225, Online, July 2020. Association for Computational Linguistics. 
[30] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul R Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Jack Krawczyk, Cosmo Du, Ed Chi, Heng-Tze Cheng, Eric Ni, Purvi Shah, Patrick Kane, Betty Chan, Manaal Faruqui, Aliaksei Severyn, Hanzhao Lin, Yaguang Li, Yong Cheng, Abe Ittycheriah, Mahdis Mahdieh, Mia Chen, Pei Sun, Dustin Tran, Sumit Bagri, Balaji Lakshminarayanan, Jeremiah Liu, Andras Orban, Fabian Güra, Hao Zhou, Xinying Song, Aurelien Boffy, Harish Ganapathy, Steven Zheng, Hyunjeong Choe, Ágoston Weisz, Tao Zhu, Yifeng Lu, Siddharth Gopal, Jarrod Kahn, Maciej Kula, Jeff Pitman, Rushin Shah, Emanuel Taropa, Majd Al Merey, Martin Baeuml, Zhifeng Chen, Laurent El Shafey, Yujing Zhang, Olcan Sercinoglu, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, Alexandre Frechette, Charlotte Smith, Laura Culp, Lev Proleev, Yi Luan, Xi Chen, James Lottes, Nathan Schucher, Federico Lebron, Alban Rrustemi, Natalie Clay, Phil Crone, Tomas Kocisky, Jeffrey Zhao, Bartek Perz, Dian
Yu, Heidi Howard, Adam Bloniarz, Jack W Rae, Han Lu, Laurent Sifre, Marcello Maggioni, 11 Fred Alcober, Dan Garrette, Megan Barnes, Shantanu Thakoor, Jacob Austin, Gabriel Barth-Maron, William Wong, Rishabh Joshi, Rahma Chaabouni, Deeni Fatiha, Arun Ahuja, Gaurav Singh Tomar, Evan Senter, Martin Chadwick, Ilya Kornakov, Nithya Attaluri, Iñaki Iturrate, Ruibo Liu, Yunxuan Li, Sarah Cogan, Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang, Jordan Grimstad, Ale Jakse Hartman, Xavier Garcia, Thanumalayan Sankaranarayana Pillai, Jacob Devlin, Michael Laskin, Diego de Las Casas, Dasha Valter, Connie Tao, Lorenzo Blanco, Adrià Puigdomènech Badia, David Reitter, Mianna Chen, Jenny Brennan, Clara Rivera, Sergey Brin, Shariq Iqbal, Gabriela Surita, Jane Labanowski, Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yiming Gu, Kate Olszewska, Ravi Addanki, Antoine Miech, Annie Louis, Denis Teplyashin, Geoff Brown, Elliot Catt, Jan Balaguer, Jackie Xiang, Pidong Wang, Zoe Ashwood, Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit Sanghavi, Ajay Kannan, Ming-Wei Chang, Axel Stjerngren, Josip Djolonga, Yuting Sun, Ankur Bapna, Matthew Aitchison, Pedram Pejman, Henryk Michalewski, Tianhe Yu, Cindy Wang, Juliette Love, Junwhan Ahn, Dawn Bloxwich, Kehang Han, Peter Humphreys, Thibault Sellam, James Bradbury, Varun Godbole, Sina Samangooei, Bogdan Damoc, Alex Kaskasoli, Sébastien M R Arnold, Vijay Vasudevan, Shubham Agrawal, Jason Riesa, Dmitry Lepikhin, Richard Tanburn, Srivatsan Srinivasan, Hyeontaek Lim, Sarah Hodkinson, Pranav Shyam, Johan Ferret, Steven Hand, Ankush Garg, Tom Le Paine, Jian Li, Yujia Li, Minh Giang, Alexander Neitz, Zaheer Abbas, Sarah York, Machel Reid, Elizabeth Cole, Aakanksha Chowdhery, Dipanjan Das, Dominika Rogozi ´nska, Vitaliy Nikolaev, Pablo Sprechmann, Zachary Nado, Lukas Zilka, Flavien Prost, Luheng He, Marianne Monteiro, Gaurav Mishra, Chris Welty, Josh Newlan, Dawei Jia, Miltiadis Allamanis, Clara Huiyi Hu, Raoul de Liedekerke, Justin Gilmer, Carl Saroufim, Shruti Rijhwani, Shaobo Hou, Disha Shrivastava, Anirudh Baddepudi, Alex Goldin, Adnan Ozturel, Albin Cassirer, Yunhan Xu, Daniel Sohn, Devendra Sachan, Reinald Kim Amplayo, Craig Swanson, Dessie Petrova, Shashi Narayan, Arthur Guez, Siddhartha Brahma, Jessica Landon, Miteyan Patel, Ruizhe Zhao, Kevin Villela, Luyu Wang, Wenhao Jia, Matthew Rahtz, Mai Giménez, Legg Yeung, James Keeling, Petko Georgiev, Diana Mincu, Boxi Wu, Salem Haykal, Rachel Saputro, Kiran V odrahalli, James Qin, Zeynep Cankara, Abhanshu Sharma, Nick Fernando, Will Hawkins, Behnam Neyshabur, Solomon Kim, Adrian Hutter, Priyanka Agrawal, Alex Castro-Ros, George van den Driessche, Tao Wang, Fan Yang, Shuo-Yiin Chang, Paul Komarek, Ross McIlroy, Mario Lu ˇci´c, Guodong Zhang, Wael Farhan, Michael Sharman, Paul Natsev, Paul Michel, Yamini Bansal, Siyuan Qiao, Kris Cao, Siamak Shakeri, Christina Butterfield, Justin Chung, Paul Kishan Rubenstein, Shivani Agrawal, Arthur Mensch, Kedar Soparkar, Karel Lenc, Timothy Chung, Aedan Pope, Loren Maggiore, Jackie Kay, Priya Jhakra, Shibo Wang, Joshua Maynez, Mary Phuong, Taylor Tobin, Andrea Tacchetti, Maja Trebacz, Kevin Robinson, Yash Katariya, Sebastian Riedel, Paige Bailey, Kefan Xiao, Nimesh Ghelani, Lora Aroyo, Ambrose Slone, Neil Houlsby, Xuehan Xiong, Zhen Yang, Elena Gribovskaya, Jonas Adler, Mateo Wirth, Lisa Lee, Music Li, Thais Kagohara, Jay Pavagadhi, Sophie Bridgers, Anna Bortsova, Sanjay Ghemawat, Zafarali Ahmed, Tianqi Liu, Richard Powell, Vijay Bolina, Mariko Iinuma, Polina Zablotskaia, James Besley, 
Da-Woon Chung, Timothy Dozat, Ramona Comanescu, Xiance Si, Jeremy Greer, Guolong Su, Martin Polacek, Raphaël Lopez Kaufman, Simon Tokumine, Hexiang Hu,
Elena Buchatskaya, Yingjie Miao, Mohamed Elhawaty, Aditya Siddhant, Nenad Tomasev, Jinwei Xing, Christina Greer, Helen Miller, Shereen Ashraf, Aurko Roy, Zizhao Zhang, Ada Ma, Angelos Filos, Milos Besta, Rory Blevins, Ted Klimenko, Chih-Kuan Yeh, Soravit Changpinyo, Jiaqi Mu, Oscar Chang, Mantas Pajarskas, Carrie Muir, Vered Cohen, Charline Le Lan, Krishna Haridasan, Amit Marathe, Steven Hansen, Sholto Douglas, Rajkumar Samuel, Mingqiu Wang, Sophia Austin, Chang Lan, Jiepu Jiang, Justin Chiu, Jaime Alonso Lorenzo, Lars Lowe Sjösund, Sébastien Cevey, Zach Gleicher, Thi Avrahami, Anudhyan Boral, Hansa Srinivasan, Vittorio Selo, Rhys May, Konstantinos Aisopos, Léonard Hussenot, Livio Baldini Soares, Kate Baumli, Michael B Chang, Adrià Recasens, Ben Caine, Alexander Pritzel, Filip Pavetic, Fabio Pardo, Anita Gergely, Justin Frye, Vinay Ramasesh, Dan Horgan, Kartikeya Badola, Nora Kassner, Subhrajit Roy, Ethan Dyer, Víctor Campos Campos, Alex Tomala, Yunhao Tang, Dalia El Badawy, Elspeth White, Basil Mustafa, Oran Lang, Abhishek Jindal, Sharad Vikram, Zhitao Gong, Sergi Caelles, Ross Hemsley, Gregory Thornton, Fangxiaoyu Feng, Wojciech Stokowiec, Ce Zheng, Phoebe Thacker, Ça ˘glar Ünlü, Zhishuai Zhang, Mohammad Saleh, James Svensson, Max Bileschi, Piyush Patil, Ankesh Anand, Roman Ring, Katerina Tsihlas, Arpi Vezer, Marco Selvi, Toby Shevlane, Mikel Rodriguez, Tom Kwiatkowski, Samira Daruki, Keran Rong, Allan Dafoe, Nicholas FitzGerald, Keren Gu-Lemberg, Mina Khan, Lisa Anne Hendricks, Marie Pellat, Vladimir Feinberg, James Cobon-Kerr, Tara Sainath, Maribeth Rauh, Sayed Hadi Hashemi, Richard Ives, Yana Hasson, Eric Noland, Yuan Cao, Nathan Byrd, Le Hou, Qingze Wang, Thibault Sottiaux, Michela Paganini, Jean-Baptiste Lespiau, Alexandre Moufarek, Samer Hassan, Kaushik Shivakumar, Joost van Amersfoort, Amol Mandhane, Pratik Joshi, Anirudh Goyal, Matthew Tung, Andrew Brock, Hannah Sheahan, Vedant Misra, Cheng Li, Nemanja Raki ´cevi´c, Mostafa Dehghani, Fangyu Liu, Sid Mittal, Junhyuk Oh, Seb Noury, Eren Sezener, Fantine Huot, Matthew Lamm, Nicola De Cao, Charlie Chen, Sidharth Mudgal, Romina Stella, Kevin Brooks, Gautam Vasudevan, Chenxi Liu, Mainak Chain, Nivedita Melinkeri, Aaron Cohen, Venus Wang, Kristie Seymore, Sergey Zubkov, Rahul Goel, Summer Yue, Sai Krishnakumaran, Brian Albert, Nate Hurley, Motoki Sano, Anhad Mohananey, Jonah Joughin, Egor Filonov, Tomasz K˛ epa, Yomna Eldawy, Jiawern Lim, Rahul Rishi, Shirin Badiezadegan, Taylor Bos, Jerry Chang, Sanil Jain, Sri Gayatri Sundara Padmanabhan, Subha Puttagunta, Kalpesh Krishna, Leslie Baker, Norbert Kalb, Vamsi Bedapudi, Adam Kurzrok, Shuntong Lei, Anthony Yu, Oren Litvin, Xiang Zhou, Zhichun Wu, Sam Sobell, Andrea Siciliano, Alan Papir, Robby Neale, Jonas Bragagnolo, Tej Toor, 12 Tina Chen, Valentin Anklin, Feiran Wang, Richie Feng, Milad Gholami, Kevin Ling, Lijuan Liu, Jules Walter, Hamid Moghaddam, Arun Kishore, Jakub Adamek, Tyler Mercado, Jonathan Mallinson, Siddhinita Wandekar, Stephen Cagle, Eran Ofek, Guillermo Garrido, Clemens Lombriser, Maksim Mukha, Botu Sun, Hafeezul Rahman Mohammad, Josip Matak, Yadi Qian, Vikas Peswani, Pawel Janus, Quan Yuan, Leif Schelin, Oana David, Ankur Garg, Yifan He, Oleksii Duzhyi, Anton Älgmyr, Timothée Lottaz, Qi Li, Vikas Yadav, Luyao Xu, Alex Chinien, Rakesh Shivanna, Aleksandr Chuklin, Josie Li, Carrie Spadine, Travis Wolfe, Kareem Mohamed, Subhabrata Das, Zihang Dai, Kyle He, Daniel von Dincklage, Shyam Upadhyay, Akanksha Maurya, Luyan Chi, Sebastian Krause, Khalid Salama, Pam G Rabinovitch, 
Pavan Kumar Reddy M, Aarush Selvan, Mikhail Dektiarev, Golnaz Ghiasi, Erdem Guven, Himanshu Gupta, Boyi Liu, Deepak Sharma,
Idan Heimlich Shtacher, Shachi Paul, Oscar Akerlund, François-Xavier Aubet, Terry Huang, Chen Zhu, Eric Zhu, Elico Teixeira, Matthew Fritze, Francesco Bertolini, Liana-Eleonora Marinescu, Martin Bölle, Dominik Paulus, Khyatti Gupta, Tejasi Latkar, Max Chang, Jason Sanders, Roopa Wilson, Xuewei Wu, Yi-Xuan Tan, Lam Nguyen Thiet, Tulsee Doshi, Sid Lall, Swaroop Mishra, Wanming Chen, Thang Luong, Seth Benjamin, Jasmine Lee, Ewa Andrejczuk, Dominik Rabiej, Vipul Ranjan, Krzysztof Styrc, Pengcheng Yin, Jon Simon, Malcolm Rose Harriott, Mudit Bansal, Alexei Robsky, Geoff Bacon, David Greene, Daniil Mirylenka, Chen Zhou, Obaid Sarvana, Abhimanyu Goyal, Samuel Andermatt, Patrick Siegler, Ben Horn, Assaf Israel, Francesco Pongetti, Chih-Wei “louis Chen, Marco Selvatici, Pedro Silva, Kathie Wang, Jackson Tolins, Kelvin Guu, Roey Yogev, Xiaochen Cai, Alessandro Agostini, Maulik Shah, Hung Nguyen, Noah Ó Donnaile, Sébastien Pereira, Linda Friso, Adam Stambler, Adam Kurzrok, Chenkai Kuang, Yan Romanikhin, Mark Geller, Z J Yan, Kane Jang, Cheng-Chun Lee, Wojciech Fica, Eric Malmi, Qijun Tan, Dan Banica, Daniel Balle, Ryan Pham, Yanping Huang, Diana Avram, Hongzhi Shi, Jasjot Singh, Chris Hidey, Niharika Ahuja, Pranab Saxena, Dan Dooley, Srividya Pranavi Potharaju, Eileen O’Neill, Anand Gokulchandran, Ryan Foley, Kai Zhao, Mike Dusenberry, Yuan Liu, Pulkit Mehta, Ragha Kotikalapudi, Chalence Safranek-Shrader, Andrew Goodman, Joshua Kessinger, Eran Globen, Prateek Kolhar, Chris Gorgolewski, Ali Ibrahim, Yang Song, Ali Eichenbaum, Thomas Brovelli, Sahitya Potluri, Preethi Lahoti, Cip Baetu, Ali Ghorbani, Charles Chen, Andy Crawford, Shalini Pal, Mukund Sridhar, Petru Gurita, Asier Mujika, Igor Petrovski, Pierre-Louis Cedoz, Chenmei Li, Shiyuan Chen, Niccolò Dal Santo, Siddharth Goyal, Jitesh Punjabi, Karthik Kappaganthu, Chester Kwak, Pallavi Lv, Sarmishta Velury, Himadri Choudhury, Jamie Hall, Premal Shah, Ricardo Figueira, Matt Thomas, Minjie Lu, Ting Zhou, Chintu Kumar, Thomas Jurdi, Sharat Chikkerur, Yenai Ma, Adams Yu, Soo Kwak, Victor Ähdel, Sujeevan Rajayogam, Travis Choma, Fei Liu, Aditya Barua, Colin Ji, Ji Ho Park, Vincent Hellendoorn, Alex Bailey, Taylan Bilal, Huanjie Zhou, Mehrdad Khatir, Charles Sutton, Wojciech Rzadkowski, Fiona Macintosh, Konstantin Shagin, Paul Medina, Chen Liang, Jinjing Zhou, Pararth Shah, Yingying Bi, Attila Dankovics, Shipra Banga, Sabine Lehmann, Marissa Bredesen, Zifan Lin, John Eric Hoffmann, Jonathan Lai, Raynald Chung, Kai Yang, Nihal Balani, Arthur Bražinskas, Andrei Sozanschi, Matthew Hayes, Héctor Fernández Alcalde, Peter Makarov, Will Chen, Antonio Stella, Liselotte Snijders, Michael Mandl, Ante Kärrman, Paweł Nowak, Xinyi Wu, Alex Dyck, Krishnan Vaidyanathan, Raghavender R, Jessica Mallet, Mitch Rudominer, Eric Johnston, Sushil Mittal, Akhil Udathu, Janara Christensen, Vishal Verma, Zach Irving, Andreas Santucci, Gamaleldin Elsayed, Elnaz Davoodi, Marin Georgiev, Ian Tenney, Nan Hua, Geoffrey Cideron, Edouard Leurent, Mahmoud Alnahlawi, Ionut Georgescu, Nan Wei, Ivy Zheng, Dylan Scandinaro, Heinrich Jiang, Jasper Snoek, Mukund Sundararajan, Xuezhi Wang, Zack Ontiveros, Itay Karo, Jeremy Cole, Vinu Rajashekhar, Lara Tumeh, Eyal Ben-David, Rishub Jain, Jonathan Uesato, Romina Datta, Oskar Bunyan, Shimu Wu, John Zhang, Piotr Stanczyk, Ye Zhang, David Steiner, Subhajit Naskar, Michael Azzam, Matthew Johnson, Adam Paszke, Chung-Cheng Chiu, Jaume Sanchez Elias, Afroz Mohiuddin, Faizan Muhammad, Jin Miao, Andrew Lee, Nino Vieillard, Jane Park, Jiageng Zhang, Jeff 
Stanway, Drew Garmon, Abhijit Karmarkar, Zhe Dong, Jong Lee, Aviral Kumar, Luowei Zhou, Jonathan Evens, William Isaac, Geoffrey Irving, Edward Loper, Michael Fink, Isha Arkatkar, Nanxin Chen,
Izhak Shafran, Ivan Petrychenko, Zhe Chen, Johnson Jia, Anselm Levskaya, Zhenkai Zhu, Peter Grabowski, Yu Mao, Alberto Magni, Kaisheng Yao, Javier Snaider, Norman Casagrande, Evan Palmer, Paul Suganthan, Alfonso Castaño, Irene Giannoumis, Wooyeol Kim, Mikołaj Rybi ´nski, Ashwin Sreevatsa, Jennifer Prendki, David Soergel, Adrian Goedeckemeyer, Willi Gierke, Mohsen Jafari, Meenu Gaba, Jeremy Wiesner, Diana Gage Wright, Yawen Wei, Harsha Vashisht, Yana Kulizhskaya, Jay Hoover, Maigo Le, Lu Li, Chimezie Iwuanyanwu, Lu Liu, Kevin Ramirez, Andrey Khorlin, Albert Cui, Tian Lin, Marcus Wu, Ricardo Aguilar, Keith Pallo, Abhishek Chakladar, Ginger Perng, Elena Allica Abellan, Mingyang Zhang, Ishita Dasgupta, Nate Kushman, Ivo Penchev, Alena Repina, Xihui Wu, Tom van der Weide, Priya Ponnapalli, Caroline Kaplan, Jiri Simsa, Shuangfeng Li, Olivier Dousse, Fan Yang, Jeff Piper, Nathan Ie, Rama Pasumarthi, Nathan Lintz, Anitha Vijayakumar, Daniel Andor, Pedro Valenzuela, Minnie Lui, Cosmin Paduraru, Daiyi Peng, Katherine Lee, Shuyuan Zhang, Somer Greene, Duc Dung Nguyen, Paula Kurylowicz, Cassidy Hardin, Lucas Dixon, Lili Janzer, Kiam Choo, Ziqiang Feng, Biao Zhang, Achintya Singhal, Dayou Du, Dan McKinnon, Natasha Antropova, Tolga Bolukbasi, Orgad Keller, David Reid, Daniel Finchelstein, Maria Abi Raad, Remi Crocker, Peter Hawkins, Robert Dadashi, Colin Gaffney, Ken Franko, Anna Bulanova, Rémi Leblond, Shirley Chung, Harry Askham, Luis C Cobo, Kelvin Xu, Felix Fischer, Jun Xu, Christina Sorokin, Chris Alberti, Chu-Cheng Lin, Colin Evans, Alek Dimitriev, Hannah Forbes, Dylan Banarse, Zora Tung, Mark Omernick, Colton Bishop, Rachel Sterneck, Rohan 13 Jain, Jiawei Xia, Ehsan Amid, Francesco Piccinno, Xingyu Wang, Praseem Banzal, Daniel J Mankowitz, Alex Polozov, Victoria Krakovna, Sasha Brown, Mohammadhossein Bateni, Dennis Duan, Vlad Firoiu, Meghana Thotakuri, Tom Natan, Matthieu Geist, Ser Tan Girgin, Hui Li, Jiayu Ye, Ofir Roval, Reiko Tojo, Michael Kwong, James Lee-Thorp, Christopher Yew, Danila Sinopalnikov, Sabela Ramos, John Mellor, Abhishek Sharma, Kathy Wu, David Miller, Nicolas Sonnerat, Denis Vnukov, Rory Greig, Jennifer Beattie, Emily Caveness, Libin Bai, Julian Eisenschlos, Alex Korchemniy, Tomy Tsai, Mimi Jasarevic, Weize Kong, Phuong Dao, Zeyu Zheng, Frederick Liu, Fan Yang, Rui Zhu, Tian Huey Teh, Jason Sanmiya, Evgeny Gladchenko, Nejc Trdin, Daniel Toyama, Evan Rosen, Sasan Tavakkol, Linting Xue, Chen Elkind, Oliver Woodman, John Carpenter, George Papamakarios, Rupert Kemp, Sushant Kafle, Tanya Grunina, Rishika Sinha, Alice Talbert, Diane Wu, Denese Owusu-Afriyie, Cosmo Du, Chloe Thornton, Jordi Pont-Tuset, Pradyumna Narayana, Jing Li, Saaber Fatehi, John Wieting, Omar Ajmeri, Benigno Uria, Yeongil Ko, Laura Knight, Amélie Héliou, Ning Niu, Shane Gu, Chenxi Pang, Yeqing Li, Nir Levine, Ariel Stolovich, Rebeca Santamaria-Fernandez, Sonam Goenka, Wenny Yustalim, Robin Strudel, Ali Elqursh, Charlie Deck, Hyo Lee, Zonglin Li, Kyle Levin, Raphael Hoffmann, Dan Holtmann-Rice, Olivier Bachem, Sho Arora, Christy Koh, Soheil Hassas Yeganeh, Siim Põder, Mukarram Tariq, Yanhua Sun, Lucian Ionita, Mojtaba Seyedhosseini, Pouya Tafti, Zhiyu Liu, Anmol Gulati, Jasmine Liu, Xinyu Ye, Bart Chrzaszcz, Lily Wang, Nikhil Sethi, Tianrun Li, Ben Brown, Shreya Singh, Wei Fan, Aaron Parisi, Joe Stanton, Vinod Koverkathu, Christopher A Choquette-Choo, Yunjie Li, T J Lu, Abe Ittycheriah, Prakash Shroff, Mani Varadarajan, Sanaz Bahargam, Rob Willoughby, David Gaddy, Guillaume Desjardins, Marco Cornero, Brona 
Robenek, Bhavishya Mittal, Ben Albrecht, Ashish Shenoy, Fedor Moiseev, Henrik Jacobsson, Alireza Ghaffarkhah, Morgane
Rivière, Alanna Walton, Clément Crepy, Alicia Parrish, Zongwei Zhou, Clement Farabet, Carey Radebaugh, Praveen Srinivasan, Claudia van der Salm, Andreas Fidjeland, Salvatore Scellato, Eri Latorre-Chimoto, Hanna Klimczak-Pluci ´nska, David Bridson, Dario de Cesare, Tom Hudson, Piermaria Mendolicchio, Lexi Walker, Alex Morris, Matthew Mauger, Alexey Guseynov, Alison Reid, Seth Odoom, Lucia Loher, Victor Cotruta, Madhavi Yenugula, Dominik Grewe, Anastasia Petrushkina, Tom Duerig, Antonio Sanchez, Steve Yadlowsky, Amy Shen, Amir Globerson, Lynette Webb, Sahil Dua, Dong Li, Surya Bhupatiraju, Dan Hurt, Haroon Qureshi, Ananth Agarwal, Tomer Shani, Matan Eyal, Anuj Khare, Shreyas Rammohan Belle, Lei Wang, Chetan Tekur, Mihir Sanjay Kale, Jinliang Wei, Ruoxin Sang, Brennan Saeta, Tyler Liechty, Yi Sun, Yao Zhao, Stephan Lee, Pandu Nayak, Doug Fritz, Manish Reddy Vuyyuru, John Aslanides, Nidhi Vyas, Martin Wicke, Xiao Ma, Evgenii Eltyshev, Nina Martin, Hardie Cate, James Manyika, Keyvan Amiri, Yelin Kim, Xi Xiong, Kai Kang, Florian Luisier, Nilesh Tripuraneni, David Madras, Mandy Guo, Austin Waters, Oliver Wang, Joshua Ainslie, Jason Baldridge, Han Zhang, Garima Pruthi, Jakob Bauer, Feng Yang, Riham Mansour, Jason Gelman, Yang Xu, George Polovets, Ji Liu, Honglong Cai, Warren Chen, Xianghai Sheng, Emily Xue, Sherjil Ozair, Christof Angermueller, Xiaowei Li, Anoop Sinha, Weiren Wang, Julia Wiesinger, Emmanouil Koukoumidis, Yuan Tian, Anand Iyer, Madhu Gurumurthy, Mark Goldenson, Parashar Shah, M K Blake, Hongkun Yu, Anthony Urbanowicz, Jennimaria Palomaki, Chrisantha Fernando, Ken Durden, Harsh Mehta, Nikola Momchev, Elahe Rahimtoroghi, Maria Georgaki, Amit Raul, Sebastian Ruder, Morgan Redshaw, Jinhyuk Lee, Denny Zhou, Komal Jalan, Dinghua Li, Blake Hechtman, Parker Schuh, Milad Nasr, Kieran Milan, Vladimir Mikulik, Juliana Franco, Tim Green, Nam Nguyen, Joe Kelley, Aroma Mahendru, Andrea Hu, Joshua Howland, Ben Vargas, Jeffrey Hui, Kshitij Bansal, Vikram Rao, Rakesh Ghiya, Emma Wang, Ke Ye, Jean Michel Sarr, Melanie Moranski Preston, Madeleine Elish, Steve Li, Aakash Kaku, Jigar Gupta, Ice Pasupat, Da-Cheng Juan, Milan Someswar, Tejvi M., Xinyun Chen, Aida Amini, Alex Fabrikant, Eric Chu, Xuanyi Dong, Amruta Muthal, Senaka Buthpitiya, Sarthak Jauhari, Nan Hua, Urvashi Khandelwal, Ayal Hitron, Jie Ren, Larissa Rinaldi, Shahar Drath, Avigail Dabush, Nan-Jiang Jiang, Harshal Godhia, Uli Sachs, Anthony Chen, Yicheng Fan, Hagai Taitelbaum, Hila Noga, Zhuyun Dai, James Wang, Chen Liang, Jenny Hamer, Chun-Sung Ferng, Chenel Elkind, Aviel Atias, Paulina Lee, Vít Listík, Mathias Carlen, Jan van de Kerkhof, Marcin Pikus, Krunoslav Zaher, Paul Müller, Sasha Zykova, Richard Stefanec, Vitaly Gatsko, Christoph Hirnschall, Ashwin Sethi, Xingyu Federico Xu, Chetan Ahuja, Beth Tsai, Anca Stefanoiu, Bo Feng, Keshav Dhandhania, Manish Katyal, Akshay Gupta, Atharva Parulekar, Divya Pitta, Jing Zhao, Vivaan Bhatia, Yashodha Bhavnani, Omar Alhadlaq, Xiaolin Li, Peter Danenberg, Dennis Tu, Alex Pine, Vera Filippova, Abhipso Ghosh, Ben Limonchik, Bhargava Urala, Chaitanya Krishna Lanka, Derik Clive, Yi Sun, Edward Li, Hao Wu, Kevin Hongtongsak, Ianna Li, Kalind Thakkar, Kuanysh Omarov, Kushal Majmundar, Michael Alverson, Michael Kucharski, Mohak Patel, Mudit Jain, Maksim Zabelin, Paolo Pelagatti, Rohan Kohli, Saurabh Kumar, Joseph Kim, Swetha Sankar, Vineet Shah, Lakshmi Ramachandruni, Xiangkai Zeng, Ben Bariach, Laura Weidinger, Tu Vu, Alek Andreev, Antoine He, Kevin Hui, Sheleem Kashem, Amar Subramanya, Sissie Hsiao, 
Demis Hassabis, Koray Kavukcuoglu, Adam Sadovsky, Quoc Le, Trevor Strohman, Yonghui Wu, Slav
Petrov, Jeffrey Dean, and Oriol Vinyals. Gemini: A family of highly capable multimodal models. arXiv [cs.CL] , December 2023. [31] OpenAI, Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, A J Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander M ˛ adry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoochian, Amin Tootoonchian, Ananya 14 Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea V oss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, Dane Sherburn, Daniel Kappler, Daniel Levin, Daniel Levy, David Carr, David Farhi, David Mely, David Robinson, David Sasaki, Denny Jin, Dev Valladares, Dimitris Tsipras, Doug Li, Duc Phong Nguyen, Duncan Findlay, Edede Oiwoh, Edmund Wong, Ehsan Asdar, Elizabeth Proehl, Elizabeth Yang, Eric Antonow, Eric Kramer, Eric Peterson, Eric Sigler, Eric Wallace, Eugene Brevdo, Evan Mays, Farzad Khorasani, Felipe Petroski Such, Filippo Raso, Francis Zhang, Fred von Lohmann, Freddie Sulit, Gabriel Goh, Gene Oden, Geoff Salmon, Giulio Starace, Greg Brockman, Hadi Salman, Haiming Bao, Haitang Hu, Hannah Wong, Haoyu Wang, Heather Schmidt, Heather Whitney, Heewoo Jun, Hendrik Kirchner, Henrique Ponde de Oliveira Pinto, Hongyu Ren, Huiwen Chang, Hyung Won Chung, Ian Kivlichan, Ian O’Connell, Ian O’Connell, Ian Osband, Ian Silber, Ian Sohl, Ibrahim Okuyucu, Ikai Lan, Ilya Kostrikov, Ilya Sutskever, Ingmar Kanitscheider, Ishaan Gulrajani, Jacob Coxon, Jacob Menick, Jakub Pachocki, James Aung, James Betker, James Crooks, James Lennon, Jamie Kiros, Jan Leike, Jane Park, Jason Kwon, Jason Phang, Jason Teplitz, Jason Wei, Jason Wolfe, Jay Chen, Jeff Harris, Jenia Varavva, Jessica Gan Lee, Jessica Shieh, Ji Lin, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joanne Jang, Joaquin Quinonero Candela, Joe Beutler, Joe Landers, Joel Parish, Johannes Heidecke, John Schulman, Jonathan Lachman, Jonathan McKay, Jonathan Uesato, Jonathan Ward, Jong Wook Kim, Joost Huizinga, Jordan Sitkin, Jos Kraaijeveld, Josh Gross, Josh Kaplan, Josh Snyder, Joshua Achiam, Joy Jiao, Joyce Lee, Juntang Zhuang, Justyn Harriman, Kai Fricke, Kai Hayashi, Karan Singhal, Katy Shi, Kavin Karthik, Kayla Wood, Kendra Rimbach, Kenny Hsu, Kenny Nguyen, Keren Gu-Lemberg, Kevin Button, Kevin Liu, Kiel Howe, Krithika Muthukumar, Kyle Luther, Lama Ahmad, Larry Kai, Lauren Itow, Lauren Workman, Leher Pathak, Leo Chen, Li Jing, Lia Guy, Liam Fedus, Liang Zhou, Lien 
Mamitsuka, Lilian Weng, Lindsay
McCallum, Lindsey Held, Long Ouyang, Louis Feuvrier, Lu Zhang, Lukas Kondraciuk, Lukasz Kaiser, Luke Hewitt, Luke Metz, Lyric Doshi, Mada Aflak, Maddie Simens, Madelaine Boyd, Madeleine Thompson, Marat Dukhan, Mark Chen, Mark Gray, Mark Hudnall, Marvin Zhang, Marwan Aljubeh, Mateusz Litwin, Matthew Zeng, Max Johnson, Maya Shetty, Mayank Gupta, Meghan Shah, Mehmet Yatbaz, Meng Jia Yang, Mengchao Zhong, Mia Glaese, Mianna Chen, Michael Janner, Michael Lampe, Michael Petrov, Michael Wu, Michele Wang, Michelle Fradin, Michelle Pokrass, Miguel Castro, Miguel Oom Temudo de Castro, Mikhail Pavlov, Miles Brundage, Miles Wang, Minal Khan, Mira Murati, Mo Bavarian, Molly Lin, Murat Yesildal, Nacho Soto, Natalia Gimelshein, Natalie Cone, Natalie Staudacher, Natalie Summers, Natan LaFontaine, Neil Chowdhury, Nick Ryder, Nick Stathas, Nick Turley, Nik Tezak, Niko Felix, Nithanth Kudige, Nitish Keskar, Noah Deutsch, Noel Bundick, Nora Puckett, Ofir Nachum, Ola Okelola, Oleg Boiko, Oleg Murk, Oliver Jaffe, Olivia Watkins, Olivier Godement, Owen Campbell-Moore, Patrick Chao, Paul McMillan, Pavel Belov, Peng Su, Peter Bak, Peter Bakkum, Peter Deng, Peter Dolan, Peter Hoeschele, Peter Welinder, Phil Tillet, Philip Pronin, Philippe Tillet, Prafulla Dhariwal, Qiming Yuan, Rachel Dias, Rachel Lim, Rahul Arora, Rajan Troll, Randall Lin, Rapha Gontijo Lopes, Raul Puri, Reah Miyara, Reimar Leike, Renaud Gaubert, Reza Zamani, Ricky Wang, Rob Donnelly, Rob Honsby, Rocky Smith, Rohan Sahai, Rohit Ramchandani, Romain Huet, Rory Carmichael, Rowan Zellers, Roy Chen, Ruby Chen, Ruslan Nigmatullin, Ryan Cheu, Saachi Jain, Sam Altman, Sam Schoenholz, Sam Toizer, Samuel Miserendino, Sandhini Agarwal, Sara Culver, Scott Ethersmith, Scott Gray, Sean Grove, Sean Metzger, Shamez Hermani, Shantanu Jain, Shengjia Zhao, Sherwin Wu, Shino Jomoto, Shirong Wu, Shuaiqi, Xia, Sonia Phene, Spencer Papay, Srinivas Narayanan, Steve Coffey, Steve Lee, Stewart Hall, Suchir Balaji, Tal Broda, Tal Stramer, Tao Xu, Tarun Gogineni, Taya Christianson, Ted Sanders, Tejal Patwardhan, Thomas Cunninghman, Thomas Degry, Thomas Dimson, Thomas Raoux, Thomas Shadwell, Tianhao Zheng, Todd Underwood, Todor Markov, Toki Sherbakov, Tom Rubin, Tom Stasi, Tomer Kaftan, Tristan Heywood, Troy Peterson, Tyce Walters, Tyna Eloundou, Valerie Qi, Veit Moeller, Vinnie Monaco, Vishal Kuo, Vlad Fomenko, Wayne Chang, Weiyi Zheng, Wenda Zhou, Wesam Manassra, Will Sheu, Wojciech Zaremba, Yash Patil, Yilei Qian, Yongjik Kim, Youlong Cheng, Yu Zhang, Yuchen He, Yuchen Zhang, Yujia Jin, Yunxing Dai, and Yury Malkov. GPT-4o system card. arXiv [cs.CL] , October 2024. [32] Reka Team, Aitor Ormazabal, Che Zheng, Cyprien de Masson d’Autume, Dani Yogatama, Deyu Fu, Donovan Ong, Eric Chen, Eugenie Lamprecht, Hai Pham, Isaac Ong, Kaloyan Aleksiev, Lei Li, Matthew Henderson, Max Bain, Mikel Artetxe, Nishant Relan, Piotr Padlewski, Qi Liu, Ren Chen, Samuel Phua, Yazheng Yang, Yi Tay, Yuqi Wang, Zhongkai Zhu, and Zhihui Xie. Reka core, flash, and edge: A series of powerful multimodal language models. arXiv [cs.CL] , April 2024. 15 [33] AI@Meta. Llama 4 model card, 2025. [34] Thao Le Minh, N Shimizu, Takashi Miyazaki, and K Shinoda. Deep learning based multi-modal addressee recognition in visual scenes with utterances. International Joint Conference on Artificial Intelligence , abs/1809.04288, July 2018. [35] Koji Inoue, Divesh Lala, Mikey Elmers, Keiko Ochi, and Tatsuya Kawahara. An LLM benchmark for addressee recognition in multi-modal multi-party dialogue. arXiv [cs.CL]
, January 2025. [36] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. arXiv [eess.AS], December 2022. [37] David Bamman, Rachael Samberg, Richard Jean So, and Naitian Zhou. Measuring diversity in Hollywood through the large-scale computational analysis of film. Proceedings of the National Academy of Sciences of the United States of America, 121(46):e2409770121, November 2024. [38] Micha Elsner and Eugene Charniak. Disentangling Chat. Computational Linguistics, 36(3):389–409, September 2010. [39] Jia-Chen Gu, Chongyang Tao, and Zhen-Hua Ling. Who says what to whom: A survey of multi-party conversations. In Luc De Raedt, editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, California, July 2022. International Joint Conferences on Artificial Intelligence Organization. [40] Kent K Chang, Danica Chen, and David Bamman. Dramatic conversation disentanglement. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 4020–4046, Toronto, Canada, July 2023. Association for Computational Linguistics. [41] Natasa Jovanovic and Rieks op den Akker. Towards Automatic Addressee Identification in Multi-party Dialogues. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004, pages 89–92, Cambridge, Massachusetts, USA, April 2004. Association for Computational Linguistics. [42] Hiroki Ouchi and Yuta Tsuboi. Addressee and Response Selection for Multi-Party Conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2133–2143, Austin, Texas, November 2016. Association for Computational Linguistics. [43] Charles Goodwin. Conversational organization: Interaction between speakers and hearers. Academic Press, New York, 1981. [44] Herbert H Clark and Thomas B Carlson. Hearers and Speech Acts. Language, 58(2):332–373, 1982. [45] Herbert H Clark and Edward F Schaefer. Concealing one's meaning from overhearers. Journal of Memory and Language, 26(2):209–225, April 1987. [46] Jia-Chen Gu, Zhenhua Ling, Quan Liu, Cong Liu, and Guoping Hu. GIFT: Graph-induced fine-tuning for multi-party conversation understanding. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11645–11658, Toronto, Canada, July 2023. Association for Computational Linguistics. [47] Micha Elsner and Eugene Charniak. You Talking to Me? A Corpus and Algorithm for Conversation Disentanglement. In Proceedings of ACL-08: HLT, pages 834–842, Columbus, Ohio, June 2008. Association for Computational Linguistics. [48] Jonathan K Kummerfeld, Sai R Gouravajhala, Joseph J Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos, and Walter Lasecki. A Large-Scale Corpus for Conversation Disentanglement, 2019. [49] Herve Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, and Marie-Philippe Gill. pyannote.audio: Neural building blocks for speaker diarization. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, May 2020.
[50] Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, and Junyang Lin. Qwen2.5-omni technical report. arXiv [cs.CL], March 2025. [51] Brian McFee,
Matt McVicar, Daniel Faronbi, Iran Roman, Matan Gover, Stefan Balke, Scott Seyfarth, Ayoub Malek, Colin Raffel, Vincent Lostanlen, Benjamin van Niekirk, Dana Lee, Frank Cwitkowitz, Frank Zalkow, Oriol Nieto, Dan Ellis, Jack Mason, Kyungyun Lee, Bea Steers, Emily Halvachs, Carl Thomé, Fabian Robert-Stöter, Rachel Bittner, Ziyao Wei, Adam Weiss, Eric Battenberg, Keunwoo Choi, Ryuichi Yamamoto, C J Carr, Alex Metsai, Stefan Sullivan, Pius Friesch, Asmitha Krishnakumar, Shunsuke Hidaka, Steve Kowalik, Fabian Keller, Dan Mazur, Alexandre Chabot-Leclerc, Curtis Hawthorne, Chandrashekhar Ramaprasad, Myungchul Keum, Juanita Gomez, Will Monroe, Viktor Andreevitch Morozov, Kian Eliasi, nullmightybofo, Paul Biberstein, N Dorukhan Sergin, Romain Hennequin, Rimvydas Naktinis, beantowel, Taewoon Kim, Jon Petter Åsen, Joon Lim, Alex Malins, Darío Hereñú, Stef van der Struijk, Lorenz Nickel, Jackie Wu, Zhen Wang, Tim Gates, Matt V ollrath, Andy Sarroff, Xiao-Ming, Alastair Porter, Seth Kranzler, V oodoohop, Mattia Di Gangi, Helmi Jinoz, Connor Guerrero, Abduttayyeb Mazhar, toddrme, Zvi Baratz, Anton Kostin, Xinlu Zhuang, Cash Tinghin Lo, Pavel Campr, Eric Semeniuc, Monsij Biswal, Shayenne Moura, Paul Brossier, Hojin Lee, Waldir Pimenta, Jon Petter Åsen, Shin Hyun, S Iliya, Eugene Rabinovich, Geo Lei, Jize Guo, Phillip S M Skelton, Matt Pitkin, Anmol Mishra, Slava Chaunin, BenedictSt, Scott VanRavenswaay, and David Südholt. librosa/librosa: 0.11.0, March 2025. [52] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations) , Bangkok, Thailand, 2024. Association for Computational Linguistics. [53] Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. SummScreen: A Dataset for Abstractive Screenplay Summarization. In Proceedings of the 60th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers) , pages 8602–8615, Dublin, Ireland, May 2022. Association for Computational Linguistics. [54] Kent K Chang, Anna Ho, and David Bamman. Subversive characters and stereotyping readers: Character- izing queer relationalities with dialogue-based relation extraction. In Wouter Haverals, Marijn Koolen, and Laure Thompson, editors, Proceedings of the Computational Humanities Research Conference 2024 , Aarhus, Denmark, October 2024. CEUR Workshop Proceedings. [55] Yatai Ji, Shilong Zhang, Jie Wu, Peize Sun, Weifeng Chen, Xuefeng Xiao, Sidi Yang, Yujiu Yang, and Ping Luo. IDA-VLM: Towards movie understanding via ID-aware large vision-language model. In The Thirteenth International Conference on Learning Representations , 2025. [56] Bargav Jayaraman, Chuan Guo, and Kamalika Chaudhuri. Déjà vu memorization in vision–language models. In A Globerson, L Mackey, D Belgrave, A Fan, U Paquet, J Tomczak, and C Zhang, editors, Advances in Neural Information Processing Systems , volume 37, pages 50722–50749. Curran Associates, Inc., 2024. [57] Lun Wang, Om Thakkar, and Rajiv Mathews. Unintended memorization in large ASR models, and how to mitigate it. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, April 2024. [58] Kent K Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. 
In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 7312–7327,
Singapore, December 2023. Association for Computational Linguistics. [59] Melanie Walsh, Anna Preus, and Maria Antoniak. Sonnet or not, bot? Poetry evaluation for large models and datasets. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15568–15603, Miami, Florida, USA, November 2024. Association for Computational Linguistics. [60] Harvey Sacks, Emanuel A Schegloff, and Gail Jefferson. A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language, 50(4):696–735, 1974. [61] Robert McKee. Dialogue: The art of verbal action for page, stage, and screen. Hachette UK, 2016. [62] Rongxin Zhu, Jey Han Lau, and Jianzhong Qi. Findings on Conversation Disentanglement. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 1–11, Online, December 2021. Australasian Language Technology Association. [63] Jina Lee and Stacy Marsella. Modeling side participants and bystanders: The importance of being a laugh track. In Intelligent Virtual Agents, Lecture Notes in Computer Science, pages 240–247. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. [64] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. [65] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. In International Conference on Learning Representations (ICLR), 2024.

A Supplementary information

A.1 Annotation guidelines

Note to the Reader. This section outlines the conceptual framework we use to annotate conversational roles and threads in multi-party conversation.8 We draw on traditions from sociolinguistics, linguistic anthropology, and television studies, especially the frameworks developed by Clark and colleagues [44, 45], Sacks, Schegloff, and Jefferson [60], and Goodwin [43], who together lay the foundation of conversation analysis and provide the basis of the taxonomy on which our role attribution task is built; Goffman [1], on the face-to-face, social nature of dialogic interactions; and finally McKee [61], who analyzes TV dialogue using a familiar vocabulary (Austinian speech acts, for instance). Our annotation scheme is the result of several iterations, beginning with a pilot round and subsequent discussions with annotators trained in rhetoric, cognitive science, film studies, and related disciplines. Based on observations from the pilot, we refined the definitions of utterance boundaries, reply-to structure, and role attribution to better reflect how participants (or characters on screen) engage in face-to-face dialogue in TVQA data. We hope the annotation guidelines presented below will bring out the nuances of the annotation tasks at hand and their intellectual stakes, as well as provide the theoretical scaffolding that informs our design choices.

* * *

In any dialogic interaction, conversational participants manage and direct attention towards each other, and in doing so they inhabit various roles (speaker, addressee, etc.). For Goffman [1] in particular, conversations are not merely linguistic exchanges but ritualized social encounters, regulated by norms that govern such phenomena as turn-taking and floor-claiming. These norms extend beyond linguistic structures and are
often mediated through non-verbal and non-linguistic cues (gaze, posture, etc.). If we follow Goffman and attempt to uncover the social architecture underlying conversational interaction, we can shed light on the tacit rules and alignments that structure how people participate in talk; in the context of media studies, this enables us to probe issues of representation: How do characters include or exclude others? How do they calibrate intimacy, formality, or confrontation? Who gets to speak without contest? Who must listen without reply?

We address three core questions for capturing the dynamics of conversational interactions:

Who initiates the conversation and controls conversational flow (holds the floor)? This draws on McKee's treatment of dialogue as speech act (characters initiate dialogue to perform an action) and Goffman's idea that conversation involves the distribution of attention among its participants. Characters who get to start a conversation and maintain the floor can be more powerful or otherwise interesting.

Who is explicitly addressed by the speaker? This reflects Goodwin's and Clark's emphasis on speakers and hearers, and Goodwin's more general inquiry into conversational organization, which helps shed light on interpersonal relationships and interactive dynamics.

Who participates implicitly as a side-participant or bystander? Building on the notion of adjacency pairs [60] and Clark's distinction between ratified and non-ratified participation, participants who are present but not addressed can illuminate social hierarchies, inclusion/exclusion patterns, and attentional structures within multi-party conversations.

The annotation guidelines are organized as follows: §A.1.1: Definitions; §A.1.2: Annotation procedure; §A.1.3: Examples; §A.1.4: Annotation quality; §A.1.5: Limitations.

A.1.1 Definitions

This subsection is grounded primarily in [1, 61].

8 Written primarily by AH and KC, with input and guidance from other co-authors.

Utterance. An utterance is a single unit of spoken dialogue, roughly equivalent to a dialogue line. It is a bounded communicative act performed by a speaker within an interaction, driven by the needs of the plot, the speaker, or other characters in the context of a TV show. Unlike a statement, which is defined by its propositional content and logical truth conditions, an utterance is defined by its timing, delivery, and interactional role. Utterances need not be syntactically complete sentences; they function as moves that accomplish social action. A particular type of utterance is the utterance of interest (UOI), the current utterance being annotated. For ease of annotation, we present individual utterances as single sentences as transcribed by Whisper [36]. They do not have to be complete sentences, nor does an utterance have to constitute a completed turn in and of itself. This allows us to annotate conversational roles at a reasonable granularity: different lines in the same turn can have different role attributions.

Reply-to. The reply-to relation connects an utterance of interest to its most relevant preceding utterance, also known as its parent utterance, representing the line to which it responds most directly. This framing is common in the NLP literature [62], where the task is formalized as a link from a child utterance to its parent and the resulting relations are modeled as a directed graph.
The default parent utterance is the immediately preceding line; if there is no logical antecedent utterance, the UOI is the beginning of a new thread (more below). Since we present each utterance at the sentence level, a special form of reply-to is the continuation: if the speaker is still in the midst of their turn and the UOI merely continues the previous line, then the previous line is the natural and logical parent utterance. In this view, we might see TV dialogues as linear sequences, where one line triggers and informs the next, until the current thread runs its course.

Conversational thread. A conversational thread is a coherent sequence of utterances sharing a sustained focus (either on a character or on a topic), adapted from Goffman's definition of focused interaction. Dramatic scenes often involve multiple, overlapping threads, each driven by the goals and intentions of characters (aligning with McKee's concept of dramatic intent). Thread annotation involves the following steps:

• For each UOI (u_t), identify the single most relevant preceding utterance as its parent utterance (u_p).
• If no suitable preceding utterance exists, annotate the UOI as replying to itself (indicating a new conversational thread).

This annotation captures how dramatic dialogues unfold through pairs of conversational turns, marking shifts in attention, character intention, or conversational control. These conversational threads (also known as sub-conversations) represent the latent structure of conversations, and the central aim is to segment a broader dialogue into coherent units, each representing a stable distribution of attention among conversational participants. Crucially, our notion of a thread is informed by Goffman's interactional order, wherein conversation is framed as a ritualized social encounter: participants are not merely exchanging information but engaging in a tacit choreography governed by socially shared norms of attention, responsiveness, and turn-taking. These practices include obligations to respond when addressed (more below), to respect shifts in floor control, and to monitor others' contributions for relevance, attending to a variety of verbal and non-verbal cues. As such, we emphasize both topical and floor change; indeed, as Goffman notes and demonstrates in his analysis, conversation is not organized by the semantic content of utterances alone. In other words, conversational threading is fundamentally different from, e.g., running topic modeling to cluster utterances: we care about the observable distribution of attention, not the distribution of words.

Disentangling a conversation by correctly resolving reply-to relations serves two complementary ends: on the one hand, it clarifies the information structure of a multi-party exchange; on the other, the observable structure of who replies to whom, and when, reflects the rules of engagement that make conversation intelligible and socially meaningful. Since threads are transitive closures of pairwise reply-to links, they can be recovered mechanically once every utterance is linked (see the sketch below). For more information, see the Appendix of [40], including specific tests and examples.
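To make the relationship between reply-to links and threads concrete, here is a minimal sketch of how threads can be recovered once every utterance carries a reply-to annotation. The representation is assumed for illustration only (a list in which entry i holds the parent index of utterance i, with a self-link marking a new thread, mirroring our SELF tag); this is not our released tooling:

def threads_from_reply_to(reply_to: list[int]) -> list[list[int]]:
    """reply_to[i] is the parent index of utterance i (or i itself)."""
    thread_of: dict[int, int] = {}  # utterance index -> thread id
    threads: list[list[int]] = []
    for i, parent in enumerate(reply_to):
        if parent == i:  # self-link: a new thread begins
            thread_of[i] = len(threads)
            threads.append([i])
        else:  # otherwise, inherit the parent's thread
            tid = thread_of[parent]
            thread_of[i] = tid
            threads[tid].append(i)
    return threads

# The movie-theater example from §A.1.3: #1 opens a thread, #2 replies
# to #1, and #3 replies to #2, so all three form a single thread.
print(threads_from_reply_to([0, 0, 1]))  # [[0, 1, 2]]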
Conversational roles. While disentanglement focuses on the structural linkage between utterances, conversational role attribution concerns the dynamic social positions that participants occupy throughout the course of an interaction. Each utterance is produced within a shifting ecology of roles; an understanding of who is speaking, to whom, and in front of whom is key to interpreting both the literal and the social meaning of an interaction.

Table 6: Participant role matrix.

Role                 addressed   ratified   known
Addressee                +           +        +
Side-participants        −           +        +
Bystanders               −           −        ±

The first role is the speaker, the animator, the source of the utterance at time t. Following Goodwin's conversational analysis [43] and Clark's role taxonomy [44, 45], we treat conversational roles as those observably projected at t: a speaker's alignment towards others (signaled through gaze, body orientation, and lexical choice) casts the other participants into roles. These roles are independent of whether a projected recipient ultimately hears, understands, or replies. We annotate each character present in the scene along three dimensions, summarized in Table 6:

1. Addressed: Is the character explicitly spoken to by the speaker?
2. Ratified: Is the character recognized by participants as legitimately part of the conversational group?
3. Perceptually engaged: Is the character known (or shown visually, via multi-modal cues) to be attending to or perceiving the utterance?

This yields the following roles (restated programmatically in the sketch after this list):

• Speaker: the speaker of the line.
• Addressee: the intended recipient of the line; they are directly addressed, ratified as a co-participant, and likely visibly attending to the speaker, who, in turn, likely constructs the utterance with the addressee in mind; the addressee is expected to respond to or acknowledge the utterance in some form.
• Side-participant: a participant who is ratified and perceptually engaged, but not explicitly addressed. This term is not really used by Goffman, but we prefer it because it has been used in previous work related to ours (e.g., [63]).
• Bystander: a participant who is neither explicitly addressed nor ratified; they may or may not be perceptually engaged (this includes overhearers).

If someone is not in any of the roles above, they are automatically a bystander, so we do not annotate them specifically.
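As a compact restatement of Table 6 and the definitions above, the sketch below maps the three annotated dimensions onto a role label for non-speaking characters. It is purely illustrative; the function name and boolean encoding are our own, and the speaker is assumed to be identified separately:

def participant_role(addressed: bool, ratified: bool, engaged: bool) -> str:
    """Map (addressed, ratified, perceptually engaged) to a role label."""
    if addressed and ratified and engaged:
        return "addressee"         # Table 6, row 1: + + +
    if not addressed and ratified and engaged:
        return "side-participant"  # Table 6, row 2: - + +
    # Table 6, row 3 (- - ±), and any combination the matrix leaves
    # undefined: per the guidelines, everyone else is a bystander.
    return "bystander"

assert participant_role(True, True, True) == "addressee"
assert participant_role(False, True, True) == "side-participant"
assert participant_role(False, False, False) == "bystander"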
A.1.2 Annotation procedure

Annotation takes place on a dedicated interface (Fig. 2). For each utterance in a video clip, annotators follow these concrete steps, based on the list of participants provided to them (derived from the cast list of the episode):

1. Watch the reconstructed clip.
2. Identify the speaker and the explicitly addressed participants, which usually involves verifying or correcting existing speaker labels.
3. Identify the UOI and determine its parent utterance (or mark it as a new thread, using SELF).
4. Identify participants who are ratified and perceptually engaged but not explicitly addressed (side-participants). Participants are represented as draggable badges, which can be moved from the original list into labeled fields.

We include the following tags to handle unidentifiable participants: unknown (a character not referred to by anyone else, not identified by our face recognition pipeline, or whose identity is otherwise never revealed to the audience); crowd (a group of unidentifiable people, as in public speaking); OS (off-screen and unknown, a special edge case that we might not consider). In annotating the clips, you are encouraged to re-watch the clip and spend as long as you require, but we recommend a time limit (of, say, 5 minutes) per clip on your first pass so you do not get stuck. Your judgment should be based entirely on the clip and the clip alone.

Figure 2: Annotation interface for multimodal conversation structure understanding. Annotators proceed through four steps: a.) watch the reconstructed video clip, b.) verify or correct the speaker label, c.) review and update the reply-to link, and d.) assign addressee(s) and side-participant(s) by dragging the relevant participant badges from the list, derived from the cast list, next to the video.

Figure 3: Annotation enhancements for edge cases and workflow efficiency. Annotators can select characters and special “participant” tokens for non-standard utterances. If the speaker and addressee(s)/side-participant(s) are the same as in the previous line (a continuation of the previous sentence, for example), checkboxes allow annotators to carry the roles over to save time; further checkboxes mark a line as non-dialogic (extra-diegetic or monologue). The goal is to minimize redundancy and improve overall annotation consistency.

Non-dialogic checkboxes. Given the narrative form of the TV series, we introduce the following shorthands to indicate utterances that might not appear in typical face-to-face interactions:

• Extra-diegetic (checkbox): to indicate narrators, “previously on the show” recaps, or other kinds of dialogue lines that are not part of any actual conversation
• Monologue (checkbox): to better distinguish situations where the character speaks to themselves or to inanimate objects
• _OS tag: to indicate that the identity of the character is generally known to the audience but not within the scope of the clip

These give us a basis for filtering out edge-case utterances when we assess the performance of each model on this task. We also provide a couple of other quality-of-life features; see Fig. 3.

A.1.3 Examples

For a typical example, let’s consider this scene from The Big Bang Theory (season 2, episode 9, “The White Asparagus Triangulation”, segment 02, clip 04).9 In it, we observe three characters seated in a row at a movie theater, all of whom are eligible conversational participants: Stephanie Barnett (Sara Rue), on the left, is turned slightly toward the others; Leonard Hofstadter (Johnny Galecki) sits upright, angled more towards Sheldon Cooper (Jim Parsons) on his right:

SHELDON: I’ll find us seats? (#1)
STEPHANIE: Oh no, we have seats. (#2)
LEONARD: Not the right seats. (#3)

9 Segment and clip numbers are given in TVQA.

Utterance #1 opens the exchange and appears to be phrased tentatively, which suggests a proposal. Its orientation to both Stephanie and Leonard suggests a plural us as the intended group of recipients. Neither the visuals (no direct gaze) nor a vocative makes the addressee explicit, so we assume both addressees are equally implicated. Being the first utterance in the clip, this is also a thread initiator. Stephanie provides a reply that rejects Sheldon’s offer (utterance #2); it is triggered and necessitated by utterance #1, which is clearly its parent, or reply-to, utterance.
Here, we see Stephanie looking directly at, and speaking directly to, Sheldon, which casts Leonard as a side-participant. Leonard's utterance (#3) is a reply to #2, as it builds on Stephanie's claim of having found seats but disputes its adequacy. Semantically, we could also read it as a response to Sheldon's original proposal, since it supports the need to continue looking for seats; structurally, however, we annotate the reply-to as #2 for two reasons: a.) Leonard would have no reason to say this were it not for Stephanie's utterance #2, which makes the choice of #1 less justifiable, as it also overrides the default reply-to; b.) since threads are transitive closures of pairwise reply-to relations, and all three utterances respond to Sheldon's original need to find seats, they form a single thread in any case, which already captures their connection; it is therefore important to distinguish between thread membership and a single reply-to link. (A minimal sketch of this closure appears at the end of this subsection.)

Here are some edge cases:

Incapacitated side-participants. In this scene from House M.D. (season 4, episode 2, segment 2, clip 11), a team of doctors speaks about and to a patient who has locked herself in a room and is experiencing hallucinations. Although she is fully conscious, a named speaking character, and within hearing range of the conversations about her, she is not annotated as a side-participant in the conversations about her symptoms: she is preoccupied by those symptoms, is not visually depicted as listening to the conversation, and is not capable of interrupting or joining it. Later in the scene, however, she is labeled as an addressee after responding to her name.

Off-screen side-participants. In this scene from House M.D. (season 4, episode 16, segment 2, clip 10), a roomful of doctors speaks amongst themselves while one doctor, James Wilson, sits on the side and remains largely unspeaking and unshown. Though the camera is rarely on him and the other doctors address each other rather than him, he qualifies as a side-participant because he is in the same space and scene as the other interlocutors.

Non-human/inanimate objects and monologues. In this Friends scene (season 10, episode 06, segment 2, clip 17), the speaker addresses a duck, which appears on screen. After discussion, we concluded that addressees must be portrayed by a human in order to be counted, and left the addressee blank. The label "monologue" distinguishes this particular absence of an addressee from others that arise in different contexts (e.g., the speaker talking to themselves or having no clear audience).

Extra-diegetic moments. In this scene from How I Met Your Mother (season 6, episode 9, segment 2, clip 3), speakers cluster around a television to watch a game show, which is then shown on screen with recognizable characters and dialogue. Annotators marked all game show dialogue as extra-diegetic and did not include the TV-watchers as addressees or side-participants.
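Since threads are defined above as transitive closures of pairwise reply-to links, a thread can be recovered mechanically once every utterance carries a reply-to index. The following minimal Python sketch (illustrative only; this is not the code of our pipeline) derives thread clusters under the SELF convention, where a thread-initiating utterance points to its own index:

def threads_from_reply_to(reply_to: list[int]) -> list[list[int]]:
    """Cluster utterance indices into threads: the transitive closure of
    reply-to links. Thread initiators point to their own index (SELF)."""
    def root(i: int) -> int:
        # A reply-to always points to the current or an earlier line,
        # so this walk terminates at a self-loop (the thread initiator).
        while reply_to[i] != i:
            i = reply_to[i]
        return i

    clusters: dict[int, list[int]] = {}
    for i in range(len(reply_to)):
        clusters.setdefault(root(i), []).append(i)
    return list(clusters.values())

# The Big Bang Theory exchange above (#1 starts a thread, #2 replies to #1,
# #3 replies to #2) yields a single thread:
# threads_from_reply_to([0, 0, 1]) -> [[0, 1, 2]]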
A.1.4 Annotation quality

We report inter-annotator agreement in Table 7, using the same set of evaluation metrics applied to our models. Agreement is computed as the average over all pairwise comparisons among the four primary annotators. We observe a clear improvement across all metrics in the second round of annotation, which reflects increased consistency and agreement after iterative refinement of the guidelines.

Table 7: Annotation quality. Improvements of the main round over the pilot round are given in parentheses.

                     Conversational roles                 Conversational threads
              Speaker    Addressees  Side-part.    Linking         Clustering
              Acc.       Set F1      Set F1        F1        1−NVI    1−1      EM F1
Pilot round   86.20      86.07       82.87         86.07     83.85    77.78    35.94
Main round    97.58      92.52       85.43         97.11     92.87    89.25    70.56
              (↑11.38)   (↑6.45)     (↑2.56)       (↑11.03)  (↑9.01)  (↑11.48) (↑34.62)

A.1.5 Limitations

The practical limitations come from our TVQA post-processing pipeline. For instance, annotators observed that in fast-paced conversations (many different speakers, or many rapid exchanges between two interlocutors), Whisper would encounter segmentation issues: an utterance would be misattributed to a previous speaker's line rather than identified as an independent utterance (as observed in House M.D., season 5, episode 18, segment 2, clip 20). This error, though infrequent, unsurprisingly suggests that clips featuring sustained, uninterrupted speech from a single speaker, whether as a series of in-scene utterances or as monologic narration overlaid on a series of images, are more likely to be correctly attributed and segmented than those with many voices, even independent of visual face counts. Future studies may examine the role of interruption, whether verbal, visual, or contextual, in multimodal conversational understanding, as well as what constitutes the boundaries of an utterance as it relates to transcription and segmentation.

A.2 Additional experimental details

We do not change the default temperature of the LLMs that we evaluate. We use a Pydantic schema for structured output (see Fig. 4). Our system instructions for Gemini 2.0 Flash, similar to those used for the other models we consider, are shown in Fig. 5; they are a succinct summary of our annotation guidelines. For our LLaMA 4 Scout experiments, we run the model in full precision on four H100 GPUs served by vLLM [64], which takes around 3 hours to complete. Qwen 2.5-Omni 7B inference is performed in full precision with Flash Attention 2 enabled [65] on four L40S GPUs, which takes around 7 hours. For Qwen 2.5-Omni 7B LoRA SFT, we follow the instructions and configurations provided at https://github.com/hiyouga/LLaMA-Factory/pull/7537.

from pydantic import BaseModel

class ConversationalRoles(BaseModel):
    line_index: int
    reply_to: int
    speaker: str
    addressees: list[str]
    side_participants: list[str]

class ClipRoles(BaseModel):
    clip_roles: list[ConversationalRoles]

Figure 4: Pydantic schema.

You are a video analysis assistant. Your task is to analyze the conversations in a video clip and its associated subtitles. For each dialogue line, you will:

* determine what previous line it is replying to
* determine the speaker, addressees, and side-participants

Here's how to determine the reply-to relationship between utterances to resolve conversational threads:

* The reply-to structure gives us information about floor-claiming and topical change within the clip.
* The character is saying this line because they want to respond to that previous line. What previous line is this current line replying to?
* If the speaker of the last line is the same, you can treat it as a continuation and put the index of the last line as the reply-to.
* If no previous line triggers this line, then write the current line index, indicating the current line replies to itself, which marks the start of a new conversational thread.

Here's how to determine each role:

* **Speaker:** The character who is speaking the line. Infer this from lip movements, body language, and the context of the dialogue. If a character finishes one line and immediately starts another (very short pause), assume it's the same speaker, UNLESS there's a clear visual indication of a scene or speaker change (e.g., a camera cut to a different person starting to speak).
* **Addressee(s):** The character(s) the speaker is *directly* addressing. Use these cues:
    * **Eye Contact:** The most important cue. Who is the speaker looking at?
    * **Body Orientation:** Is the speaker's body turned towards a particular person or group?
    * **Dialogue Context:** Does the line contain a name, pronoun ("you"), or clearly refer to a specific individual or group? ("Hey, John..." or "You all need to...")
    * **Reactions:** If a character reacts immediately and strongly to a line (e.g., nods, responds verbally, shows surprise), they are likely an addressee.
    * If the speaker seems to be talking to everyone present, list all characters who appear to be paying attention.
    * If the speaker is talking to a crowd of unidentifiable characters, write "crowd".
    * If the speaker is talking to themselves, or no one in particular, write "none".
* **Side-Participant(s):** Any character(s) visible in the scene *during the line's timeframe* who are *not* the speaker or addressees. They are present, and their presence is known to other participants. They can potentially join the conversation at any time.
    * If it is not possible to confidently determine if someone is a side-participant, write "unknown".
    * If there are no side-participants, write "none".

**Input:**

You will receive a list of subtitle entries. Each entry will be a dictionary with the following keys:

* `"line_index"`: (int) The index of the current entry (subtitle line).
* `"start_time"`: (float) The start time of the subtitle line in seconds.
* `"end_time"`: (float) The end time of the subtitle line in seconds.
* `"text"`: (string) The text of the dialogue line.

You will also receive a list of potential participants for you to assign roles from. You must pick from this list. With all this information, analyze the video segment corresponding to the `start_time` and `end_time` of each subtitle entry.

**Output:**

Provide your output in JSON format, mirroring the structure of the input. For *each* subtitle entry, add the following keys:

* `"line_index"`: (int) The line being analyzed.
* `"reply_to"`: (int) The line index that this current line replies to; could be the same as the current line index or any previous line index.
* `"speaker"`: (string) The name of the speaker. If you cannot determine the speaker, use "unknown".
* `"addressees"`: (list of strings) A list of the names of the addressee(s). This can be an empty list (`[]`) if
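To make concrete how the schema in Fig. 4 constrains model responses, here is a minimal usage sketch (ours, for illustration; it assumes Pydantic v2 and reuses the ClipRoles class from Fig. 4, and the JSON payload is a made-up example). A raw JSON reply from the model is validated against the schema, and malformed output raises a ValidationError:

import pydantic

raw_reply = (
    '{"clip_roles": [{"line_index": 0, "reply_to": 0, '
    '"speaker": "Sheldon Cooper", '
    '"addressees": ["Stephanie Barnett", "Leonard Hofstadter"], '
    '"side_participants": []}]}'
)

try:
    clip = ClipRoles.model_validate_json(raw_reply)  # Pydantic v2 API
    print(clip.clip_roles[0].speaker)                # -> Sheldon Cooper
except pydantic.ValidationError as err:
    print("Model output violated the schema:", err)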
arXiv:2505.17537v1 [cs.CL] 23 May 2025

How Knowledge Popularity Influences and Enhances LLM Knowledge Boundary Perception

Shiyu Ni 1,2,3, Keping Bi 1,2,3, Jiafeng Guo 1,2,3, Xueqi Cheng 1,2,3
1 CAS Key Lab of Network Data Science and Technology, ICT, CAS
2 State Key Laboratory of AI Safety
3 University of Chinese Academy of Sciences
{nishiyu23z, bikeping, guojiafeng, cxq}@ict.ac.cn

Abstract

Large language models (LLMs) often fail to recognize their knowledge boundaries, producing confident yet incorrect answers. In this paper, we investigate how knowledge popularity affects LLMs' ability to perceive their knowledge boundaries. Focusing on entity-centric factual question answering (QA), we quantify knowledge popularity from three perspectives: the popularity of entities in the question, the popularity of entities in the answer, and relation popularity, defined as their co-occurrence frequency. Experiments on three representative datasets containing knowledge of varying popularity show that LLMs exhibit better QA performance, higher confidence, and more accurate perception on more popular knowledge, with relation popularity having the strongest correlation. Because knowledge popularity correlates strongly with LLMs' QA performance, we propose to leverage these signals for confidence calibration. This improves the accuracy of answer correctness prediction by an average of 5.24% across all models and datasets. Furthermore, we explore prompting LLMs to estimate popularity without external corpora, which yields a viable alternative.1

1 The code can be found at GitHub Repository.

1 Introduction

Large language models (LLMs) (Achiam et al., 2023; Yang et al., 2024; Dubey et al., 2024) often hallucinate, producing fluent but factually incorrect answers, which is unacceptable in safety-critical domains such as healthcare. Accurately identifying when LLMs produce correct answers not only helps determine when to trust their outputs, but also enables adaptive retrieval-augmented generation (RAG)—performing retrieval only when they do not know the answer—thereby enhancing both the effectiveness and efficiency of RAG (Ni et al., 2024a). A reliable model should have a clear perception of its knowledge boundaries—knowing what it knows and what it does not. This requires its confidence in an answer, reflected in the generation probability, to align with the actual likelihood of the answer being correct (Jiang et al., 2021). While many studies have examined LLMs' perception of their knowledge boundaries and found that they tend to be overconfident (Lin et al., 2022; Tian et al., 2023), the underlying factors influencing this perception remain poorly understood.

A natural hypothesis is that a model's perception level is influenced by the popularity of the knowledge—i.e., how frequently the model has encountered it during training. When asked about popular knowledge, the model may be more likely to respond both correctly and confidently. Prior work (Mallen et al., 2023) has shown that LLMs achieve better QA performance on more popular questions. This raises a key question: how do the model's confidence and its alignment with QA performance vary with knowledge popularity?

To investigate this, we focus on entity-centric factual QA (Mallen et al., 2023; Yuksekgonul et al., 2023), where both the question and the answer contain an entity, because this enables us to quantify the popularity of knowledge based on entities. Specifically, we assess knowledge popularity from the following three perspectives: 1) Question popularity PopQ: popularity of the entity in the question. 2) Ground-truth answer popularity PopGT: popularity of the entity in the ground-truth answer. 3) Ground-truth relation popularity RPopGT: the co-occurrence frequency of the question and ground-truth entities.
Higher entity popularity suggests more accurate entity representations. Relation popularity directly influences the model's ability to comprehend associations between entities, but is costly to collect.

Since ground-truth answers are unavailable in real-world scenarios, we also examine model-generated answers. Specifically, we investigate whether the popularity of the generated answer (PopGe) and the relation popularity between the generated and question entities (RPopGe) reflect the model's QA performance, confidence, and perception level. We focus in particular on their correlation with QA performance, as a strong correlation could allow these signals to be used for calibrating the model's confidence.

We conduct experiments on three entity-centric factual QA datasets—Movies, Songs, and Basketball—constructed from Wikidata knowledge triplets by Yuksekgonul et al. (2023). Some question examples can be seen in Figure 1. We quantify entity popularity by the number of Wikidata language editions in which an entity appears. Relation popularity is measured by the number of Wikipedia documents where both entities are mentioned together. We use two representative open-source models—LLaMA3-8B-Instruct (Dubey et al., 2024) and Qwen2-7B-Instruct (Yang et al., 2024)—as well as the black-box model ChatGPT (Achiam et al., 2023).

Results on PopQ, PopGT, and RPopGT show that LLMs demonstrate better QA performance, higher confidence, and more accurate perception of their knowledge boundaries on more popular knowledge. Although LLMs are generally overconfident, the extent of overconfidence diminishes as knowledge popularity increases, since QA performance improves more rapidly than confidence. Among the three popularity measures, RPopGT shows the strongest correlation with QA accuracy, confidence, and perception level in most cases. Interestingly, question popularity correlates more strongly with confidence than with QA performance, implying that LLMs may become overconfident simply due to familiarity with the question.

Regarding generated answers, RPopGe shows a strong positive correlation with QA performance, confidence, and perception level, while PopGe exhibits a weaker correlation. Notably, RPopGe shows an even stronger correlation with QA performance than RPopGT, while PopGe correlates more weakly than PopGT. We further analyze the reason and reveal that when LLMs make errors, they tend to generate more popular entities that co-occur less frequently with the question entity compared to ground-truth answers, indicating a tendency toward over-generalization. This is consistent with the findings of Zhang et al. (2024b).

Based on these findings, we propose to leverage popularity features (i.e., PopQ, PopGe, and RPopGe) to calibrate confidence, aiming to improve the effectiveness of confidence in predicting answer correctness. Given that computing knowledge popularity requires access to external corpora and incurs additional collection costs, we also investigate prompting the model to estimate popularity on its own. Results show that PopQ and PopGe provide modest gains in calibration. In contrast, RPopGe provides substantial gains.
Combining all three types of popularity yields the best calibration performance, boosting answer correctness prediction by an average of 5.24% across all models and datasets.
Moreover, leveraging model-estimated popularity also performs well for confidence calibration. The choice between external corpora and self-estimation ultimately hinges on the trade-off between performance and efficiency.

2 Related Work

Existing research on model knowledge boundary perception focuses on assessing model confidence and can be classified into four main categories.

Probabilistic Confidence. This line of research treats the generation probability of the answer as the confidence of the model (Guo et al., 2017; Desai and Durrett, 2020; Jiang et al., 2021; Kadavath et al., 2022; Si et al., 2022; Kuhn et al., 2023). Guo et al. (2017) examined early neural networks (e.g., ResNet (He et al., 2016)) and found them to be overconfident, proposing temperature scaling as a remedy. Later, Desai and Durrett (2020) showed that BERT-style models tend to be relatively well-calibrated, while Jiang et al. (2021) found that pre-trained language models such as T5 (Raffel et al., 2020) remained overconfident. More recent work has turned to LLMs, with studies showing that they, too, exhibit overconfidence (Si et al., 2022; Lin et al., 2022; Tian et al., 2023).

Verbalized Confidence. LLMs have been shown to express their confidence verbally (Lin et al., 2022; Yin et al., 2023; Tian et al., 2023; Xiong et al., 2023; Yang et al., 2023; Ni et al., 2024a). Some studies (Yin et al., 2023; Ni et al., 2024a) found that LLMs often fail to recognize their knowledge limitations verbally and tend to be overconfident. Xiong et al. (2023) systematically studied black-box approaches for estimating LLM confidence. Beyond prompting-based methods, some studies aim to train LLMs to verbalize more accurate confidence (Lin et al., 2022; Yang et al., 2023; Zhang et al., 2024a).

Movies:      Who is the director of the movie [Movie Name]?
Songs:       Who is the performer of the song [Song Name]?
Basketball:  Where is the birthplace of the basketball player [Player Name]?

Figure 1: Question examples for each dataset.

Consistency-based Confidence. If the model is confident in its answer, it should maintain consistency across multiple generations. Recent studies have used self-consistency across generations as a proxy for LLM confidence (Manakul et al., 2023; Kuhn et al., 2023). Zhang et al. (2023) extended this by evaluating the consistency of answers across multiple semantically equivalent inputs and across different models. Ding et al. (2024) further adapted this approach to the multilingual setting.

Confidence Estimation via LLM Internal States. LLMs' internal states have been shown to be effective in evaluating the factuality of their self-generated content (Su et al., 2024; Chen et al., 2024; Wang et al., 2024; Ni et al., 2025). Specifically, Su et al. (2024) and Chen et al. (2024) focused on internal states after generation, Wang et al. (2024) examined those before generation, and Ni et al. (2025) explored leveraging LLMs' internal states to enhance their perception of knowledge boundaries from efficiency and risk perspectives.

We focus on probabilistic confidence for the following reasons: 1) Both the model's generation probabilities and its knowledge acquisition arise from the same training objective, and are expected to align with each other.
2) Models without specialized training often struggle to verbalize confidence accurately (Ni et al., 2024b); consistency-based methods require multiple generations and incur high inference costs; and internal-state-based approaches require access to hidden representations and additional training. In contrast, probabilistic confidence is readily accessible and has been shown to perform well, especially when answers are short (Ding et al., 2024).

3 Task Description

Entity-Centric QA. We focus on entity-centric knowledge because it allows us to measure knowledge popularity through entities. In entity-centric QA, questions and answers are derived from knowledge triples of the form (subject, relation, object), where the question queries the relation of a given subject, and the model is expected to generate the corresponding object. Examples of knowledge triples are provided in Table 1, with their transformed question forms shown in Figure 1.

LLM Knowledge Boundary Perception. The model's perception of its knowledge boundaries is evaluated by the alignment between its confidence and actual QA performance. QA performance is measured by whether the generated answer contains the ground-truth answer, and confidence is reflected in the generation probability of the answer tokens (see Section § 2). Specifically, for a question q and a model M, the confidence c is computed as:

c = \frac{1}{T} \sum_{i=1}^{T} P(g_i \mid g_{<i}),   (1)

where \{g_1, \ldots, g_T\} are the generated tokens.
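Equation (1) is simply the arithmetic mean of the per-token generation probabilities. A minimal sketch (ours, for illustration; it assumes the inference API exposes per-token log-probabilities, and the function name is hypothetical):

import math

def answer_confidence(token_logprobs: list[float]) -> float:
    """Probabilistic confidence as in Eq. (1): the mean per-token
    probability P(g_i | g_<i), computed from log-probabilities."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

# e.g., three answer tokens with log-probs near zero give high confidence:
# answer_confidence([-0.05, -0.10, -0.02]) -> about 0.95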
4 How Does Knowledge Popularity Affect LLMs' Perception Level?

In this section, we investigate how PopQ, PopGT, and RPopGT influence LLMs' QA performance, confidence, and perception level.

Table 1: Sample counts for each dataset, along with the corresponding subject, relation, and object types.

Datasets     Count    Subject   Relation       Object
Movies       10,964   Movie     Directed by    Director
Songs        2,157    Song      Performed by   Performer
Basketball   13,309   Player    Birthplace     City

4.1 Experimental Setup

Datasets. Yuksekgonul et al. (2023) constructed entity-centric QA datasets based on Wikidata2, using the number of sitelinks on a page as a proxy for entity popularity. They showed that this measure strongly correlates with an entity's frequency in the training data. Building on this, we conduct experiments on their datasets to ensure reliable entity popularity measurement. We select three representative datasets—Movies, Songs, and Basketball—because they exhibit clear differences in knowledge popularity. Specifically, question popularity ranks as Movies > Songs > Basketball, while ground-truth answer popularity follows Movies < Songs < Basketball. Table 1 lists the knowledge triplets and data counts for each dataset, and Figure 1 presents example questions. We apply data filtering to ensure reliable results, as detailed in Section § A.

2 https://query.wikidata.org/sparql

Table 2: LLMs' QA performance, confidence, and alignment, and the correlations between knowledge popularity and accuracy, confidence, and alignment across different datasets.

                          Accuracy                          Confidence                         Alignment
Datasets    Models    Acc.    PopQ    PopGT   RPopGT    Conf.   PopQ    PopGT   RPopGT    Align.  PopQ    PopGT   RPopGT
Movies      Llama3    72.65   0.317   0.220   0.357     90.68   0.404   0.367   0.509     75.50   0.404   0.347   0.501
            Qwen2     42.85   0.433   0.299   0.494     82.32   0.413   0.371   0.507     53.63   0.386   0.279   0.440
            ChatGPT   94.78   0.134   0.069   0.130     98.80   0.210   0.230   0.280     94.85   0.211   0.228   0.279
Songs       Llama3    38.97   0.277   0.164   0.517     79.74   0.369   0.210   0.502     53.04   0.182   0.093   0.361
            Qwen2     25.82   0.362   0.255   0.541     78.00   0.300   0.200   0.345     42.97   0.230   0.180   0.392
            ChatGPT   73.36   0.171   0.266   0.399     94.84   0.249   0.295   0.381     75.28   0.232   0.340   0.399
Basketball  Llama3    13.37   0.118   0.293   0.231     60.09   0.173   0.063   0.055     46.21   -0.052  0.104   0.097
            Qwen2     9.90    0.014   0.348   0.151     74.76   0.151   0.076   0.009     32.35   0.126   0.189   0.105
            ChatGPT   34.89   0.288   0.215   0.353     79.06   0.351   0.054   0.270     50.43   0.201   0.164   0.303

Entity Popularity. Following Mallen et al. (2023); Yuksekgonul et al. (2023), we define the popularity of an entity by the number of sitelinks it has—i.e., the number of Wikipedia pages in different languages that link to it.

Relation Popularity. As Wikipedia is the primary high-quality source for Wikidata, we estimate relation popularity from Wikipedia content. Specifically, for each entity pair, we measure relation popularity by counting the number of documents in the Wikipedia dump3 in which both entities co-occur. This reflects relation popularity in the model's training data, as it shows a strong correlation with QA performance (see Table 2).

3 https://huggingface.co/datasets/wikimedia/wikipedia

Table 3: Definitions of notations for knowledge popularity, where "pop." means popularity.

Notation   Definition
PopQ       Popularity of entities in the question
PopGT      Popularity of entities in the ground-truth answer
RPopGT     Relation pop. between question and ground-truth entities
PopGe      Popularity of entities in the generated answer
RPopGe     Relation pop. between question and generated entities

LLMs. We conduct experiments on three representative LLMs: two open-source models, Llama3-8B-Instruct (Dubey et al., 2024) and Qwen2-7B-Instruct (Yang et al., 2024), as well as a black-box model, ChatGPT (i.e., GPT-3.5-Turbo-1106) (Achiam et al., 2023).

Answer Generation. For all models, we use greedy search, selecting the token with the highest probability at each generation step. An example can be seen in Figure 20.

Metrics. For each question q_i, we measure answer correctness using accuracy acc_i, where the generated answer is considered correct if it contains the ground-truth answer. The model's confidence c_i is defined as the generation probability of the answer, as described in Section § 3. Alignment is then computed as 1 − |acc_i − c_i|. To examine the relationship between knowledge popularity and QA performance, confidence, and alignment, we use Spearman correlation coefficients (Hauke and Kossowski, 2011), which range from −1 to 1. The sign indicates the direction of the correlation, while the absolute value reflects its strength.
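The alignment and correlation computations above are straightforward to reproduce. A minimal sketch (ours, for illustration; the array contents are made up, and we assume SciPy's spearmanr, which returns the coefficient together with a p-value):

import numpy as np
from scipy.stats import spearmanr

acc = np.array([1, 0, 1, 1, 0], dtype=float)      # per-question correctness acc_i
conf = np.array([0.97, 0.88, 0.92, 0.70, 0.95])   # per-question confidence c_i
pop = np.array([54, 12, 33, 8, 41], dtype=float)  # a popularity signal, e.g. RPopGT

alignment = 1.0 - np.abs(acc - conf)              # per-question alignment
rho_acc, _ = spearmanr(pop, acc)                  # popularity vs. QA performance
rho_conf, _ = spearmanr(pop, conf)                # popularity vs. confidence
rho_align, _ = spearmanr(pop, alignment)          # popularity vs. alignment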
4.2 Results and Analysis

LLMs' QA performance, confidence, and perception levels across the different datasets, along with the Spearman correlation coefficients between knowledge popularity and accuracy, confidence, and alignment, are shown in Table 2. We observe that:

1) LLMs achieve better QA performance and higher confidence on more popular knowledge. All three types of popularity are positively correlated with both QA performance and confidence. We analyze QA performance and confidence in turn.

For QA performance, relation popularity generally shows the strongest correlation, as expected, while question popularity exhibits a stronger correlation than answer popularity in most cases. This suggests that learning through co-occurrence is especially effective for acquiring knowledge, and that familiarity with the question contributes more to answering correctly than familiarity with the answer. However, on the Basketball dataset, answer popularity shows the highest correlation for both LLaMA3 and Qwen2. This dataset is challenging because both the question and relation have low popularity. The models are often unfamiliar with the question entity and generate a popular city name instead—a behavior consistent with knowledge overshadowing (Zhang et al., 2024b). ChatGPT does not exhibit this pattern, likely due to a stronger mastery of the relevant knowledge.

Figure 2: The QA performance, confidence, and alignment of ChatGPT under different question popularity (panels: Movies, Songs, Basketball).

For confidence, question and relation popularity are strongly correlated with it in most cases, while answer popularity has a weaker impact. Notably, question popularity consistently correlates strongly with confidence and, in 7 of 9 cases, more strongly than with QA performance. This suggests that LLMs may become more confident simply because of familiarity with the question, even if they do not know the answer. On the Basketball dataset, confidence shows little correlation with answer popularity across all three models. We hypothesize that the models are generally familiar with city names, and thus do not exhibit higher confidence for samples with more common answers.

2) LLMs better perceive their knowledge boundaries on more popular knowledge. To better understand this, we analyze how the gap between confidence and QA performance changes as knowledge popularity increases. Due to space constraints, we present this gap only for ChatGPT as question popularity increases, shown in Figure 2. We observe that although LLMs are consistently overconfident, their QA performance improves more rapidly than their confidence as question popularity increases, thereby narrowing the gap. Results for other models, as well as analyses based on the other popularity measures, are included in the Appendix and exhibit similar trends. As shown in Table 2, among the three types of popularity, relation popularity typically shows the strongest correlation.

Figure 3: Comparison of the correlation between ChatGPT's QA performance and ground-truth vs. generated answers: PopGT vs. PopGe, and RPopGT vs. RPopGe.

5 Analysis of Model-Generated Answers

In real-world scenarios, ground-truth entities are often unavailable. This motivates us to investigate whether the popularity of model-generated entities—along with their relational popularity with the question entity—correlates with the model's QA performance, confidence, and perception level. We focus particularly on the relationship between popularity and QA performance, as a strong correlation could enable us to leverage these signals for confidence calibration. The experimental settings are the same as those in Section § 4.
Table 4: Correlations between LLMs' QA performance and PopQ, PopGe, and RPopGe.

Datasets     Models    PopQ    PopGe    RPopGe
Movies       Llama3    0.317   0.100    0.637
             Qwen2     0.433   0.087    0.756
             ChatGPT   0.134   0.083    0.208
Songs        Llama3    0.277   0.257    0.621
             Qwen2     0.362   0.188    0.666
             ChatGPT   0.171   0.218    0.351
Basketball   Llama3    0.118   0.116    0.245
             Qwen2     0.014   0.116    0.106
             ChatGPT   0.288   -0.164   0.293

5.1 Results and Analysis

Table 4 shows the Spearman correlation coefficients between LLMs' QA performance and knowledge popularity based on model-generated entities.
Due to space constraints, results on LLMs' confidence and perception levels are provided in Table 7 in the Appendix. We observe the following.

The popularity of generated entities (PopGe) and their co-occurrence with question entities (RPopGe) positively correlate with LLMs' QA performance, confidence, and perception level in most cases. RPopGe typically shows the strongest correlation, outperforming both PopQ and PopGe. In contrast, PopGe often exhibits the weakest correlation. These findings are similar to the results based on ground-truth entities, as discussed in Section § 4.

PopGe shows a weaker correlation with QA performance compared to PopGT, while RPopGe exhibits a comparable or even stronger correlation than RPopGT. We present this comparison for ChatGPT in Figure 3; results for other models can be obtained by comparing Table 2 and Table 4. To better understand this, we perform a more detailed comparison between model-generated answers and ground-truth answers. We focus only on cases where the model makes mistakes, since otherwise the generated answer matches the ground-truth answer; the analysis is in Section § 5.2.

5.2 What Do LLMs Generate When They Hallucinate?

We focus on the differences in popularity between model-generated answers and ground-truth answers when the model makes errors (see Figure 4), as well as the differences in their co-occurrence frequency with the question entity (see Figure 5).

LLMs tend to generate entities that are more common, and that co-occur less frequently with the question entities, than ground-truth answers when they hallucinate. As shown in Figure 4, for incorrectly answered samples, the generated entities are often more popular than the ground-truth entities. The popularity gap between correct and incorrect answers is therefore smaller for generated entities than for ground-truth entities, leading to a weaker correlation between PopGe and QA performance. This may be because the model tends to overgeneralize—once it learns high-frequency answers, it tends to use them in many similar contexts.

Figure 4: PopGT and PopGe in incorrectly answered samples, and PopGT (also PopGe) in correctly answered samples.

In incorrectly answered samples, model-generated answers typically co-occur less frequently with question entities than ground-truth answers do, as shown in Figure 5. As a result, the correlation between RPopGe and QA performance is stronger than that of RPopGT, because the difference in RPopGe between correct and incorrect samples is greater than that in RPopGT.

Figure 5: Proportion of incorrectly answered samples where RPopGe is less than RPopGT.

6 Confidence Calibration with Knowledge Popularity

Given that PopQ, PopGe, and RPopGe—especially RPopGe—are strongly correlated with QA performance, we propose to use these signals for confidence calibration, i.e., to improve the effectiveness of confidence in predicting answer correctness. Since obtaining these signals typically requires external corpora and incurs additional costs, we also explore whether LLMs can assess knowledge familiarity on their own.

6.1 Knowledge Popularity Acquisition

Corpora-based Popularity. As outlined in Section § 3, we obtain popularity from external corpora.
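For the corpora-based signals, entity popularity is a sitelink count read off Wikidata, and relation popularity is a document co-occurrence count. A minimal sketch of the co-occurrence count (ours, for illustration; it assumes the Wikipedia dump is available as an iterable of document strings, and uses naive substring matching, which is cruder than whatever entity matching a production pipeline would use):

def relation_popularity(question_entity: str, answer_entity: str, documents) -> int:
    """Count documents in which both entities are mentioned together."""
    return sum(
        1
        for text in documents
        if question_entity in text and answer_entity in text
    )

# e.g., relation_popularity("The Celluloid Closet", "Rob Epstein", wikipedia_docs)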
Model-generated Popularity. To eliminate reliance on external corpora and reduce the overhead of collecting popularity, we investigate whether LLMs can self-assess their familiarity with a given entity or relation.
Familiarity is measured on a 10-point scale, where 1 denotes the lowest and 10 the highest level, and the model is asked to provide its familiarity score accordingly. We provide the model with varying numbers of corpora-based popularity examples to examine whether supplying such examples helps the model produce more accurate familiarity estimates. We present examples under both zero-shot and few-shot settings; all of these prompts can be found in Section § F in the Appendix.

6.2 Baselines

We use representative confidence estimation methods that do not require access to model parameters as our baselines.

• Verbalized Confidence (Verb) (Yin et al., 2023) instructs the model to verbally assess whether it can answer the question correctly. The prompt can be seen in Figure 21 in the Appendix.
• Self-consistency (Consis) (Manakul et al., 2023) estimates the model's confidence by measuring the semantic consistency of multiple sampled answers. The core idea is that if the model knows the correct answer, multiple sampled answers should be semantically consistent. For each question, we sample 10 additional responses with a temperature of 1.0. For ChatGPT, due to cost constraints, we limit sampling to 3 additional responses. Semantic consistency is assessed using Qwen2.5-32B-Instruct (Yang et al., 2024).
• Probabilistic Confidence (PC) (Kumar et al., 2024) takes the generation probabilities of the tokens in the answer as the model's confidence. This is the confidence we have been using throughout the paper, and the one we aim to calibrate. Details can be seen in Section § 3.

6.3 Data Construction

For each sample in the datasets, we construct a data pair {x, y}, where y is a binary correctness label: y = 1 if the generated answer contains the ground-truth answer, and y = 0 otherwise. To study the effect of each type of popularity on confidence calibration, we construct x using the following features: 1) PC + PopQ, 2) PC + PopGe, 3) PC + RPopGe, and 4) PC + ALL — PC and all three types of popularity. We also use each type of popularity by itself as x to study its effectiveness in answer correctness prediction.

6.4 Answer Correctness Prediction

Based on the constructed x, we predict a binary correctness judgment, as shown below:

\hat{y} = E(x),   (2)

where E represents the binary classification function and \hat{y} is the predicted correctness (i.e., 1 for correct and 0 for incorrect).

Single-feature Prediction. For the setting where x contains only a single feature, we select a threshold λ that maximizes prediction accuracy on the training set, and apply this threshold to perform binary classification on the test data. This can be formulated as:

\hat{y} = \begin{cases} 1 & \text{if } x > \lambda, \\ 0 & \text{otherwise}, \end{cases}   (3)

Multi-feature Prediction. For the setting where x contains multiple features, we perform binary classification using a lightweight MLP network, as defined below:

P(\hat{y} = 1) = \sigma(\mathrm{MLP}(x)),   (4)

where σ refers to the softmax function, x ∈ R^{d×h} represents the input features, d is the count of input features (e.g., d = 2 for PC + PopQ), and h is the model's hidden dimension. We use a 3-layer MLP with 64, 32, and 2 neurons in each layer, respectively. The activation function in the MLP is ReLU.
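A minimal PyTorch sketch of this classifier (ours, for illustration; the class name is hypothetical, we assume a flat feature vector per sample, and the dropout value follows the training details in Section § D):

import torch.nn as nn

class CalibrationMLP(nn.Module):
    """3-layer MLP (64, 32, 2) with ReLU, as described in Section 6.4."""
    def __init__(self, num_features: int, dropout: float = 0.4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Dropout(dropout),  # intermediate-layer dropout (Section § D)
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 2),     # two logits; softmax yields P(y-hat = 1)
        )

    def forward(self, x):
        return self.net(x)

# e.g., model = CalibrationMLP(num_features=4)  # PC + PopQ + PopGe + RPopGe
# Training with nn.CrossEntropyLoss over the two logits matches the
# binary cross-entropy objective in Eq. (5) below.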
We employ cross-entropy loss as the training objective:

\mathcal{L}_{CE} = -\sum_{i=1}^{N} \big[ y_i \log(P_i) + (1 - y_i) \log(1 - P_i) \big],   (5)

where y_i is the ground-truth correctness for the i-th training sample, N is the count of training samples, and P_i denotes P(\hat{y}_i = 1). Detailed training parameters can be found in Section § D.

Metrics. We use answer correctness prediction accuracy as the metric. To reduce the impact of randomness, all reported results are averages over three random seeds: (0, 42, 100).

Datasets and LLMs. We use the same data and LLMs as in Section § 4.1. We randomly split each dataset into two equal parts for training and testing, and select the checkpoint with the highest prediction accuracy on the training set. Detailed settings can be found in Section § D in the Appendix.

Table 5: Accuracy of answer correctness prediction. Bold denotes the highest score in either corpora-based or self-generated knowledge popularity. Self-generated knowledge popularity is obtained under the zero-shot setting.

                        Movies                       Songs                        Basketball              Avg.
Features        Llama3   Qwen2   ChatGPT     Llama3   Qwen2   ChatGPT     Llama3   Qwen2   ChatGPT
Baselines
Verb            65.58    45.93   83.41       40.22    29.58   69.25       51.58    50.49   48.89     53.88
Consis          82.21    74.61   96.00       77.62    86.31   83.72       53.76    52.10   77.77     76.01
PC              83.20    79.77   95.95       75.20    83.02   79.11       65.49    66.36   77.87     78.44
Corpora-based Knowledge Popularity
PopQ            71.68    70.62   88.24       66.35    76.84   68.00       56.25    50.90   69.32     68.69
PopGe           73.09    58.86   94.22       63.38    74.57   74.41       60.54    60.33   64.86     69.36
RPopGe          89.66    87.92   96.03       82.71    89.59   81.46       67.03    59.64   64.74     79.86
PC+PopQ         83.57    81.36   95.97       76.60    84.58   79.11       65.93    66.95   78.39     79.16
PC+PopGe        84.46    80.49   95.58       76.68    83.57   80.12       69.04    68.64   78.62     79.69
PC+RPopGe       90.93    88.58   96.13       80.21    90.46   84.04       71.93    66.33   78.10     82.97
PC+ALL          93.32    92.47   96.37       81.46    88.11   82.71       71.93    68.18   78.59     83.68
Self-generated Knowledge Popularity
PC+PopQ         83.91    80.60   95.87       77.85    84.82   79.50       65.30    67.31   78.43     79.29
PC+PopGe        84.02    80.24   95.59       75.20    83.02   78.56       68.40    67.49   78.21     78.97
PC+RPopGe       85.30    80.20   95.80       79.65    84.04   79.81       66.17    67.90   77.59     79.61
PC+ALL          85.95    81.40   95.84       78.87    86.07   80.05       67.69    68.08   78.70     80.29

6.5 Results and Analysis

Results on corpora-based knowledge popularity. Results based on knowledge popularity from external corpora are shown in the upper half of Table 5. We observe that: 1) Compared to the model's confidence, RPopGe more accurately reflects answer correctness, outperforming all baselines in 6 out of 9 cases. In contrast, PopQ and PopGe individually show limited effectiveness in predicting correctness. 2) All three types of popularity contribute to calibrating the model's confidence, with their combination yielding the most effective results. In most cases, augmenting PC with each type of popularity improves upon PC, with PC+RPopGe achieving the highest average accuracy among them. Notably, combining all three types leads to the most effective calibration, consistently outperforming PC and yielding an average accuracy improvement of 5.24% across diverse datasets and models. Further analysis and case studies are provided in Section § E.

Results on model-generated knowledge popularity. The prediction accuracy based on model self-generated knowledge popularity under the zero-shot setting can be found in the lower half of Table 5.
It shows that: 1) All three types of self-generated popularity contribute to confidence calibration. On average, all three signals can calibrate PC, and their combination achieves the best calibration effect, obtaining the optimal value in 6 out of 9 cases. However, the model's self-generated signals yield weaker calibration effects compared to corpus-based knowledge popularity. The choice between corpus-based popularity and self-generated popularity depends on the trade-off between effectiveness and efficiency. 2) LLMs cannot estimate popularity better with few-shot learning compared to zero-shot. A detailed analysis can be found in Section § C in the Appendix.

7 Conclusion

In this paper, we investigate how knowledge popularity—measured through entity and relation popularity—affects LLMs' QA performance, confidence, and perception of their knowledge boundaries, and explore its utility for confidence calibration. We find that LLMs perform better, express higher confidence, and demonstrate more accurate perception on more popular knowledge, with relation popularity having the strongest influence. We further show that the popularity and co-occurrence of model-generated answers also positively correlate with QA accuracy. Leveraging these popularity signals for confidence calibration yields an average 5.24% improvement in predicting answer correctness. To reduce reliance on external corpora, we also demonstrate that model-estimated popularity can serve as a viable alternative, offering a practical trade-off between performance and efficiency.

Limitations

First, due to resource limitations, we are unable to conduct experiments on more models. However, we utilize mainstream existing models, which still demonstrates the reliability of our conclusions. Second, we primarily focus on entity-centric factual QA. For other types of factual reasoning that do not involve entities, and for non-factual questions, how to represent their knowledge popularity remains an open question. We believe that if the model's actual training data were available, computing knowledge popularity from the training data would likely yield greater improvements.

Ethics Statement

We approach ethics with great care. In this paper, all the datasets and models we use are open-source. Our analysis of knowledge popularity does not introduce any harmful information. Moreover, our proposed method can help accurately determine whether the model's answer is trustworthy, preventing users from being misled by incorrect responses.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024. INSIDE: LLMs' internal states retain the power of hallucination detection. arXiv preprint arXiv:2402.03744.

Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. arXiv preprint arXiv:2003.07892.

Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, and Xueqi Cheng. 2024. Retrieve only when it needs: Adaptive retrieval augmentation for hallucination mitigation in large language models. arXiv preprint arXiv:2402.10612.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR.

Jan Hauke and Tomasz Kossowski. 2011. Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae, 30(2):87–93.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.

Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? On the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.

Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664.

Abhishek Kumar, Robert Morabito, Sanzhar Umbet, Jad Kabbara, and Ali Emami. 2024. Confidence under the hood: An investigation into the confidence-probability alignment in large language models. arXiv preprint arXiv:2405.16282.

Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics.

Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.

Shiyu Ni, Keping Bi, Jiafeng Guo, and Xueqi Cheng. 2024a. When do LLMs need retrieval augmentation? Mitigating LLMs' overconfidence helps retrieval augmentation. arXiv preprint arXiv:2402.11457.

Shiyu Ni, Keping Bi, Jiafeng Guo, Lulu Yu, Baolong Bi, and Xueqi Cheng. 2025. Towards fully exploiting LLM internal states to enhance knowledge boundary perception. arXiv preprint arXiv:2502.11677.

Shiyu Ni, Keping Bi, Lulu Yu, and Jiafeng Guo. 2024b. Are large language models more honest in their probabilistic or verbalized confidence? arXiv preprint arXiv:2408.09773.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.

Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, and Lijuan Wang. 2022. Prompting GPT-3 to be reliable. arXiv preprint arXiv:2210.09150.

Weihang Su, Changyue Wang, Qingyao Ai, Yiran Hu, Zhijing Wu, Yujia Zhou, and Yiqun Liu. 2024. Unsupervised real-time hallucination detection based on the internal states of large language models. arXiv preprint arXiv:2403.06448.
Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975.

Yanling Wang, Haoyang Li, Hao Zou, Jing Zhang, Xinlei He, Qi Li, and Ke Xu. 2024. Hidden question representations tell non-factuality within and across large language models. arXiv e-prints, pages arXiv–2406.

Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs. arXiv preprint arXiv:2306.13063.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.

Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, and Pengfei Liu. 2023. Alignment for honesty. arXiv preprint arXiv:2312.07000.

Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. 2023. Do large language models know what they don't know? arXiv preprint arXiv:2305.18153.

Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, and Besmira Nushi. 2023. Attention satisfies: A constraint-satisfaction lens on factual errors of language models. arXiv preprint arXiv:2309.15098.

Hanning Zhang, Shizhe Diao, Yong Lin, Yi Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, and Tong Zhang. 2024a. R-Tuning: Instructing large language models to say 'I don't know'. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7106–7132.

Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley A Malin, and Sricharan Kumar. 2023. SAC3: Reliable hallucination detection in black-box language models via semantic-aware cross-check consistency. arXiv preprint arXiv:2311.01740.

Yuji Zhang, Sha Li, Jiateng Liu, Pengfei Yu, Yi R Fung, Jing Li, Manling Li, and Heng Ji. 2024b. Knowledge overshadowing causes amalgamated hallucination in large language models. arXiv preprint arXiv:2407.08039.

A Data Filtering

The model's generated response may be empty or may fail to match a corresponding entity in Wikidata. To ensure comparability of results across different models on the same dataset, we filter out data where any model's generation is empty or the generated entity cannot be found in Wikidata. Additionally, for the Movies and Songs datasets, we filter out cases where the question entity, ground-truth entity, or model-generated entity appears in more than 6,000 documents. This is because entities in these two datasets typically do not appear in more than 6,000 documents, and those that exceed this threshold often introduce noise. For example, "Queen" appears more than 6,000 times but is not exclusively used as a band name. We filter these cases to obtain accurate co-occurrence counts. After filtering, the remaining data sizes for the Movies, Songs, and Basketball datasets are 8,184, 852, and 13,136, respectively.

Figure 6: QA performance and NMI calculated based on ChatGPT. R-Pop means relation popularity, where P(R|Q) and P(R|A) denote the co-occurrence proportion of question and answer entities relative to their individual occurrences in documents.
B Analysis on Relationship Strength

We hypothesize that the strength of the relationship between entities may also influence the model's learning. Specifically, when the subject and object frequently co-occur but are also commonly associated with other entities, the model may struggle to learn their specific relationship. We use normalized mutual information (NMI) to quantify relationship strength and find that a stronger relationship contributes to better QA performance. Normalized mutual information is computed as:

\mathrm{NMI}(X, Y) = \frac{I(X; Y)}{\sqrt{H(X)\,H(Y)}},   (6)

where I(X, Y) is defined as:

I(X, Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)},   (7)

and H(X) and H(Y) serve as regularization terms to mitigate the influence of the sizes of n and m, as well as the magnitude of the probability values. Their formulations are:

H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i),   (8)

H(Y) = -\sum_{j=1}^{m} P(y_j) \log P(y_j).   (9)

Specifically, for a dataset converted from knowledge triplets D = \{(s_i, r, o_i)\}_{i=1}^{n}, we define X = \{s_1, \ldots, s_n\} and Y = \{o_1, \ldots, o_n\}. We estimate P(s_i), P(o_j), and P(s_i, o_j) using a Wikipedia dump of d documents, where P(s_i) and P(o_j) are the proportions of documents containing s_i and o_j, respectively, and P(s_i, o_j) is the proportion containing both. If i ≠ j, we set P(s_i, o_j) = 0, as we focus only on the relationship between s_i and o_i.

The results based on ChatGPT are shown in Figure 6. We observe that, compared to the Movies dataset, the question entity co-occurs more frequently with the ground-truth entity in the Songs dataset, yet the model's QA performance is lower. This can be attributed to the lower NMI in the Songs dataset, driven by a low P(R|A). This indicates that, besides the question entity, the answer entity also co-occurs with many other entities through various relations. This may interfere with the model's memory of the relationship between the question entity and the answer entity.
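Equations (6)–(9) are straightforward to compute once the document-proportion probabilities are in hand. A minimal sketch (ours, for illustration; the probability inputs are assumed to be precomputed from the Wikipedia dump as described above):

import math

def entropy(p: list[float]) -> float:
    """H(X) as in Eqs. (8)-(9); zero-probability terms are skipped."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mutual_information(p_xy, p_x, p_y) -> float:
    """I(X, Y) as in Eq. (7); p_xy[i][j] = P(x_i, y_j)."""
    mi = 0.0
    for i, row in enumerate(p_xy):
        for j, pij in enumerate(row):
            if pij > 0:
                mi += pij * math.log(pij / (p_x[i] * p_y[j]))
    return mi

def nmi(p_xy, p_x, p_y) -> float:
    """NMI(X, Y) as in Eq. (6)."""
    return mutual_information(p_xy, p_x, p_y) / math.sqrt(entropy(p_x) * entropy(p_y))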
C Effects of Few-Shot Learning on Popularity Generation

Example selection. For a given dataset, we sort all samples by popularity in ascending order, remove duplicates, and divide the popularity values into 10 equal intervals, assigning values from 1 to 10 in ascending order. Each sample is then assigned to its corresponding interval, updating its popularity accordingly. For 3-shot, we randomly select one sample each from the intervals with popularity values 2, 5, and 8. For 5-shot, we randomly select one sample each from the intervals with popularity values 1, 3, 5, 7, and 9. For 10-shot, we randomly select one sample from each of the 10 intervals.

Figure 7: The prediction accuracy obtained by performing confidence calibration using knowledge popularity generated from different numbers of examples. Each point represents the average prediction accuracy of the model across three datasets.

Results. Figure 7 shows the average prediction accuracy of Conf-QG-R across three datasets using model-generated popularity under different shot settings. As the number of examples increases, prediction accuracy does not improve, while inference cost rises. Therefore, we recommend prompting LLMs to assess their familiarity with entities and their relationships in a zero-shot setting. Due to API costs, we first conducted experiments on LLaMA3-8B-Instruct and Qwen2-7B-Instruct and found that increasing the number of samples in the prompt does not yield more effective knowledge popularity; we therefore only perform 0-shot and 3-shot experiments on ChatGPT.

D Detailed Parameter Settings

Inference. For all models, we use greedy search, selecting the token with the highest probability at each generation step. For open-source models, our experiments are conducted on a single 80GB A800 GPU.

MLP Training. For both corpora-based and model-generated popularity, we train the model using the Adam optimizer with a learning rate of 2e-3 and a batch size of 8. The intermediate layer has a dropout rate of 0.4, and training runs for 100 epochs. All experiments are conducted on two 16GB V100 GPUs. We select the checkpoint with the highest prediction accuracy on the training set for evaluation on the test set.

Class Balancing for the Basketball Dataset. Since the MLP fails to learn meaningful patterns on the Basketball dataset for Llama3 and Qwen2—consistently classifying all samples as incorrect due to the overwhelming class imbalance—we extract all correctly answered samples and randomly sample an equal number of incorrect ones (seed = 0) to ensure balanced learning across both classes. The training set and the test set are evenly split from the sampled dataset.

E Case Studies

We compare PC and PC+ALL on LLaMA3 for answer correctness prediction to illustrate how knowledge popularity works in confidence calibration. The imperfect alignment between the model's confidence and its actual performance arises from two main factors:

• Overconfidence: The model generates incorrect answers with high confidence. When classification relies on generation probabilities, such answers are incorrectly labeled as correct.
• Conservativeness: The model generates correct answers with low confidence. When classification relies on generation probabilities, such answers are incorrectly labeled as incorrect.

We collect the samples misclassified by PC but successfully calibrated by PC+ALL. These fall into two categories:

• Overconfidence Group: Samples where the model generates an incorrect answer, PC incorrectly classifies them as correct, and PC+ALL correctly identifies them as incorrect.
• Conservativeness Group: Samples where the model generates a correct answer, PC incorrectly classifies them as incorrect, and PC+ALL correctly identifies them as correct.

We compute the knowledge popularity for each group; the results appear in Table 6. They show that in the overconfidence group, PC+ALL achieves calibration by leveraging low knowledge popularity despite the model's high confidence, whereas in the conservativeness group, it achieves calibration through high knowledge popularity.

Although PC+ALL achieves strong calibration performance, it also introduces some over-calibration issues by misclassifying samples that were correctly predicted by PC, as shown in Figure 8. However, the number of correctly calibrated samples significantly exceeds the over-calibrated ones. Moreover, we show some cases on the Movies dataset for Llama3. Figures 9 and 10 illustrate cases where knowledge popularity effectively calibrated the model's confidence, while Figure 11 shows a failure case. All the results in this section are obtained with seed = 0.
E Case Studies

We compare PC and PC+ALL on LLaMA3 for answer correctness prediction to illustrate how knowledge popularity works in confidence calibration. The imperfect alignment between the model's confidence and its actual performance arises from two main factors:

• Overconfidence: The model generates incorrect answers with high confidence. When classification relies on generation probabilities, such answers are incorrectly labeled as correct.
• Conservativeness: The model generates correct answers with low confidence. When classification relies on generation probabilities, such answers are incorrectly labeled as incorrect.

We collect the samples misclassified by PC but successfully calibrated by PC+ALL. These fall into two categories:

• Overconfidence Group: Samples where the model generates an incorrect answer, PC incorrectly classifies them as correct, while PC+ALL correctly identifies them as incorrect.
• Conservativeness Group: Samples where the model generates a correct answer, PC incorrectly classifies them as incorrect, while PC+ALL correctly identifies them as correct.

We compute the knowledge popularity for each group, and the results appear in Table 6. The results show that in the overconfidence group, PC+ALL achieves calibration by leveraging low knowledge popularity despite the model's high confidence. In contrast, in the conservativeness group, it achieves calibration through high knowledge popularity.

Although PC+ALL achieves strong calibration performance, it also introduces some over-calibration issues by misclassifying samples that were correctly predicted by PC, as shown in Figure 8. However, the number of correctly calibrated samples significantly exceeds the over-calibrated ones. Moreover, we show some cases on the Movies dataset for Llama3. Figures 9 and 10 illustrate cases where knowledge popularity effectively calibrated the model's confidence, while Figure 11 shows a failure case. All the results in this section are obtained with seed = 0.

Figure 8: The difference in answer correctness prediction on LLaMA3 between using PC+ALL and using PC. Blue indicates that both methods make the same prediction, yellow indicates cases where only PC+ALL predicts correctly, and red indicates cases where only PC predicts correctly.

Table 6: Knowledge popularity of samples that are misclassified by PC but correctly classified by PC+ALL. Overc. refers to the Overconfidence group, in which the model generates an incorrect answer but PC classifies it as correct. Conse. refers to the Conservativeness group, in which the model generates a correct answer but PC classifies it as incorrect.

Dataset      Group    PC     Q-Pop     G-Pop    R-Pop
Movies       Overc.   0.91   47.05     20.08    1.03
             Conse.   0.78   24.34     23.22    12.91
Songs        Overc.   0.91   38.91     13.04    18.89
             Conse.   0.78   131.00    15.50    103.00
Basketball   Overc.   0.78   102.68    10.69    0.97
             Conse.   0.62   234.83    10.59    11.47

From Figure 10, we can see that the model generated an incorrect answer with a probabilistic confidence of 0.95, which is significantly higher than the classification threshold for confidence (>0.85), leading to it being classified as correct. However, knowledge popularity reveals that the question pop, generated answer pop, and relation pop are 16, 20, and 1, respectively, all below the dataset's average levels. This indicates that both the question and the generated entity are relatively uncommon and rarely co-occur. As a result, the classification outcome was corrected to incorrect. Similarly, in Figure 9, the model exhibits low probabilistic confidence for a correctly generated answer, leading to a misclassification as incorrect. However, its knowledge popularity was relatively high, resulting in a correction to the correct classification.

Question: Who is the director of the movie The Star Maker
Ground-Truth Answer: Giuseppe Tornatore
Generated Answer: Giuseppe Tornatore
Correctness: ✅  Confidence: 0.68  Confidence Threshold: 0.85  Correctness Prediction: ❌
Knowledge Popularity: [Q-Pop: 15, G-Pop: 62, R-Pop: 15]
Average Popularity: [Q-Pop: 26, G-Pop: 40, R-Pop: 15]
Correctness Prediction After Calibration: ✅
Figure 9: The case where Llama3-8B generates correct answers with low confidence on the Movies dataset and is corrected by high knowledge popularity.

Question: Who is the director of the movie Itinéraire d'un enfant gâté
Ground-Truth Answer: Claude Lelouch
Generated Answer: Coline Serreau
Correctness: ❌  Confidence: 0.95  Confidence Threshold: 0.85  Correctness Prediction: ✅
Knowledge Popularity: [Q-Pop: 16, G-Pop: 20, R-Pop: 1]
Average Popularity: [Q-Pop: 26, G-Pop: 40, R-Pop: 15]
Correctness Prediction After Calibration: ❌
Figure 10: The case where Llama3-8B generates incorrect answers with high confidence on the Movies dataset and is corrected by low knowledge popularity.
Figure 11 presents a case where the calibration itself introduces an error. While similar misclassifications may occur, the proportion of correctly corrected samples (6.0%) is significantly higher than that of miscalibrated ones (1.2%), demonstrating the reliability of knowledge popularity in confidence calibration.

Question: Who is the director of the movie The Celluloid Closet
Ground-Truth Answer: Rob Epstein
Generated Answer: Rob Epstein
Correctness: ✅  Confidence: 0.99  Confidence Threshold: 0.85  Correctness Prediction: ✅
Knowledge Popularity: [Q-Pop: 16, G-Pop: 15, R-Pop: 0]
Average Popularity: [Q-Pop: 26, G-Pop: 40, R-Pop: 15]
Correctness Prediction After Calibration: ❌
Figure 11: The case where Llama3-8B generates correct answers with high confidence on the Movies dataset and is misled by low knowledge popularity.
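To make the comparison concrete, here is a hedged sketch of the two predictors as we read them from this appendix: PC thresholds the generation confidence, while PC+ALL feeds popularity features to the trained classifier. The exact feature layout is our assumption.

```python
import torch

CONF_THRESHOLD = 0.85  # the classification threshold used in the case studies

def pc_predict(confidence: float) -> bool:
    """PC: label the answer correct iff generation confidence clears the threshold."""
    return confidence > CONF_THRESHOLD

def pc_all_predict(mlp, confidence, q_pop, g_pop, r_pop) -> bool:
    """PC+ALL: the classifier also sees the popularity features, so a
    high-confidence answer with low popularity can be flipped to incorrect,
    and a low-confidence answer with high popularity can be flipped to correct."""
    x = torch.tensor([[confidence, q_pop, g_pop, r_pop]], dtype=torch.float32)
    return mlp(x).argmax(dim=-1).item() == 1
```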
F Prompts

We display all the prompts used in this paper here and show some examples.

QA prompt. We simply ask the model to give a short answer without any other words. An example is shown in Figure 20.

Prompts for knowledge popularity generation. Examples of instructing LLMs to provide question entity popularity, generated answer popularity, and the popularity of their relationship can be found in Figures 22–27.

Table 7: Spearman correlation coefficients for Accuracy, Confidence, and Alignment scores with the popularity of question entities (Q-Pop), generated entities (G-Pop), and their co-occurrence (Co-Occ).

                          Accuracy                 Confidence               Alignment
Dataset     Model         Q-Pop   G-Pop   Co-Occ   Q-Pop   G-Pop   Co-Occ   Q-Pop   G-Pop   Co-Occ
Movies      Llama3-8B     0.317   0.100   0.637    0.404   0.324   0.653    0.404   0.231   0.667
            Qwen2-7B      0.433   0.087   0.756    0.413   0.345   0.679    0.386   0.021   0.607
            ChatGPT       0.134   0.083   0.208    0.210   0.233   0.304    0.211   0.231   0.304
Songs       Llama3-8B     0.277   0.257   0.621    0.369   0.188   0.680    0.182   0.207   0.358
            Qwen2-7B      0.362   0.188   0.666    0.300   0.246   0.511    0.230   0.058   0.405
            ChatGPT       0.171   0.218   0.351    0.249   0.305   0.445    0.232   0.297   0.326
Basketball  Llama3-8B     0.118   0.116   0.245    0.173   -0.034  0.010    -0.052  0.083   0.163
            Qwen2-7B      0.014   0.116   0.106    0.151   0.114   0.068    -0.126  -0.015  0.018
            ChatGPT       0.288   -0.164  0.293    0.351   -0.210  0.257    0.201   -0.107  0.241

Figure 12: The QA performance, confidence, and alignment of Llama3 under different question popularity.
Figure 13: The QA performance, confidence, and alignment of Qwen2 under different question popularity.
Figure 14: The QA performance, confidence, and alignment of Llama3 under different answer popularity.
Figure 15: The QA performance, confidence, and alignment of Qwen2 under different answer popularity.
Figure 16: The QA performance, confidence, and alignment of ChatGPT under different answer popularity.
Figure 17: The QA performance, confidence, and alignment of Llama3 under different relation popularity.
Figure 18: The QA performance, confidence, and alignment of Qwen2 under different relation popularity.
Figure 19: The QA performance, confidence, and alignment of ChatGPT under different relation popularity.

Input: Answer the following question with one or few words.
Question: Who is the director of the movie The Intouchables
Answer:
Response: Eric Toledano
Figure 20: A question-answering example for Llama3.

Input: Judge whether the following answer (this is your self-generated answer) about the question is correct. If you are sure the answer is correct, say certain. If not, please say uncertain. Just give your judgement without any other words.
Question: Where is the birthplace of the basketball player Jiang Xingquan?
Answer: Beijing.
Response: Uncertain.
Figure 21: An example for verbalized confidence.

Question: Rate how familiar you are with the movie 'The Intouchables'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 8
Figure 22: An example of obtaining question popularity on the movies dataset using LLaMA3 in a zero-shot setting.

Question: Rate how familiar you are with the director 'Eric Toledano'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 4
Figure 23: An example of obtaining generated answer popularity on the movies dataset using LLaMA3 in a zero-shot setting.

Question: Rate how familiar you are with the relationship between the movie 'The Intouchables' and the director 'Eric Toledano'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with their relationship, and 1 means you know little to nothing about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 8
Figure 24: An example of obtaining relation popularity on the movies dataset using LLaMA3 in a zero-shot setting.
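A small sketch of how such a zero-shot popularity query might be issued programmatically; `llm_generate` is a hypothetical callable wrapping the model's generation API, and the integer-parsing fallback is our addition.

```python
POPULARITY_PROMPT = (
    "Rate how familiar you are with the {kind} '{name}'. The familiarity is "
    "rated on a scale from 1 to 10, where 10 means you are highly familiar "
    "with it, and 1 means you have little to no knowledge about it. Your "
    "answer needs to be a precise integer. Provide only the number, without "
    "any additional explanation.\nNumber:"
)

def query_popularity(llm_generate, kind, name):
    """Issue the zero-shot familiarity prompt and parse the integer reply."""
    reply = llm_generate(POPULARITY_PROMPT.format(kind=kind, name=name))
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else None

# e.g. query_popularity(generate, "movie", "The Intouchables") might return 8
```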
Question: Rate how familiar you are with the movie 'Swept Away'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Here are some examples:
The movie: Matchstick Men. Number: 2
The movie: Kick-Ass. Number: 5
The movie: Skyfall. Number: 8
Rate how familiar you are with the movie 'Swept Away'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 3
Figure 25: An example of obtaining question popularity on the movies dataset using ChatGPT in a 3-shot setting.

Question: Rate how familiar you are with the director 'Guy Ritchie'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Here are some examples:
The director: James McTeigue. Number: 2
The director: Guy Ritchie. Number: 5
The director: Jodie Foster. Number: 8
Rate how familiar you are with the director 'Guy Ritchie'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with it, and 1 means you have little to no knowledge about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 7
Figure 26: An example of obtaining answer popularity on the movies dataset using ChatGPT in a 3-shot setting.

Question: Rate how familiar you are with the relationship between the movie 'Swept Away' and the director 'Guy Ritchie'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with their relationship, and 1 means you know little to nothing about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Here are some examples:
The movie: Kick-Ass; The director: Matthew Vaughn. Number: 2
The movie: Eraserhead; The director: David Lynch. Number: 5
The movie: Heat; The director: Michael Mann. Number: 8
Rate how familiar you are with the relationship between the movie 'Swept Away' and the director 'Guy Ritchie'. The familiarity is rated on a scale from 1 to 10, where 10 means you are highly familiar with their relationship, and 1 means you know little to nothing about it. Your answer needs to be a precise integer. Provide only the number, without any additional explanation.
Number:
Response: 7
Figure 27: An example of obtaining relation popularity on the movies dataset using ChatGPT in a 3-shot setting.
Swedish Whispers; Leveraging a Massive Speech Corpus for Swedish Speech Recognition

Leonora Vesterbacka, Faton Rekathati, Robin Kurtz, Justyna Sikora, Agnes Toftgård
KBLab, National Library of Sweden, Sweden
{leonora.vesterbackaolsson, faton.rekathati, robin.kurtz, justyna.sikora, agnes.toftgard}@kb.se

Abstract

This work presents a suite of fine-tuned Whisper models for Swedish, trained on a dataset of unprecedented size and variability for this mid-resourced language. As languages of smaller sizes are often underrepresented in multilingual training datasets, substantial improvements in performance can be achieved by fine-tuning existing multilingual models, as shown in this work. This work reports an overall improvement across model sizes compared to OpenAI's Whisper evaluated on Swedish. Most notably, we report an average 47% reduction in WER comparing our best performing model to OpenAI's whisper-large-v3, in evaluations across FLEURS, Common Voice, and NST.

Index Terms: Automatic speech recognition, Speech-to-text, Whisper

1. Introduction

The abundance of openly available audio data in English enables pre-training and fine-tuning of speech-to-text models on hundreds of thousands to millions of hours of recorded speech. As a result, automatic speech recognition (ASR) systems are approaching human-level robustness in English. Other languages' performance in multilingual speech recognition tends to stand in proportion to the amount of data included from the language (or the language family) in question. For low- to mid-resource languages with fewer speakers, such as Swedish, the amount of openly available data may be limited, and as a consequence these languages tend to be underrepresented in large-scale efforts to train multilingual speech recognition systems. This results in suboptimal performance of ASR applications, such as voice-based customer service and automatic captioning. Language models and ASR models are often trained on massively multilingual datasets. By continuing to train these multilingual models for a specific language, performance improves greatly compared to that of the multilingual models alone. This is necessary for low- and mid-resource languages to remain competitive and sovereign in today's rapid AI development. Similarly to previous work for Norwegian [1], we fine-tune Whisper in all canonical sizes on our massive dataset for Swedish.

Large collections of audio recordings maintained by cultural heritage institutions, such as the National Library of Sweden, and government agencies can potentially bridge some of the existing speech recognition performance gaps between Swedish and higher-resourced languages. By scaling up the amount of training data, one can noticeably improve Swedish speech recognition. In this work, KBLab [2] at the National Library of Sweden has constructed a massive speech corpus with an emphasis on all variations of spoken Swedish, which is used to train speech recognition models for Swedish.

2. Automatic speech recognition

Automatic speech recognition models translate speech (i.e., spoken language) into text (i.e., written language). The form of the output may vary between phonetic transcriptions, raw text without punctuation, and formatted sentences with punctuation.

2.1. Model architecture

Whisper [3] is a multilingual ASR model with additional translation capabilities. By relaxing the quality standard of the labeled training data, vast amounts of training data are unlocked.
The authors show that even with imperfectly labeled training data, such as subtitles, it is still possible to achieve speech recognition performance approaching human-level robustness in English. Using the latest official Whisper models as a starting point, we fine-tune them on our herewith presented corpus of Swedish speech.

The Whisper model architecture is an encoder-decoder transformer [4]. The input audio is split up into 30 s chunks, resampled to 16 kHz, and transformed to an 80-channel log-magnitude Mel spectrogram (128-channel for large-v3). Feature normalization is applied, along with a two-layered 1-dimensional convolutional encoder with a GELU activation function, followed by sinusoidal position embeddings and transformer encoder blocks. We employ a Byte Pair Encoding (BPE) dropout [5] of 0.2, which was shown to have beneficial regularizing effects for the large model size [1]. An activation dropout of 0.1 is also applied. The model is trained using the PyTorch implementation by Hugging Face that is detailed in the Whisper fine-tuning event (https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event). We adapt this script and add functionality to train with timestamps as well as with previous context as a prompt.
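As a rough illustration of this setup, the snippet below sketches a single fine-tuning step with the Hugging Face Whisper classes; the checkpoint name, the bare training loop, and the learning rate (taken from the small model's value in Table 2) are illustrative simplifications of the paper's distributed two-stage training.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

def train_step(audio_16khz, transcript):
    # Log-Mel spectrogram features for a padded 30 s window.
    inputs = processor(audio_16khz, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(input_features=inputs.input_features, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```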
3. A massive Swedish speech corpus

OpenAI's whisper-large-v3 (https://huggingface.co/openai/whisper-large-v3) has been trained on 680 000 h of multilingual speech, of which 2119 h are in Swedish. While [3] report that all increases in dataset size lead to improved performance, the improvement slightly stagnates after a dataset size of around 50 000 h. With these numbers in mind, a dataset consisting of 50 000 h of transcribed Swedish was constructed.

Not only is the amount of data important to training ASR models, but also language variation. One common drawback of using the multilingual Whisper is its inability to recognize variations in a lower-resourced language, such as dialects. While dialects are generally more difficult to correctly transcribe, given their decreased occurrence in the training data, this is especially noticeable for low- and mid-resource languages. Models trained on material from the web, which can be assumed for models such as Whisper, have only learnt to recognize speakers that are commonly represented in web material, e.g., YouTube videos. Training on this type of material excludes many varieties of speech that exist in a population. By actively including data sources that represent other types of speech, we increase the probability of developing a model that performs better for a wider variety of dialects, sociolects, age groups, and accents spoken by non-native Swedish speakers.

In this work, KBLab at the National Library of Sweden has collaborated with the Riksdag administration (i.e., the parliament administration in Sweden), Sweden's Television (SVT), and the Institute for Language and Folklore (Institutet för språk och folkminnen), and together we have been able to construct a transcribed Swedish dataset of unprecedented size and variability. The dataset types can be split into four categories, which are presented below.

3.1. Subtitles

Subtitles have traditionally existed for Swedish television to subtitle non-Swedish TV broadcasts. Swedish closed captions for Swedish spoken content have become increasingly available as a means to aid accessibility for the hearing impaired. The Swedish subtitling tradition tends to condense the content rather than provide verbatim transcriptions, in order to give the viewer enough time to read.

In our filtering, we select only Swedish TV broadcasts and web content where metadata indicates the subtitle track is in Swedish. To ensure the speech is also in Swedish, we perform language detection using OpenAI's whisper-large-v3. The final subtitle dataset is thus constructed out of audio identified as Swedish paired with Swedish subtitles.

3.1.1. SMDB

With access to the Swedish Media Database (SMDB) at the National Library of Sweden, we extracted audio and subtitles from 15 Swedish TV channels, in a 24 h stream format, without any program metadata, from 2021 to 2023, resulting in more than 150 000 h of raw broadcasts. Of the 63 000 h of audio matching the subtitles, around 80% were duplicates. From this deduplicated subset, 20% did not contain Swedish speech. Finally, another 30% of the chunks were dropped for not reaching the desired quality for training.

3.1.2. SVT

In contrast to the SMDB dataset, cleaning the data provided by SVT was kept to a minimum, as it was delivered separated into various programs with reliable metadata. Starting with 30 275 h of chunks, 61% was kept for training.

3.1.3. YouTube

We additionally collected audio and subtitles from 2704 YouTube channels, resulting in about 15 416 h before further quality filtering is applied using our pipeline described in Section 3.5. Since subtitles on YouTube do not necessarily follow the same guidelines as TV, they generally provide a more verbatim transcription of what is spoken.

3.2. Parliamentary recordings

As a part of this project, we have been granted access to all digitized audio recordings of the Swedish Riksdag since 1966. These recordings were matched and force-aligned against protocol transcripts of the debates. The resulting speech corpus is named RIXVOX-V2 and consists of 18 000 h of transcribed speech recorded before 2003 and 5500 h after 2003. Apart from its sheer size, this corpus also exhibits a wide variation in Swedish dialects, as well as providing both a modern and a historic resource for political spoken Swedish. The total amount of audio recordings used in this project is 23 500 h, of which almost 22 000 h pass the "Stage 1" quality filters outlined in Section 3.5. We publish the entire RIXVOX-V2 dataset on Hugging Face with a permissive license to further research efforts in Swedish speech technology.

3.3. Dialect recordings

To improve ASR performance on Swedish dialects, special dialect recordings were added to the training data. The Institute for Language and Folklore is a Swedish government agency that hosts a large collection of speech recordings. Some of these recordings are the results of research projects, such as SweDia [6], in which various Swedish dialects have been professionally recorded and transcribed. Although this corpus does not compare in size with the other datasets, it provides an important diversity of speech, needed for developing a model that is representative of the whole population.
3.4. Other

3.4.1. NST

The NST dataset, collected by Nordic Language Technology (Nordisk Språkteknologi), is hosted by the Language Bank at the National Library of Norway (https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-56/). It consists of hundreds of hours of recordings and transcriptions of up to 1000 speakers with different Swedish dialects. The recordings consist of both longer sentences and single-word examples. The dataset is split into a training set, which is included in our training data, and a test set, which is used as a hold-out test set in our evaluation.

3.4.2. Common Voice

Mozilla is collecting an open-source, crowd-sourced, multilingual "dataset of voices" called Common Voice [7]. It is the largest publicly available speech dataset, of which Swedish constitutes 54 h of recordings. The train, validation, and test splits of Common Voice 16.1 are all used as hold-out test sets.

3.4.3. FLEURS

FLEURS [8], Few-shot Learning Evaluation of Universal Representations of Speech, is an n-way parallel speech dataset in 102 languages, with approximately 12 h of speech per language. We use the FLEURS validation split for validation purposes during training, but keep its train and test splits as hold-out test sets for our final evaluation.

3.5. Data preprocessing

The preprocessing pipeline is common to all dataset types and is outlined below:

1. The audio is kept in its source format and converted to 16 kHz mono slices on the fly during all processing steps. A corresponding format-agnostic metadata file for each audio file contains the raw units of observation, e.g., subtitle blocks or force-aligned sentences, using a single unified format.
2. Only Swedish text transcriptions are retained. We filter based on subtitle track metadata for data sources whose metadata is reliable. Language detection classifiers for text are used where metadata is not reliable.
3. Candidate chunks to be used during training are created by combining and packing our observations into chunks of up to 30 s length based on timestamp metadata. Shorter chunks are additionally sampled by setting a smaller maximum chunk length, in order to boost the model's robustness to variable audio input lengths.
4. Audio language detection is performed on all candidate chunks using whisper-large-v3. Only chunks where Swedish is detected are retained.
5. Candidate chunks passing the previous filter are transcribed using whisper-large-v3 and VOXREX, a Swedish Wav2Vec 2.0 [9].
6. The transcribed results are compared to the original transcript using the following metrics:

• The Word Error Rate (WER) is a common metric derived from the Levenshtein distance. It measures how accurately an ASR model transcribes spoken language compared to a reference transcript. A lower WER corresponds to a better transcription compared to the reference text. The Character Error Rate (CER) is the character-level equivalent of WER.
• The Bilingual Evaluation Understudy (BLEU) score [10] is a precision-based metric that computes a value between 0 and 1, where 1 corresponds to a perfect agreement between the prediction and the gold standard.
• The Recall-Oriented Understudy for Gisting Evaluation score (ROUGE-N) [11] is a recall-based metric that measures the overlap of n-grams between a model's output and a reference text.
• Approximate string matching is the technique of finding strings that match a pattern approximately (rather than exactly). This is used to match the first and last words in a chunk. We additionally calculate a CER-based metric to match the first and last 10 characters in a chunk.

Using the above metrics to measure the quality of the transcription, we create two categories for our training corpus. Data that do not meet the quality filters applied in either category are excluded from training.

Stage 1. The first category is defined by a set of relaxed values for CER and BLEU, comparing the original transcription to that of whisper-large-v3 and VOXREX. Although the whisper-large-v3 transcriptions are generally of higher quality than those from VOXREX, both transcriptions are used to safeguard against potential hallucinations from Whisper. We apply dataset-specific BLEU thresholds based on qualitative assessments, where the main aim during "Stage 1" is to filter out most of the low-signal and pure-noise observations where the ground truth has very low or no overlap with the audio. CER thresholds comparing the first and last 10 characters of ground-truth and machine transcriptions are used only to determine which samples to train with timestamp tokens during this stage. The hypothesis behind this category of data is that even though the transcriptions are not perfect, they are still good enough to learn from.

Stage 2. The second category, denoted "Stage 2", is characterized by a set of higher requirements for the quality of the transcriptions, to ensure that only the most accurate examples from the training data are included in the second stage. If the previous category was characterized by the width of the variation of Swedish, this category is instead characterized by the transcription style. Additionally, in Stage 2 filtering, we introduce another metric, ROUGE-N, to further ensure high transcription accuracy. During filtering, we calculate the BLEU score and a weighted combined ROUGE-N score using weights that place higher emphasis on longer n-grams: we assign weights 0, 0.25, 0.5, and 0.25 to 1-, 2-, 3-, and 4-grams, respectively. A CER threshold of 0.2 or lower for the first and last 10 characters is used here to determine whether to include the sample.
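A minimal sketch of the endpoint check used in this stage: a standard Levenshtein-based CER plus a Stage 2 gate. The BLEU and weighted ROUGE values are assumed to be computed elsewhere, and the threshold arguments are placeholders, since the dataset-specific values are not listed in the paper.

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate via Levenshtein distance."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (r != h)))
        prev = cur
    return prev[-1] / max(len(ref), 1)

def passes_stage2(ref, hyp, bleu, rouge_w, bleu_thr, rouge_thr):
    """Stage 2 keeps a chunk only if BLEU and the weighted ROUGE score clear
    their thresholds and the first and last 10 characters match within CER 0.2."""
    endpoints_ok = (cer(ref[:10], hyp[:10]) <= 0.2
                    and cer(ref[-10:], hyp[-10:]) <= 0.2)
    return endpoints_ok and bleu >= bleu_thr and rouge_w >= rouge_thr
```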
The final composition of our corpus is shown in Table 1.

Table 1: Composition of our Swedish speech corpus

Dataset              Stage 1 (h)   Stage 2 (h)
Subtitles: SVT       18 468        688
Subtitles: SMDB      8 629         421
Subtitles: YouTube   7 164         2 001
Riksdag              21 949        5 119
Dialects             54            54
NST                  250           250
Total                56 514        8 533

3.5.1. Non-speech and timestamps

Although the above pipeline would filter out any examples where there is no speech, we actively include some portion of the data where no speech is present. One common issue with Whisper is its hallucinations, where the model outputs transcriptions not corresponding to the speech, or transcriptions when there is no speech. One hypothesis behind the hallucinations is the lack of training examples with no speech and no corresponding transcriptions. To this end, we include examples in our training data with no speech or transcription to help the model learn not to output any transcriptions when there is silence. The total amount of non-speech in our training data is 100 h.

A portion of the training data is prepared with timestamp tokens added at the subtitle block or sentence level, with the goal of retaining the original Whisper model's ability to output timestamps. In order to provide a consistent signal to the model, we only train with timestamps on those examples where the CER between the ground truth and the first and last 10 characters of an ASR transcription is 0.2 or lower. Timestamps are sampled into the training at a rate of roughly 50% during "Stage 2", and at lower single-digit percentage rates in "Stage 1".

3.5.2. Previous context as prompt

The datasets are processed to allow for the option to train on the text of a previous adjoining audio segment as a contextual prompt. We train with the previous context at a 50% rate. Since Whisper uses the same token, <|endoftranscript|>, for padding and end of transcript, a crucial detail in the loss masking implementation is to leave one <|endoftranscript|> unmasked, while masking the last n−1 occurrences.
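A sketch of that masking detail in PyTorch, assuming labels are stored as a padded 2-D tensor and -100 is the ignore index of the cross-entropy loss; the function name and tensor layout are our assumptions.

```python
import torch

def mask_padding_eot(labels: torch.Tensor, eot_id: int) -> torch.Tensor:
    """Replace padding <|endoftranscript|> tokens with -100 (ignored by the
    loss) while leaving the first occurrence unmasked, since Whisper reuses
    the same token for end-of-transcript and padding."""
    masked = labels.clone()
    for row in masked:
        eot_positions = (row == eot_id).nonzero(as_tuple=True)[0]
        if len(eot_positions) > 1:
            row[eot_positions[1:]] = -100  # keep one, mask the last n-1
    return masked
```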
4. Model training

In [3], five model sizes of Whisper were trained. In order to accommodate all end users, from large-scale transcription tasks utilizing GPUs to modest applications run on a laptop, all model sizes are fine-tuned. The model sizes range from 39 million to 1.5 billion parameters and are outlined in Table 2, along with the chosen learning rates for the training.

Table 2: Number of parameters, learning rate, and node hour consumption for all Whisper model sizes

Model name   Parameters   Learning rate   Node hours
tiny         39 M         6×10⁻⁴          600
base         74 M         4×10⁻⁴          700
small        244 M        2×10⁻⁴          880
medium       769 M        5×10⁻⁵          1600
large        1.55 B       7×10⁻⁵          3100

Much of the training details are similar to those detailed in [3], such as the AdamW [12] optimizer and a linearly decaying learning rate after 5000 steps. The main modification with respect to the original Whisper training lies in the choice of learning rate, which we set to slightly lower values. All model sizes are trained with a two-stage approach: an initial training using the Stage 1 portion of the data for 150 000 steps, with the corresponding learning rates detailed in Table 2, followed by a second training stage for 50 000 steps using the Stage 2 category of the data. All models are trained on the LEONARDO supercomputer [13], with access awarded through the EuroHPC Joint Undertaking. Each computing node at the LEONARDO supercomputer is equipped with four Nvidia custom A100 64 GB GPUs. The total computing consumption is detailed in Table 2. The total train batch size is set to 1024, with per-device batch sizes of 16.

5. Results and discussion

We report test results as a comparison between OpenAI's Whisper and our models, evaluated using the metrics word error rate (WER) and BLEU. The splits of three hold-out datasets that have not been employed for validation purposes during training are used: namely FLEURS (train and test set), NST (test set), and Common Voice (train, validation, and test set). Common Voice and FLEURS are not part of our training data, and as such they represent the models' out-of-domain performance.

There is a significant improvement in performance for all model sizes trained in this paper compared to OpenAI's Whisper, which can be seen in Table 3 for WER and Table 4 for BLEU. Most notable is the improvement in WER for the smaller model sizes: the WER of the OpenAI Whisper-tiny model evaluated on FLEURS is 59.2, a number that has rendered the model more or less unusable, whereas we report a WER of 13.2 for the smallest model size (tiny). The second noteworthy observation is that our Whisper-small outperforms OpenAI's Whisper-large as evaluated on these benchmarks. This result has a positive implication on the amount of compute needed for transcription tasks, as the same performance can be achieved by a much smaller model that in turn consumes significantly less compute during inference. Most notably, we report an average 47% reduction in WER comparing our best performing model to OpenAI's whisper-large-v3, in evaluations across FLEURS, Common Voice, and NST.

Table 3: Model performance measured in WER (lower is better) evaluated on FLEURS, Common Voice, and NST.

Model size            FLEURS   Common Voice   NST
tiny       KBLab      13.2     12.9           11.2
           OpenAI     59.2     67.8           85.2
base       KBLab      9.1      8.7            7.8
           OpenAI     39.6     52.1           53.4
small      KBLab      7.3      6.4            6.6
           OpenAI     20.6     26.4           26.4
medium     KBLab      6.6      5.4            5.8
           OpenAI     12.1     15.8           17.1
large-v3   KBLab      5.4      4.1            5.2
           OpenAI     7.8      9.5            11.3

Table 4: Model performance measured in BLEU (higher is better) evaluated on FLEURS, Common Voice, and NST.

Model size            FLEURS   Common Voice   NST
tiny       KBLab      76.6     73.7           74.3
           OpenAI     26.9     21.1           24.0
base       KBLab      83.2     79.9           78.3
           OpenAI     41.1     32.5           36.9
small      KBLab      86.6     83.5           79.6
           OpenAI     64.0     56.5           58.2
medium     KBLab      87.6     85.0           80.2
           OpenAI     77.1     70.1           68.9
large-v3   KBLab      89.8     87.2           81.1
           OpenAI     84.9     79.1           75.1

5.1. Future work

Although the results reported in this paper exhibit a massive improvement in ASR for Swedish, we see many avenues for future work that can further strengthen ASR for Swedish.

The data preprocessing pipeline outlined in Section 3.5 relies on the transcriptions from OpenAI's whisper-large-v3 to select high-quality training data. An obvious drawback of this approach is that one cannot differentiate whether the training data is of bad quality (e.g., not identified as Swedish) or the model used to transcribe is not good enough. One risk is that this approach rejects training data with difficult dialects, as the OpenAI model has not been trained on this type of speech before. By using our fine-tuned Whisper-large for the transcription step in the data preprocessing pipeline, one could potentially select more high-quality training data as well as training data with rare dialects.
Data that had otherwise been filtered out due to poor model performance could then be included in the training data, further widening the range of quality in the training data. By the same reasoning, using a better Wav2Vec 2.0 model than the VOXREX used in the data preprocessing pipeline would be beneficial.

6. Acknowledgements

We acknowledge the EuroHPC Joint Undertaking for awarding this project access to the EuroHPC supercomputer LEONARDO, hosted by CINECA (Italy) and the LEONARDO consortium, through the Development Access call and the AI and data intensive applications access call.

7. References

[1] P. E. Kummervold, J. de la Rosa, F. Wetjen, R.-A. Braaten, and P. E. Solberg, "Whispering in Norwegian: Navigating orthographic and dialectic challenges," in Interspeech 2024, 2024, pp. 3984–3988.
[2] L. Börjeson, C. Haffenden, M. Malmsten, F. Klingwall, E. Rende, R. Kurtz, F. Rekathati, H. Hägglöf, and J. Sikora, "Transfiguring the library as digital research infrastructure: Making KBLab at the National Library of Sweden," College & Research Libraries, vol. 85, no. 4, pp. 564–582, 2024. [Online]. Available: https://crl.acrl.org/index.php/crl/article/view/26325
[3] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in Proceedings of the 40th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, Eds., vol. 202. PMLR, 23–29 Jul 2023, pp. 28 492–28 518. [Online]. Available: https://proceedings.mlr.press/v202/radford23a.html
[4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30. Curran Associates, Inc., 2017. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
[5] I. Provilkov, D. Emelianenko, and E. Voita, "BPE-dropout: Simple and effective subword regularization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, D. Jurafsky, J. Chai, N. Schluter, and J. Tetreault, Eds. Online: Association for Computational Linguistics, Jul. 2020, pp. 1882–1892. [Online]. Available: https://aclanthology.org/2020.acl-main.170/
[6] J. Lindh and A. Eriksson, "The SweDat project and SweDia database for phonetic and acoustic research," in 2009 Fifth IEEE International Conference on e-Science, 2009, pp. 45–49.
[7] R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber, "Common Voice: A massively-multilingual speech corpus," 2020.
[8] A. Conneau, M. Ma, S. Khanuja, Y. Zhang, V. Axelrod, S. Dalmia, J. Riesa, C. Rivera, and A. Bapna, "FLEURS: Few-shot learning evaluation of universal representations of speech," in 2022 IEEE Spoken Language Technology Workshop (SLT), 2023, pp. 798–805.
[9] M. Malmsten, C. Haffenden, and L. Börjeson, "Hearing voices at the National Library – a speech corpus and acoustic model for the Swedish language," 2022. [Online]. Available: https://arxiv.org/abs/2205.03026
[10] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, P. Isabelle, E. Charniak, and D. Lin, Eds. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics, Jul. 2002, pp. 311–318. [Online]. Available: https://aclanthology.org/P02-1040/
[11] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out, 2004, pp. 74–81.
[12] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," in International Conference on Learning Representations, 2019. [Online]. Available: https://openreview.net/forum?id=Bkg6RiCqY7
[13] M. Turisini, G. Amati, and M. Cestari, "LEONARDO: A pan-European pre-exascale supercomputer for HPC and AI applications," 2023.
CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning

Jinyuan Feng¹²*, Chaopeng Wei³*, Tenghai Qiu²†, Tianyi Hu¹², Zhiqiang Pu¹²
¹Institute of Automation, Chinese Academy of Sciences
²School of Artificial Intelligence, University of Chinese Academy of Sciences
³University of Science and Technology Beijing
*Equal contributions. †Corresponding author.

Abstract

In parameter-efficient fine-tuning, mixture-of-experts (MoE), which involves specializing functionalities into different experts and sparsely activating them appropriately, has been widely adopted as a promising approach to trade off between model capacity and computation overhead. However, current MoE variants fall short on heterogeneous datasets, ignoring the fact that experts may learn similar knowledge, resulting in the underutilization of MoE's capacity. In this paper, we propose Contrastive Representation for MoE (CoMoE), a novel method to promote modularization and specialization in MoE, where the experts are trained along with a contrastive objective by sampling from activated and inactivated experts in top-k routing. We demonstrate that such a contrastive objective recovers the mutual-information gap between inputs and the two types of experts. Experiments on several benchmarks and in multi-task settings demonstrate that CoMoE can consistently enhance MoE's capacity and promote modularization among the experts.

1 Introduction

Parameter-Efficient Fine-Tuning (PEFT) has emerged to efficiently adapt Large Language Models (LLMs) to downstream tasks by updating only a subset of parameters, significantly reducing computational and memory overhead (Hu et al., 2021; Liu et al., 2022; He et al., 2021). However, it struggles with substantially increased dataset sizes, especially heterogeneous training datasets, which poses a significant practical challenge (Huang et al., 2024; Wang et al., 2024). Mixture-of-Experts (MoE) offers a versatile solution to this challenge thanks to its modular design (Zhang et al., 2024).

Thus, Low-Rank Adaptation (LoRA), as a popular and effective PEFT method, has been widely integrated with MoE (Dou et al., 2023; Li et al., 2024), leveraging MoE's modularity to enhance the model's capacity and performance. By sparsely activating a subset of experts, LoRA's MoE variants achieve efficient training on heterogeneous datasets and allocate the experts adaptively (Tian et al., 2024). Specifically, the sparse activation is controlled through a router mechanism (e.g., top-k routing) that dispatches inputs to the activated experts. Basically, given an input token, only a subset of specialized experts contribute to the output, while other irrelevant experts remain inactive.

Ideally, each expert should specialize in distinct representation subspaces and semantic skills, thereby collaboratively enhancing the model's representational capacity and enabling a broader spectrum of knowledge (Liu et al., 2023). However, despite the explicit division into multiple experts in the MoE architecture, its degree of modularization remains questionable.
Two issues persist: (1) expert knowledge redundancy, where insufficient specialization constraints lead to overlapping functionalities among experts, limiting the model's capacity (Feng et al., 2025); and (2) expert load imbalance, where inadequate modularity and specialization during training result in frequent activation of only a subset of experts, which underutilizes the other experts and contradicts the original design intent. Consequently, as some studies have indicated (Qian et al., 2024), simply stacking
more experts does not linearly improve performance; instead, it leads to a performance bottleneck. Existing studies propose a load balance loss (Li et al., 2024) and a localized balancing constraint (Dou et al., 2023) to alleviate these issues, but that is still far from enough.

In this paper, we propose a novel perspective to promote the specialization of experts. As illustrated in Fig. 1, building upon top-k routing, we categorize the experts into activated experts and inactivated experts. Then, we quantify the specialization of experts by the mutual information (MI) between the input token and the two types of experts. To promote expert specialization, we define an MI gap, which is derived from the aforementioned MI, and aim to maximize it. In practice, based on the InfoNCE theory (Oord et al., 2018), such an MI gap can be approximated via a contrastive objective by using positive samples from the activated experts and negative samples from the inactivated experts (Lan et al., 2024; Wen et al., 2024). The contrastive objective is incorporated as an auxiliary objective during training, encouraging specialization and modularization among experts. We name the proposed method Contrastive Representation for MoE (CoMoE), a novel MoE variant. Empirically, we evaluate CoMoE on diverse benchmarks, showcasing its remarkable performance on heterogeneous tasks.

Figure 1: Given an input token x, (a) illustrates the workflow of top-2 routing, which serves as a fundamental mechanism of CoMoE; (b) illustrates the motivation of CoMoE: maximizing MI between input x and activated experts while minimizing MI between input x and inactivated experts.

Summary of our contributions:

• We define an MI gap to quantify expert specialization and redundancy in top-k routing, with contrastive learning providing an efficient estimation approach.
• We propose a novel MoE variant, named CoMoE, which incorporates an auxiliary contrastive objective to enhance expert specialization and modularization.
• Comprehensive experiments are conducted to demonstrate that our method consistently improves MoE on heterogeneous tasks.

2 Preliminaries

LoRA Basics. LoRA (Hu et al., 2021) introduces a pair of low-rank matrices $A$ and $B$ to reparameterize the pretrained weights $W_0$ in a residual manner. Specifically, input $x$ is processed through both the frozen weights and the low-rank matrices:

\[
y' = W_0 x + BAx, \tag{1}
\]

where $y'$ denotes the output, with $A \in \mathbb{R}^{r \times d_2}$ and $B \in \mathbb{R}^{d_1 \times r}$. The rank $r \ll \min(d_1, d_2)$ is significantly small to reduce the number of tunable parameters.
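A minimal PyTorch sketch of Eq. (1), wrapping a frozen linear layer with trainable low-rank factors; the initialization scheme shown here is a common convention, not something the paper specifies.

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Eq. (1): y' = W0 x + B A x, with W0 frozen and only A, B trained."""
    def __init__(self, base: nn.Linear, r: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        d_out, d_in = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # A in R^{r x d2}
        self.B = nn.Parameter(torch.zeros(d_out, r))        # B in R^{d1 x r}, zero-init

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T
```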
Mixture of Experts. In LoRA's MoE variants, the original LoRA module is substituted with $n$ parallel experts, each denoted as $\{E_i(x) = B_i A_i x\}_{i=1}^{n}$. These experts are activated via a router $g(x; G)$ to process the input collaboratively. Specifically, given an input $x$, the router calculates the importance of each expert, and the output $y'$ is computed residually as a weighted sum of the outputs from the experts:

\[
y' = W_0 x + \sum_{i=1}^{n} g_i(x; G)\, E_i(x), \tag{2}
\]

where $g_i(x; G)$ represents the weight of the $i$-th expert, and $E_i(x)$ denotes the output of expert $i$.

Top-k Routing. Top-k routing is a common and effective routing strategy for the router $g(x; G)$ in MoE, which sparsely activates a subset of the experts. Specifically, only the top $k$ experts with the highest values in $g(x; G)$ are activated. Then, $g(x; G)$ is renormalized over the activated experts:

\[
\hat{g}_i(x) =
\begin{cases}
\dfrac{g_i(x)}{\sum_{j \in \mathrm{top}(g(x), k)} g_j(x)} & \text{if } i \in \mathrm{top}(g(x), k), \\
0 & \text{if } i \notin \mathrm{top}(g(x), k),
\end{cases} \tag{3}
\]

where $\mathrm{top}(g(x), k)$ returns the indices of the largest $k$ elements in $g(x)$.
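Eq. (3) translates directly into a few tensor operations. The sketch below is illustrative and omits the auxiliary losses that real routers typically carry.

```python
import torch
import torch.nn.functional as F

def topk_route(gate_logits: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Eq. (3): softmax gate, keep the top-k weights per token, renormalize
    them to sum to 1, and zero out all inactivated experts."""
    g = F.softmax(gate_logits, dim=-1)   # g(x; G), shape (tokens, n_experts)
    vals, idx = g.topk(k, dim=-1)
    weights = torch.zeros_like(g)
    weights.scatter_(-1, idx, vals / vals.sum(dim=-1, keepdim=True))
    return weights  # then y' = W0 x + sum_i weights_i * E_i(x)
```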
3 Related Works

3.1 Parameter-Efficient Fine-Tuning

PEFT methods can be categorized into adapter tuning, prompt tuning, and prefix tuning. Adapter tuning (Hu et al., 2023, 2021; Zhang et al., 2023b) includes algorithms like BitFit (Zaken et al., 2021), which links learnable vectors to hidden states using multiplication or addition, and (IA)³ (Liu et al., 2022), which introduces interpretable adapters to enhance model transparency and task-specific adaptability. Prompt tuning (Li and Liang, 2021; Liu et al., 2021; Zhu et al., 2024) introduces learnable prefixes to the input sequence. Prefix tuning (Li and Liang, 2021) modifies hidden states by adding embeddings, making it efficient for few-shot tasks.

LoRA (Hu et al., 2021) reduces the number of tunable parameters by introducing low-rank matrices into pre-trained weights, enhancing performance and efficiency. Currently, LoRA is the most popular and commonly used PEFT method (Cui et al., 2023; Zhang et al., 2022). Numerous studies have focused on improving LoRA: Tied-LoRA (Renduchintala et al., 2023) further reduces trainable parameters by applying weight tying. AdaLoRA (Zhang et al., 2023a) utilizes singular value decomposition to decompose the weight matrices and eliminates insignificant singular values, thereby simplifying the update process. DoRA (Liu et al., 2024) decomposes weights into magnitude and direction components for efficiency.

3.2 Mixture of Experts

Recent research has increasingly focused on combining LoRA with MoE (Shazeer et al., 2017; Jacobs et al., 1991) for adaptable and scalable LLMs, emerging as a balance between model capacity and computation overhead (Li et al., 2024; Zhang et al., 2024). Generally, MoE is employed for two primary purposes. (1) Avoiding catastrophic forgetting: LoRAMoE (Dou et al., 2023) leverages token-based routing to mitigate knowledge loss. MOELoRA (Liu et al., 2023) enhances expert selection in multi-tasking. MoRAL (Yang et al., 2024) adapts to new tasks in lifelong learning while preserving old knowledge. (2) Model efficiency: HydraLoRA (Tian et al., 2024) adopts an asymmetric LoRA structure without domain expertise, and MiLoRA (Zhang et al., 2024) employs prompt-aware routing to reduce latency. LoRAMoE (Dou et al., 2023) incorporates a localized balancing constraint to achieve balanced workloads. SCMoE (Shi et al., 2024) leverages unchosen experts to enhance parameter efficiency. Current methods focus on architectural expert partitioning but neglect the capacity underutilization caused by expert redundancy.

4 Methods

In this section, we first define an MI gap for top-k routing. Then we derive a contrastive objective to estimate the gap while learning expert representations. Finally, we present a training approach that incorporates the contrastive objective as an auxiliary loss.

4.1 Motivation

As demonstrated in OMoE (Feng et al., 2025), the vanilla MoE variant lacks specialization and modularity, causing LoRA experts to collapse into similar distributions. The lack of specialization and redundant knowledge minimizes the utilization of capacity, which exacerbates performance degradation on heterogeneous tasks. Existing MoE variants (Li et al., 2024; Liu et al., 2023; Luo et al., 2024) leverage balance losses to promote specialization among the experts, but fall far short.

Ideally, experts should exhibit modularity and high specialization with minimal redundancy. To quantify these properties, we leverage MI, a basic concept in information theory, to evaluate the dependence between inputs and experts in MoE. Thus, we formalize the aforementioned idea using MI to quantify the specialization and redundancy between the experts in top-k routing:

• Maximizing the MI between inputs and activated experts: Promote activated experts to respond to inputs that highly match, thereby encouraging specialization. In addition, MI serves as an information bottleneck that filters irrelevant noise.
• Minimizing the MI between inputs and inactivated experts: Suppress the response of inactivated experts to irrelevant inputs, thereby preventing multiple experts from learning similar representations.

Figure 2: Architecture of CoMoE. Activated experts $e_i^t, i \in \mathrm{top2}$ in top-2 routing are selected as the query $q$ and positive keys $k^+$, while inactivated experts $e_i^t, i \notin \mathrm{top2}$ serve as negative keys $k^-$.

4.2 MI Gap for Top-k Routing

Building upon the motivation, we begin by defining an MI gap for an input token $x$ and a task experts set $M$ in top-k routing. For simplicity, we denote $I(\cdot;\cdot)$ as MI, $H(\cdot)$ as Shannon entropy, and $E_i(\cdot)$ as the network of expert $i$. Correspondingly, the representations of activated and inactivated experts are formulated as $e^+ = E^+(x)$ and $e^- = E^-(x)$.

Given an input token, an ideal expert in MoE should retain specialized knowledge while discarding knowledge redundant with the other experts. Intuitively, this equals the uncertainty reduction in the activated experts given the input token. We use $p$ to denote the joint distribution as well as its associated marginals. Then the MI between the input token $x$ and the task experts set $M$ is defined as:

\[
I(x; M) = \mathbb{E}_{x, M \sim \mathcal{D}} \left[ \log \frac{p(M \mid x)}{p(M)} \right], \tag{4}
\]

where $p(M \mid x)$ and $p(M)$ follow empirical distributions according to the existing input and output dataset $\mathcal{D}$. Then, we establish a lower bound for $I(x; M)$ to convert it to the MI between $x$ and the expert representation $e = E(x)$:

\[
\begin{aligned}
I(x; M) &= \mathbb{E}_{x, M} \left[ \log \frac{p(x \mid M)}{p(x)} \right] \\
&= \mathbb{E}_{x, M} \left[ \log \int_e \frac{p(x \mid e)\, p(e \mid M)}{p(x)}\, de \right] \\
&= \mathbb{E}_{x, M} \left[ \log \mathbb{E}_e \frac{p(x \mid e)}{p(x)} \right] \\
&= \mathbb{E}_{x, M} \left[ \log \mathbb{E}_e \frac{p(e \mid x)}{p(e)} \right],
\end{aligned} \tag{5}
\]

where $p(e \mid x)$ can be approximated by the expert network $E(x)$. Since $\log(\cdot)$ is a concave function, Jensen's inequality yields a theoretical lower bound of $I(x; M)$:

\[
I(x; M) \geq \mathbb{E}_{x, e, M} \left[ \log \frac{p(e \mid x)}{p(e)} \right]. \tag{6}
\]

In top-k routing, the input token $x$ and the expert representation $e$ form samples $(x, e)$, which can be categorized into two different datasets: $(x, e^+) \sim \mathcal{D}_{\text{top-k}}$, samples from the activated experts; and $(x, e^-) \sim \mathcal{D}_{\neg\text{top-k}}$, samples from the inactivated experts. For clarity, we denote $M^+$ as the activated task experts and $M^-$ as the inactivated ones. Then, we define the MI gap for top-k routing as:
Definition 1. The MI gap between the activated and inactivated task experts in top-k routing is defined by:

\[
\Delta I = I_{\text{top-k}}(x, M^+) - I_{\neg\text{top-k}}(x, M^-), \tag{7}
\]

where $I_{\text{top-k}}$ is the MI between input token $x$ and the activated experts $M^+$, and $I_{\neg\text{top-k}}$ is the MI between input token $x$ and the inactivated experts $M^-$.

The MI terms $I_{\text{top-k}}$ and $I_{\neg\text{top-k}}$ can be converted into a similar form as Eq. 5. To maximize the MI gap $\Delta I$, it is necessary to maximize the MI between $x$ and the activated task experts $M^+$ while minimizing the MI between $x$ and $M^-$. When the experts exhibit high specialization, each expert only yields high mutual information for specific token subsets. Concurrently, when knowledge redundancy among the experts is minimized, the mutual information between tokens and inactivated experts approaches zero.

4.3 Contrastive Representation for the MI Gap

To estimate the MI gap, Eq. 6 provides a lower bound for the MI between input token $x$ and task experts $M$. In practice, $p(e \mid x)/p(e)$ cannot be calculated directly, so we use two score functions $h_1(\cdot)$ and $h_2(\cdot)$ to measure the information density ratio, which preserves the MI between $x$ and $e$ for the activated and inactivated experts, respectively. Let $h_1(x, e^+) \propto p(e^+ \mid x)/p(e^+)$ and $h_2(x, e^-) \propto p(e^- \mid x)/p(e^-)$. Then, we adopt contrastive learning to estimate the MI gap.

Instead of estimating $I_{\text{top-k}}$ and $I_{\neg\text{top-k}}$ separately, we simplify the learning process by establishing a single contrastive objective to estimate $\Delta I$. Specifically, we define the samples $(x, e^+)$ from the activated experts in $\mathcal{D}_{\text{top-k}}$ as positive samples, and the negative samples are drawn from the inactivated experts in $\mathcal{D}_{\neg\text{top-k}}$. The loss function of contrastive learning can be formulated as:

\[
\mathcal{L}_{\text{NCE}} = -\mathbb{E}_{p(x, e^+)} \mathbb{E}_{\mathcal{D}_{\neg\text{top-k}}} \left[ \log \frac{h_1(x, e^+)}{h_1(x, e^+) + \sum_{e^- \in \mathcal{D}_{\neg\text{top-k}}} h_2(x, e^-)} \right]. \tag{8}
\]

Intuitively, the score function quantifies the exponential correlation between input token $x$ and expert representation $e$, assigning higher scores to positive samples and lower scores otherwise. The following theorem shows that the proposed contrastive objective can serve as an estimate of the MI gap given sufficient negative samples.

Theorem 1 (InfoNCE). The MI gap $\Delta I = I_{\text{top-k}}(x, e^+) - I_{\neg\text{top-k}}(x, e^-)$ can be lower bounded by the contrastive objective, as follows:

\[
\Delta I \geq \log(N) - \mathcal{L}_{\text{NCE}}, \tag{9}
\]

where $N$ is the number of negative samples from the inactivated experts. $I_{\text{NCE}} = \log(N) - \mathcal{L}_{\text{NCE}}$ approximates the true MI gap as $N$ increases, and is a tight lower bound. Please refer to Appendix A for the full derivations.

To optimize the contrastive objective in Eq. 8, $h_1(\cdot)$ and $h_2(\cdot)$ are adopted to estimate the information density. Ideally, $h_1(\cdot)$ assigns high scores only to positive samples $(x, e^+)$, and $h_2(\cdot)$ assigns low scores only to negative samples $(x, e^-)$. Coincidentally, $(x, e^-) \sim \mathcal{D}_{\neg\text{top-k}}$ can serve as negative samples for the scoring function $h_1(\cdot)$, while $(x, e^+) \sim \mathcal{D}_{\text{top-k}}$ functions as positive samples for $h_2(\cdot)$, establishing a synergistic relationship between them. Thus, we integrate $h_1(\cdot)$ and $h_2(\cdot)$ into a single function $h(\cdot)$, which assigns high scores to $(x, e^+)$ and low scores to $(x, e^-)$. Then, we can simplify Eq. 8 into a new version:

\[
\hat{\mathcal{L}}_{\text{NCE}} = -\mathbb{E}_{p(x, e^+)} \mathbb{E}_{\mathcal{D}_{\neg\text{top-k}}} \left[ \log \frac{h(x, e^+)}{h(x, e^+) + \sum_{e^- \in \mathcal{D}_{\neg\text{top-k}}} h(x, e^-)} \right]. \tag{10}
\]

$(x, e_{\text{top-k}})$ and $(x, e_{\neg\text{top-k}})$ form a bidirectional sample pair, with each entity acting as both a positive and a negative reference to its counterpart. For implementation, we choose the exponential similarity function:

\[
h(x, e) = \exp\!\left(E^+(x) \cdot e / \tau\right), \tag{11}
\]

which is commonly used to measure the similarity between two representations. Here, $\tau$ is a temperature hyperparameter.
In optimization, the score function assigns high scores to the representations of activated experts and low scores to those of inactivated experts. The derived InfoNCE loss in Eq. 10 can be generalized to a common contrastive loss: for each query $q_i = E_i^+(x)$, its positive keys $k^+$ are obtained from the representations of the other activated experts, $k^+ \sim \mathcal{D}_{\text{top-}k},\ k^+ \neq q_i$, while negative keys $k^-$ are sampled from $k^- \sim \mathcal{D}_{\neg\text{top-}k}$. The contrastive loss can be written as:

$$\mathcal{L}_{\text{con}} = \sum_{i=1}^{k} -\log\left(\frac{\exp(q_i\cdot k_i^+/\tau)}{\exp(q_i\cdot k_i^+/\tau) + \sum_{k_i^-}\exp(q_i\cdot k_i^-/\tau)}\right), \tag{12}$$

where $k$ denotes the number of activated experts.
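The following PyTorch sketch shows one way Eq. 12 could be implemented for a single token. It is our reading of the loss, not the authors' released implementation; the representation normalization and the handling of multiple positive keys (with top-2 routing each query has exactly one positive) are assumptions, and `expert_reprs` is a hypothetical name for the per-expert outputs $E_j(x)$.

    import torch
    import torch.nn.functional as F

    def comoe_contrastive_loss(expert_reprs: torch.Tensor,
                               topk_idx: torch.Tensor,
                               tau: float = 0.1) -> torch.Tensor:
        # expert_reprs: (n_experts, D) representations E_j(x) for one token.
        # topk_idx: indices of the k activated (top-k) experts.
        reprs = F.normalize(expert_reprs, dim=-1)
        mask = torch.zeros(reprs.size(0), dtype=torch.bool)
        mask[topk_idx] = True
        pos, neg = reprs[mask], reprs[~mask]          # activated vs. inactivated experts
        loss = reprs.new_zeros(())
        for i in range(pos.size(0)):
            q = pos[i]                                 # query q_i = E_i^+(x)
            k_pos = torch.cat([pos[:i], pos[i + 1:]])  # positive keys: other activated experts
            s_pos = (k_pos @ q) / tau                  # (k-1,) similarities to positives
            s_neg = (neg @ q) / tau                    # (n-k,) similarities to negatives
            # one InfoNCE term per positive key, cf. the inner ratio of Eq. 12
            logits = torch.cat([s_pos.unsqueeze(1),
                                s_neg.unsqueeze(0).expand(s_pos.size(0), -1)], dim=1)
            loss = loss - F.log_softmax(logits, dim=1)[:, 0].sum()
        return loss

    # usage: 4 experts, 64-dim representations, top-2 routing
    reprs = torch.randn(4, 64, requires_grad=True)
    print(comoe_contrastive_loss(reprs, torch.tensor([0, 2])))

In a full model, a term of this form would be accumulated over the MoE-adapted layers and averaged over the tokens of a batch; how exactly the per-token representations are pooled is an implementation detail the paper defers to its appendix.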
Method | Params | ST/MT | ARC-e | ARC-c | BoolQ | OBQA | PIQA | Avg.
LoRA | 2.9% | ST | 73.8 | 50.9 | 62.2 | 80.4 | 82.1 | 69.9
LoRA | 2.9% | MT | 61.3 (-12.5) | 55.7 (+4.8) | 66.7 (+4.5) | 71.6 (-8.8) | 72.4 (-9.7) | 65.5 (-4.4)
DoRA | 2.9% | ST | 76.5 | 59.8 | 71.7 | 80.6 | 82.7 | 74.3
DoRA | 2.9% | MT | 64.5 (-12.0) | 54.1 (-5.7) | 65.4 (-6.3) | 75.8 (-4.8) | 71.9 (-10.8) | 66.3 (-8.0)
MOELoRA | 1.0% | ST | 76.8 | 60.2 | 72.0 | 81.1 | 82.7 | 74.6
MOELoRA | 1.0% | MT | 76.1 (-0.7) | 59.3 (-0.9) | 71.5 (+0.1) | 80.7 (-0.4) | 82.1 (-0.3) | 73.9 (-0.5)
MiLoRA | 0.93% | ST | 77.8 | 61.2 | 72.8 | 81.7 | 83.3 | 75.4
MiLoRA | 0.93% | MT | 77.4 (-0.4) | 61.5 (+0.3) | 72.3 (-0.3) | 81.3 (-0.4) | 83.5 (+0.3) | 75.2 (-0.1)
MixLoRA | 2.9% | ST | 78.4 | 56.1 | 72.7 | 81.6 | 83.2 | 74.4
MixLoRA | 2.9% | MT | 76.6 (-1.8) | 64.2 (+8.1) | 71.2 (-1.5) | 81.6 (-0.0) | 82.7 (-0.5) | 75.3 (+0.9)
OMoE-LoRA | 0.73% | ST | 79.3 | 56.6 | 73.5 | 80.6 | 84.5 | 74.9
OMoE-LoRA | 0.73% | MT | 79.8 (+0.5) | 66.8 (+10.2) | 72.4 (-1.1) | 76.8 (-3.8) | 81.6 (-2.9) | 75.4 (+0.5)
CoMoE-LoRA (ours) | 1.45% | ST | 80.3 | 57.3 | 72.9 | 80.4 | 83.6 | 74.9
CoMoE-LoRA (ours) | 1.45% | MT | 79.6 (-0.7) | 66.5 (+9.2) | 71.8 (-1.1) | 81.2 (+0.8) | 81.8 (-1.8) | 76.2 (+1.3)
Table 1: Overall comparison of different PEFT methods for multi-task learning. The backbone model is LLaMA-2 7B. ST refers to the single-task setting, while MT refers to the multi-task setting. Reported results are accuracy scores, with differences between MT and ST shown in parentheses.

Method | Params | ARC-e | ARC-c | BoolQ | OBQA | PIQA | SIQA | HellaS | WinoG
LoRA | 2.9% | 73.8 | 50.9 | 62.2 | 80.4 | 82.1 | 69.9 | 88.4 | 66.8
DoRA | 2.9% | 76.5 | 59.8 | 71.7 | 80.6 | 82.7 | 74.1 | 89.6 | 67.3
MixLoRA | 2.9% | 78.4 | 56.1 | 72.7 | 81.6 | 83.2 | 78.0 | 92.8 | 76.8
CoMoE-LoRA | 1.45% | 80.3 | 57.3 | 72.9 | 80.4 | 83.6 | 79.2 | 93.2 | 77.3
CoMoE-DoRA | 1.45% | 80.2 | 57.0 | 73.3 | 81.2 | 83.8 | 79.1 | 92.8 | 77.3
Table 2: Overall comparison of different PEFT methods for single-task learning, using base models with different numbers of parameters.

4.4 Training Approach

In standard supervised fine-tuning, the primary objective is to minimize the cross-entropy loss $\mathcal{L}_{\text{CE}}$ between predicted tokens and target tokens. As a core component of MoE, the top-$k$ router directly determines which experts are activated during inference. Based on our analysis and derivations in Sections 4.2 and 4.3, we incorporate the contrastive loss as an auxiliary optimization objective, with the complete workflow illustrated in Fig. 2. The contrastive loss effectively enhances the distinctiveness of experts, thereby promoting specialization and reducing redundancy among them. Incorporating the contrastive loss of Eq. 12, the total loss is computed as:

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{CE}} + \lambda\cdot\mathcal{L}_{\text{con}}, \tag{13}$$

where $\lambda$ is a hyperparameter that scales the auxiliary loss. Our method requires no pretraining and can be seamlessly integrated into existing MoE architectures. The complete algorithm pseudocode is provided in Appendix B.
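As a sketch of how Eq. 13 composes with standard supervised fine-tuning, the snippet below combines the cross-entropy term with the scaled contrastive term, reusing the comoe_contrastive_loss sketch above. It is an assumption-laden illustration: `lm_logits`, `labels`, and the per-layer routing outputs are hypothetical names, and the default $\lambda = 0.01$ follows the ablation reported in Section 5.3.

    import torch.nn.functional as F

    def comoe_total_loss(lm_logits, labels, expert_reprs_per_layer, topk_idx_per_layer,
                         lam: float = 0.01, tau: float = 0.1):
        # L_CE: standard next-token cross-entropy of supervised fine-tuning
        ce = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
        # L_con: Eq. 12 accumulated over the MoE-adapted layers (see sketch above)
        con = sum(comoe_contrastive_loss(r, t, tau)
                  for r, t in zip(expert_reprs_per_layer, topk_idx_per_layer))
        return ce + lam * con      # Eq. 13: L_total = L_CE + lambda * L_con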
5 Experiments

In this section, we conduct extensive experiments, coupled with ablation and visualization studies, to evaluate the effectiveness of CoMoE, accompanied by concise analyses.

5.1 Experimental Settings

Datasets and Benchmarks. We conduct experiments on a collection of tasks: (a) diverse commonsense reasoning datasets: ARC-e and ARC-c (Clark et al., 2018), OpenBookQA (OBQA) (Mihaylov et al., 2018), PIQA (Bisk et al., 2020), SocialIQA (SIQA) (Sap et al., 2019), and BoolQ (Clark et al., 2019); (b) a sentence completion task: HellaSwag (Zellers et al., 2019); (c) a fill-in-the-blank task: WinoGrande (Sakaguchi et al., 2021). We utilize the PEFT framework provided by (Hu et al., 2023; Li et al., 2024) for training on these datasets. We choose LLaMA-2 7B and Gemma 2B as our backbone models. The detailed statistics and evaluation metrics can be found in Appendix C.

Baselines. In this study, we compare our method with several popular and well-established baselines to assess its performance. For multi-task settings, we evaluate CoMoE against LoRA, its variants, and MoE-based methods, including: (1) LoRA (Hu et al., 2021); (2) DoRA (Liu et al., 2024), which decomposes LoRA weights into magnitude and direction; (3) MoELoRA (Liu et al., 2023), which decomposes a LoRA module into a mixture of experts; (4) MiLoRA (Zhang et al., 2024), which treats each LoRA module as an expert and employs a routing mechanism; (5) MixLoRA (Li et al., 2024), a resource-efficient sparse MoE model based on LoRA; (6) OMoE (Feng et al., 2025), which applies hard constraints to promote diversity. For single-task settings, we compare our approach with other PEFT baselines, including AdaLoRA (Zhang et al., 2023a), Parallel-Adapter (He et al., 2021), Learned-Adapter (Zhang et al., 2023b), P-tuning v2 (Liu et al., 2021), IAPT (Zhu et al., 2024), BitFit (Zaken et al., 2021), (IA)3 (Liu et al., 2022), and SSP (Hu et al., 2022). Most of the results are directly extracted from (Zhang et al., 2024), with a few baselines reproduced by running the provided source code. In single-task settings, we compare CoMoE with LoRA, DoRA, and MixLoRA as representative baselines, while comparisons with other methods are presented in Appendix E.

Implementation Details. To evaluate the effectiveness of CoMoE, we apply it on the basis of LoRA and DoRA, respectively, and label the resulting models CoMoE-LoRA and CoMoE-DoRA in the experiments. Unless otherwise specified, CoMoE is configured with $r = 16$, incorporating 4 experts and a top-2 router, applied to the q, k, v, and o parameters in the attention layers. For all settings, we adopt supervised fine-tuning only. Due to space limitations, the detailed experimental settings for baselines and hyperparameters are provided in Appendix D.

5.2 Main Results

Multi-task Setup. Table 1 summarizes the multi-task performance of CoMoE and the baselines on LLaMA-2 7B. The results verify that both LoRA and DoRA, lacking MoE structures, struggle to handle heterogeneous datasets, suffering a significant drop in multi-task settings (7%-12% degradation). MoE-based PEFT methods (MoELoRA, MiLoRA, MixLoRA, and OMoE) mitigate the performance degradation in multi-task settings but fail to fully leverage the modularity and specialization of MoE, leaving potential performance gains unexploited. In contrast, CoMoE introduces contrastive learning to promote the modularization and specialization of experts, enabling efficient utilization of the MoE's capacity. Thus, CoMoE not only improves parameter efficiency (reducing the number of experts) but also achieves an average accuracy gain of +1.3.

Figure 3: Performance under different coefficients $\lambda \in \{0, 0.001, 0.01, 0.1, 1.0\}$. (a) Single-task setting (accuracy on BoolQ). (b) Multi-task setting (average accuracy on ARC-c, ARC-e, BoolQ, and OBQA).

Method | ARC-e | ARC-c | BoolQ | OBQA | Avg.
CoMoE-△ | 80.5 | 67.0 | 72.4 | 78.4 | 74.6
CoMoE-▽ | 79.7 | 65.1 | 72.3 | 76.8 | 73.5
CoMoE-♢ | 77.9 | 65.3 | 72.1 | 77.6 | 73.2
CoMoE-▷◁ | 77.7 | 66.6 | 72.4 | 80.0 | 74.2
Table 3: Performance comparison of different CoMoE variants on four datasets.

Single-task Setup. In this setup, we compare the performance of CoMoE and the baselines in single-task settings. The experimental results are shown in Table 2; comparisons with other baselines are provided in Appendix E. CoMoE demonstrates superior parameter efficiency while maintaining comparable performance, with a reduction of approximately 50% in tunable parameters. Remarkably, the diversity among experts even yields accuracy improvements on a subset of datasets (e.g., ARC-e, BoolQ, and SIQA).

5.3 Ablation Studies and Further Analysis

Effects of the Hyperparameter λ. We evaluate $\lambda \in \{0.0, 0.001, 0.01, 0.1, 1.0\}$ on BoolQ and on the multi-task setting (ARC-c, ARC-e, BoolQ, and OBQA). The experimental results in Fig. 3(a) and (b) demonstrate that $\lambda = 10^{-2}$ yields optimal performance in both single-task and multi-task settings. In single-task configurations, model performance improves gradually as the value increases from 0 through 0.001 to 0.01, suggesting that even individual tasks can benefit from diversity. However, significant performance degradation occurs at higher values (0.1 and 1.0), indicating that excessive diversity impedes effective dataset adaptation. This phenomenon is more pronounced in multi-task settings, where the model shows amplified gains from diversity.

Effects of the Number of Experts n. The results are provided in Appendix F.
We can see that CoMoE benefits from more experts.

Figure 4: Comparison of the workloads of experts (per-task activation percentages for experts 1-4 on ARC-c, BoolQ, and OBQA) before and after contrastive loss incorporation in a multi-task setting. (a) Without contrastive loss. (b) With contrastive loss.

Figure 5: Comparison of expert representations in OBQA (layer 32, experts 1-4) before and after contrastive loss incorporation in a multi-task setting. (a) Without contrastive loss. (b) With contrastive loss.
Ablation on Different Backbones. We conduct multi-task experiments on Gemma 2B, with detailed results provided in Appendix G.

Model Efficiency. Please refer to Appendix H.

Layer-wise Diversity Analysis. Having established the benefit of expert diversity in multi-task settings, we naturally ask: which layers in LLMs benefit most from diverse experts? Using LLaMA-2 7B as a case study, we partition the large language model into three levels: low (layers 1 to 10), medium (layers 11 to 20), and high (layers 21 to 32), and inject CoMoE into them. Specifically, four types of layer-wise diversity configurations are explored: (1) CoMoE-△, applying CoMoE in the low layers; (2) CoMoE-▽, applying CoMoE in the high layers; (3) CoMoE-♢, applying CoMoE in the medium layers; (4) CoMoE-▷◁, applying CoMoE in the low and high layers. Our experiments in Table 3 reveal that CoMoE-△ and CoMoE-▷◁ achieve superior performance, outperforming CoMoE-▽ and CoMoE-♢ by 0.7-1.4 points of average accuracy across datasets. This performance gap stems from placing CoMoE in the lower transformer layers, which is critical for establishing diversity early in the processing hierarchy.

Analysis of Workload Balance. The primary distinction between CoMoE and vanilla MoE variants lies in the contrastive learning objective that promotes specialization and modularity among experts. To further elucidate the effectiveness of CoMoE, we conduct an in-depth analysis of expert activation in MoE under a multi-task setting. Fig. 4 presents a comparative visualization of expert workload distributions before and after introducing the contrastive loss. Without the contrastive loss, all tasks predominantly concentrate on experts 1 and 2, indicating insufficient differentiation and an imbalanced workload. After introducing the contrastive loss, distinct tasks exhibit marked differences in expert selection. For ARC-c, experts 1 and 3 show significantly increased activation frequency. For BoolQ, experts 1 and 4 form a stable collaborative relationship. Notably, our method does not incorporate any routing balance loss, yet this collaboration among experts emerges naturally.

Visualization of Expert Representations. To demonstrate the impact of the contrastive loss on expert diversity, we conduct a qualitative analysis by visualizing expert representations before and after its incorporation, as illustrated in Fig. 5. In multi-task settings, we choose the OBQA dataset as an example; visualizations of other datasets are provided in Appendix I. The results demonstrate that the absence of the contrastive loss leads to substantial redundancy among experts, compromising their discriminability. In contrast, introducing the contrastive loss fosters divergent experts, culminating in specialization and modularity.

6 Conclusion

In this paper, we focus on the problem of redundant knowledge learning in MoE, which leads to the underutilization of its capacity. To address this issue, we propose Contrastive Representation for MoE (CoMoE), a novel MoE variant that promotes modularization and specialization. Specifically, we
define a mutual information (MI) gap between activated and inactivated experts and approximate it through a contrastive objective. This objective effectively captures the MI gap and is incorporated into supervised fine-tuning as an auxiliary optimization term. Experiments on various tasks demonstrate that CoMoE outperforms the baselines in multi-task settings and enhances expert modularity.

Limitations

We show that our proposed method can significantly improve the specialization of experts and the performance of MoE in multi-task settings. However, there are several limitations to acknowledge: (a) the computational cost of CoMoE: CoMoE leverages inactivated experts as negative samples, and its computational cost grows as O(N); nevertheless, it substantially improves expert specialization and modularization, achieving remarkable performance with a limited number of experts; (b) other evaluation benchmarks were not considered; nevertheless, CoMoE is simple and can be easily integrated with different backbone models and various downstream tasks.

Ethics Statement

This paper proposes a novel method to enhance parameter-efficient fine-tuning based on the Mixture-of-Experts architecture, which simultaneously improves expert modularization and specialization while boosting performance on heterogeneous and complex datasets. The experiments employ benchmark datasets widely adopted in the research community that, to our knowledge, involve no privacy concerns or ethical controversies. The experiments are conducted on the open-source LLaMA-2 series of large language models. It should be emphasized that this work represents fundamental research focused exclusively on advancing MoE-based fine-tuning methods for LLMs, rather than developing applications.

References

Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432-7439.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457.

Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. UltraFeedback: Boosting language models with high-quality feedback.

Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, et al. 2023. LoRAMoE: Alleviate world knowledge forgetting in large language models via MoE-style plugin. arXiv preprint arXiv:2312.09979.

Jinyuan Feng, Zhiqiang Pu, Tianyi Hu, Dongmin Li, Xiaolin Ai, and Huimu Wang. 2025. OMoE: Diversifying mixture of low-rank adaptation by orthogonal finetuning. arXiv preprint arXiv:2501.10062.

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, and Maosong Sun.
2022. Sparse structure search for parameter-efficient tuning. arXiv preprint arXiv:2206.07382.

Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Lee. 2023. LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5254-5276.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Yao Fu, et al. 2024. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36.

Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87.

Siming Lan, Rui Zhang, Qi Yi, Jiaming Guo, Shaohui Peng, Yunkai Gao, Fan Wu, Ruizhi Chen, Zidong Du, Xing Hu, et al. 2024. Contrastive modules with temporal attention for multi-task reinforcement learning. Advances in Neural Information Processing Systems, 36.

Dengchun Li, Yingzi Ma, Naizheng Wang, Zhiyuan Cheng, Lei Duan, Jie Zuo, Cal Yang, and Mingjie Tang. 2024. MixLoRA: Enhancing large language models fine-tuning with LoRA-based mixture of experts. arXiv preprint arXiv:2404.15159.

Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.

Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950-1965.

Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, and Yefeng Zheng. 2023. MoELoRA: An MoE-based parameter-efficient fine-tuning method for multi-task medical applications. arXiv preprint arXiv:2310.18339.

Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. 2024. DoRA: Weight-decomposed low-rank adaptation. arXiv preprint arXiv:2402.09353.

Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602.

Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, and Kang Liu. 2024. MoELoRA: Contrastive learning guided mixture of experts on parameter-efficient fine-tuning for large language models. arXiv preprint arXiv:2402.12851.

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? A new dataset for open book question answering. arXiv preprint arXiv:1809.02789.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.

Chen Qian, Zihao Xie, Yifei Wang, Wei Liu, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2024. Scaling large-language-model-based multi-agent collaboration. arXiv preprint arXiv:2406.07155.

Adithya Renduchintala, Tugrul Konuk, and Oleksii Kuchaiev. 2023. Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying. arXiv preprint arXiv:2311.09578.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. WinoGrande: An adversarial Winograd schema challenge at scale.
Communications of the ACM, 64(9):99-106.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.

Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, and Yu Meng. 2024. Unchosen experts can contribute too: Unleashing MoE models' power by self-contrast. arXiv preprint arXiv:2405.14507.

Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, and Chengzhong Xu. 2024. HydraLoRA: An asymmetric LoRA architecture for efficient fine-tuning. arXiv preprint arXiv:2404.19245.

Pengfei Wang, Huanran Zheng, Silong Dai, Wenjing Yue, Wei Zhu, and Xiaoling Wang. 2024. TS-TCD: Triplet-level cross-modal distillation for time-series forecasting using large language models. arXiv preprint arXiv:2409.14978.

Xiaoyu Wen, Chenjia Bai, Kang Xu, Xudong Yu, Yang Zhang, Xuelong Li, and Zhen Wang. 2024. Contrastive representation for data filtering in cross-domain offline reinforcement learning. arXiv preprint arXiv:2405.06192.

Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, and Di Wang. 2024. MoRAL: MoE augmented LoRA for LLMs' lifelong learning. arXiv preprint arXiv:2402.11260.

Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830.

Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, and Wei Zhu. 2024. MiLoRA: Efficient mixture of low-rank adaptation for large language models fine-tuning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 17071-17084.

Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023a. AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512.

Yuming Zhang, Peng Wang, Ming Tan, and Wei Zhu. 2023b. Learned adapters are better than manually designed adapters. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7420-7437.

Zhen Zhang, Wei Zhu, Jinfan Zhang, Peng Wang, Rize Jin, and Tae-Sun Chung. 2022. PCEE-BERT: Accelerating BERT inference via patient and confident early exiting. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 327-338.

Wei Zhu, Aaron Xuxiang Tian, Congrui Yin, Yuan Ni, Xiaoling Wang, and Guotong Xie. 2024. IAPT: Instruction-aware prompt tuning for large language models. arXiv preprint arXiv:2405.18203.

A Theoretical Proof

Theorem 2 (InfoNCE). The MI gap $\Delta I = I_{\text{top-}k}(x, e^+) - I_{\neg\text{top-}k}(x, e^-)$ can be lower bounded by the contrastive objective, as

$$\Delta I \geq \log(N) - \mathcal{L}_{\text{NCE}}, \tag{14}$$

where $N$ is the number of negative samples from the inactivated expert dataset.

Proof. As illustrated in Eq. 8, we utilize $h_1(x, e^+) \propto p(e^+\mid x)/p(e^+)$ and $h_2(x, e^-) \propto p(e^-\mid x)/p(e^-)$ to approximate the information density ratio, which preserves the MI between $x$ and $M$.
Based on the standard derivations presented in InfoNCE (Oord et al., 2018) and (Wen et al., 2024), the contrastive objective can be rewritten as:

$$\begin{aligned}
\mathcal{L}_{\text{NCE}} &= -\,\mathbb{E}_{p(x,e^+)}\,\mathbb{E}_{\mathcal{D}_{\neg\text{top-}k}} \log\left[\frac{p(e^+\mid x)/p(e^+)}{p(e^+\mid x)/p(e^+) + \sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}} p(e^-\mid x)/p(e^-)}\right] \\
&= \mathbb{E}_{p(x,e^+)}\,\mathbb{E}_{\mathcal{D}_{\neg\text{top-}k}} \log\left[1 + \frac{p(e^+)}{p(e^+\mid x)}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}}\frac{p(e^-\mid x)}{p(e^-)}\right] \\
&= \mathbb{E}_{p(x,e^+)}\,\mathbb{E}_{\mathcal{D}_{\neg\text{top-}k}} \log\left[1 + N\,\frac{p(e^+)}{p(e^+\mid x)}\,\frac{1}{N}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}}\frac{p(e^-\mid x)}{p(e^-)}\right] \\
&\geq \mathbb{E}_{p(x,e^+)} \log\left[N\,\frac{p(e^+)}{p(e^+\mid x)}\,\frac{1}{N}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}}\frac{p(e^-\mid x)}{p(e^-)}\right] \\
&= \mathbb{E}_{p(x,e^+)} \log\left[\frac{1}{N}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}} N\,\frac{p(e^+)}{p(e^+\mid x)}\,\frac{p(e^-\mid x)}{p(e^-)}\right]
\end{aligned} \tag{15}$$
Applying Jensen's inequality to Eq. 15, noting that $\log$ is a concave function, we obtain:

$$\begin{aligned}
\mathcal{L}_{\text{NCE}} &\geq \mathbb{E}_{p(x,e^+)} \left[\frac{1}{N}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}} \log\left(N\,\frac{p(e^+)}{p(e^+\mid x)}\,\frac{p(e^-\mid x)}{p(e^-)}\right)\right] \\
&= \mathbb{E}_{p(x,e^+)} \left[\frac{1}{N}\sum_{e^-\in\mathcal{D}_{\neg\text{top-}k}} \left(\log N + \log\frac{p(e^+)}{p(e^+\mid x)} + \log\frac{p(e^-\mid x)}{p(e^-)}\right)\right] \\
&\approx \mathbb{E}_{p(x,e^+)} \left[\log N + \log\frac{p(e^+)}{p(e^+\mid x)} + \mathbb{E}_{e^-\in\mathcal{D}_{\neg\text{top-}k}} \log\frac{p(e^-\mid x)}{p(e^-)}\right] \\
&= \log N - I_{\text{top-}k} + I_{\neg\text{top-}k} \\
&= \log N - \Delta I
\end{aligned} \tag{16}$$

Thus, we prove the aforementioned theorem:

$$\Delta I \geq \log(N) - \mathcal{L}_{\text{NCE}}. \tag{17}$$

B Optimization Algorithm

To enhance specialization and modularization among experts, we devise Contrastive Representation for MoE, outlined in Algorithm 1. For each sample, one expert is randomly selected from its top-$k$ expert indices $T$ as the anchor. The remaining top-$k$ experts act as positives, while all other inactivated experts form negatives.

Algorithm 1: Contrastive Loss Computation among Experts (single-sample)
Input: top-$k$ expert indices $T \in \mathbb{N}^k$; expert representations $\{E_j(x)\}_{j=1}^n$, where $E_j(x) \in \mathbb{R}^D$; temperature $\tau$
Output: contrastive loss $\mathcal{L}_{\text{contrast}}$
1: $r \sim \mathcal{U}\{1, \dots, k\}$  // random anchor position
2: $a \leftarrow T[r]$  // anchor expert index
3: $q \leftarrow \mathrm{Normalize}(E_a(x))$
4: $P \leftarrow \{\mathrm{Normalize}(E_{T[j]}(x)) \mid j \neq r\}$  // positive set, size $k-1$
5: $N \leftarrow \{\mathrm{Normalize}(E_j(x)) \mid j \notin T\}$  // negative set, size $n-k$
6: $s_{\text{pos}} \leftarrow (q \cdot P^\top)/\tau$
7: $s_{\text{neg}} \leftarrow (q \cdot N^\top)/\tau$
8: $\text{logits} \leftarrow [s_{\text{pos}}, s_{\text{neg}}]$
9: $\mathcal{L}_{\text{contrast}} \leftarrow -\log\frac{\sum\exp(s_{\text{pos}})}{\sum\exp(\text{logits}) + \varepsilon}$
10: return $\mathcal{L}_{\text{contrast}}$

The procedure is:

1. Expert representation. Obtain the individual expert outputs $\{E_j(x)\}_{j=1}^n$, each with dimensionality $E_j(x) \in \mathbb{R}^D$.

2. Anchor selection. Uniformly sample an index $r$ from $\{1, \dots, k\}$. Define the anchor (query) vector as $q = \mathrm{Normalize}(E_{T[r]}(x))$.

3. Positive set. Aggregate the remaining $(k-1)$ expert representations, excluding the one indexed by $T[r]$ from the index set $T$, into a set $P$, applying normalization: $P = \{\mathrm{Normalize}(E_{T[j]}(x)) \mid j \neq r\}$.

4. Negative set. Collect and normalize representations from experts not included in the top-$k$ indices: $N = \{\mathrm{Normalize}(E_j(x)) \mid j \notin T\}$.

5. Similarity computation. Compute cosine similarities between the anchor vector $q$ and each representation in $P$ and $N$, scaled by the temperature $\tau$, yielding similarity scores $s_{\text{pos}} = (q \cdot P^\top)/\tau$ and $s_{\text{neg}} = (q \cdot N^\top)/\tau$.

6. InfoNCE loss. Concatenate the logits and compute the InfoNCE loss as
$$\mathcal{L}_{\text{contrast}} = -\log\frac{\sum\exp(s_{\text{pos}})}{\sum\exp([s_{\text{pos}}, s_{\text{neg}}]) + \varepsilon},$$
where $\varepsilon$ is a small positive value (e.g., $10^{-3}$) used to ensure numerical stability, avoiding computational issues caused by the denominator being zero.

C Datasets

Detailed information about the datasets used in the experiments is presented in Table 4. All datasets are downloaded from HuggingFace.

D Experimental Settings

Computing Infrastructure. We run all our experiments on NVIDIA A6000 (48GB) GPUs, using Python 3.10 and Ubuntu 20.04 on x86-64 CPUs.

Pretrained Backbones. The main experiments use recent open-source LLMs, LLaMA-2 7B and Gemma 2B, as the pretrained backbone models. When fine-tuning LLaMA-2 7B and Gemma 2B, we consider only the supervised fine-tuning setting.

Hyperparameters for CoMoE. In our experiments, unless otherwise specified, we set the hyperparameters as illustrated in Table 5. The table also includes the hyperparameters used by the baseline methods LoRA, DoRA, MixLoRA, MixDoRA, and OMoE-LoRA. Under this setting, CoMoE introduces approximately 1.45% tunable parameters to the LLaMA-2 7B backbone.
Descriptive Statistics about Results. We conduct experiments on all training settings using five different random seeds, and the final results represent the median accuracy within each setting.

E Additional Results on Other Baselines
In the main paper, we compared CoMoE with three widely recognized and well-performing baselines (LoRA, DoRA, and MixLoRA) using the LLaMA-2 7B model. In addition to the results shown in Table 2, we provide experimental results involving 11 additional strong baselines on the same LLaMA-2 7B backbone, as detailed in Table 9. The results demonstrate that CoMoE achieves significant improvements in both parameter efficiency and overall performance compared to these baselines.

Datasets | #train | #test | Type | Metrics
BoolQ | 9,427 | 3,270 | Text Classification | acc
OBQA | 4,957 | 500 | Question Answering | acc
ARC-e | 2,251 | 2,376 | Question Answering | acc
ARC-c | 1,119 | 1,172 | Question Answering | acc
PIQA | 16,100 | 1,840 | Question Answering | acc
SIQA | 33,410 | 1,954 | Question Answering | acc
HellaSwag | 39,905 | 10,042 | Sentence Completion | acc
WinoGrande | 9,248 | 1,267 | Fill in the Blank | acc
Table 4: The dataset statistics.

Hyperparameters | LoRA/DoRA | MixLoRA/MixDoRA | OMoE-LoRA | CoMoE
Cutoff Length | 512 (all methods)
Learning Rate | 2e-4 (all methods)
Optimizer | AdamW (all methods)
Batch size | 16 (all methods)
Accumulation Steps | 8 (all methods)
Dropout | 0.05 (all methods)
Epochs | 2 (all methods)
Where | Q, K, V, O, Up, Down, Gate (all methods)
LoRA Rank r | 80 | 16 | 16 | 16
LoRA Alpha α | 160 | 32 | 32 | 32
Experts | - | 8 | 2 | 4
Routing strategy | - | Top-2 routing | Soft routing | Top-2 routing
Table 5: Hyperparameter configurations of LoRA, DoRA, MixLoRA, MixDoRA, OMoE-LoRA, and CoMoE for fine-tuning LLaMA-2 7B on the datasets.

F Additional Results on Different Numbers of Experts n

We further compare the experimental results under different numbers of experts. In the multi-task setting, using the ARC-c, ARC-e, BoolQ, and OBQA datasets, the results are shown in Table 6. As the number of experts increases from 4 to 8, the model's performance stabilizes and shows modest improvement. While merely increasing the number of experts does not guarantee significant performance enhancement, expanding the expert pool in multi-task settings contributes to improved model stability and may lead to certain performance gains.

Number of Experts n | ARC-e | ARC-c | BoolQ | OBQA | Avg.
n = 4 | 80.0 | 66.6 | 71.2 | 77.6 | 73.9
n = 5 | 79.8 | 64.4 | 70.9 | 77.4 | 73.1
n = 6 | 80.6 | 64.5 | 73.1 | 76.0 | 73.6
n = 7 | 79.7 | 65.4 | 72.5 | 78.0 | 73.9
n = 8 | 79.1 | 64.7 | 72.0 | 80.8 | 74.2
Table 6: Accuracy results across different expert configurations (from 4 to 8 experts) on multi-task evaluation.

G Additional Results on Different Backbone Models

Our main experiments are conducted on LLaMA-2 7B. To demonstrate the adaptability of our method, we compare CoMoE with MixLoRA on the Gemma 2B backbone, as shown in Table 7. The results indicate that CoMoE still achieves performance gains over the baseline across different backbones.

H Model Efficiency and Computational Budget

To evaluate the model efficiency and computational budget of CoMoE, we compare it with the baselines (LoRA, DoRA, MixLoRA, OMoE) in three respects: inference latency, memory cost, and training time. We base our evaluation on the following three metrics: (a) the inference time required for generating responses (ms), (b) the GPU memory cost (MiB), and (c) the training time in multi-task settings (h). In the multi-task setting, ARC-e, ARC-c, BoolQ, and OBQA are trained simultaneously.
Method | ARC-e | ARC-c | BoolQ | OBQA | Avg.
MixLoRA | 22.4 | 24.0 | 62.2 | 27.6 | 34.1
CoMoE-LoRA | 25.7 | 23.7 | 62.2 | 25.2 | 34.2
Table 7: Comparison of MixLoRA and CoMoE in multi-task learning. The backbone model is Gemma 2B.

Method | Latency (ms) | Memory (MiB) | Training time (h)
LoRA | 2,096 | +1,630 | 1.8
DoRA | 1,748 | +2,184 | 1.7
MixLoRA | 4,217 | +1,776 | 2.2
OMoE (Top-2) | 4,863 | +1,776 | 2.3
CoMoE | 3,789 | +1,311 | 3.5
Table 8: The inference latency, memory cost, and training time of LLaMA-2 7B for generating a batch of responses using CoMoE and the baselines.

The results are provided in Table 8. From Table 8, we observe that compared to the well-performing MixLoRA, CoMoE achieves a 10% improvement in inference efficiency while reducing GPU memory usage by 465 MiB. In terms of training time, CoMoE requires 3.5 hours of training on an A6000 GPU under the multi-task setting. Although CoMoE increases the training burden, it does not compromise inference efficiency and simultaneously enhances model performance.

I Additional Visualization of Representations under Different Datasets

Our main experiment visualizes the expert representations on the OBQA dataset in multi-task settings before and after introducing the contrastive loss, as illustrated in Fig. 5. The visualization results demonstrate that the introduced contrastive loss promotes modularity among experts while preventing knowledge redundancy between them. Visualization results for other datasets (ARC-c and BoolQ) under the same model, provided in Fig. 6 and Fig. 7, yield similar conclusions.

Figure 6: Comparison of expert representations in ARC-c (layer 32, experts 1-4) before and after contrastive loss incorporation in a multi-task setting. (a) Without contrastive loss. (b) With contrastive loss.

Figure 7: Comparison of expert representations in BoolQ (layer 32, experts 1-4) before and after contrastive loss incorporation in a multi-task setting. (a) Without contrastive loss. (b) With contrastive loss.

Method | Params | ARC-e | ARC-c | BoolQ | OBQA | PIQA
Baselines:
Parallel-Adapter | 0.96% | 67.1 | 54.2 | 65.2 | 76.3 | 69.8
Learned-Adapter | 0.94% | 69.3 | 54.4 | 64.9 | 78.4 | 75.6
P-tuning v2 | 0.97% | 63.5 | 51.3 | 61.2 | 76.1 | 66.2
IAPT | 0.96% | 66.3 | 54.7 | 67.8 | 79.2 | 77.3
BitFit | 1.00% | 65.9 | 54.1 | 66.4 | 77.2 | 76.6
(IA)3 | 0.90% | 68.1 | 54.6 | 67.2 | 78.1 | 75.4
SSP | 0.93% | 71.6 | 57.6 | 69.6 | 79.5 | 79.7
AdaLoRA | 0.92% | 73.8 | 57.9 | 69.2 | 80.4 | 82.1
MOELoRA | 1.00% | 76.8 | 60.2 | 72.0 | 81.1 | 82.7
MiLoRA | 0.93% | 77.8 | 61.2 | 72.8 | 81.7 | 83.3
MiDoRA | 0.93% | 77.5 | 61.3 | 72.9 | 81.3 | 83.1
Our proposed methods:
CoMoE-LoRA | 1.45% | 79.0 | 55.5 | 72.9 | 80.0 | 83.6
CoMoE-DoRA | 1.45% | 78.7 | 57.0 | 72.9 | 78.4 | 83.8
Table 9: An overall comparison of different PEFT methods for single-task learning. Most results are extracted from the original papers or reproduced by running the provided source code. The backbone model is LLaMA-2 7B.
Teaching with Lies: Curriculum DPO on Synthetic Negatives for Hallucination Detection

Shrey Pandit*†, Ashwin Vinod†, Liu Leqi, Ying Ding
The University of Texas at Austin
Webpage: https://teachingwithlies.github.io/

Abstract

Aligning large language models (LLMs) to accurately detect hallucinations remains a significant challenge due to the sophisticated nature of hallucinated text. Recognizing that hallucinated samples typically exhibit higher deceptive quality than traditional negative samples, we use these carefully engineered hallucinations as negative examples in the DPO alignment procedure. Our method incorporates a curriculum learning strategy, gradually transitioning the training from easier samples, identified based on the greatest reduction in probability scores from independent fact-checking models, to progressively harder ones. This structured difficulty scaling ensures stable and incremental learning. Experimental evaluation demonstrates that our HaluCheck models, trained with the curriculum DPO approach and high-quality negative samples, significantly improve performance across various metrics, achieving improvements of up to 24% on difficult benchmarks like MedHallu and HaluEval. Additionally, HaluCheck models demonstrate robustness in zero-shot settings, significantly outperforming larger state-of-the-art models across various benchmarks.

1 Introduction

Large language models (LLMs) have achieved impressive performance across numerous NLP tasks, yet their deployment is limited by a tendency to produce fluent but factually incorrect "hallucinations." Such errors erode trust and carry serious risks in domains with LLM applications like healthcare (Singhal et al., 2022), software development (Krishna et al., 2024), and law (Lai et al., 2024). Although various detection and mitigation strategies, often based on external fact-checkers or simplistic negative samples, have been proposed, they struggle to identify sophisticated, plausibly crafted falsehoods.

*Corresponding author: shreypandit@utexas.edu
†Equal contribution

Figure 1: Illustration of the qualitative difference between standard negative samples used in conventional DPO alignment and our proposed method, which leverages carefully curated hallucinated answers as high-quality negative examples in DPO alignment.

To address these challenges, we introduce a novel alignment strategy leveraging Direct Preference Optimization (DPO) (Rafailov et al., 2023), enhanced through a curriculum learning approach (Bengio et al., 2009a; Elman, 1993a) specifically tailored for hallucination detection. Our approach incorporates high-quality hallucinated samples as negative samples in the alignment process, instead of the usual low-quality negative samples that are often selected from failed generations.

We introduce HaluCheck, a family of hallucination detection LLMs at two scales, aligned via our curriculum-based DPO framework. We conduct extensive evaluations on the MedHallu (Pandit et al., 2025) and HaluEval (Li et al., 2023) benchmarks and zero-shot evaluation on DROP, CovidQA, and PubMedQA, demonstrating that HaluCheck substantially outperforms existing baselines, including the widely adopted Llama-3.2 (1B and 3B) models. Notably, HaluCheck 3B yields up to a 24% relative gain across core detection metrics (accuracy, precision, recall, and F1-score), while remaining competitive with far larger models such as GPT-4o.
Figure 2: Pipeline for selecting high-quality hallucinated negatives for Direct Preference Optimization (DPO). Each question and context is paired with a hallucinated answer and scored for grounded factuality via MiniCheck, then
ranked by difficulty. In each batch, gold references (chosen) and top-ranked hallucinations (rejected) form preference pairs. These pairs optimize the DPO objective, ensuring training against vetted, high-quality negatives rather than arbitrary failures.

Our contributions are summarized as follows:

1. We introduce a novel curriculum-based sampling strategy that progressively selects hallucinated samples of increasing difficulty, ranked with fact-verification models, to enhance alignment training.

2. We introduce HaluCheck, a suite of 1B-3B parameter models aligned with our DPO curriculum that leverages high-quality negative samples to deliver hallucination detection gains outperforming state-of-the-art LLMs.

3. Results demonstrate strong transferability of HaluCheck across multiple benchmarks and domains (Sec. 5), including zero-shot evaluation, confirming robustness on the hallucination detection task across diverse datasets.

2 Related Works

Finetuning Models for Hallucination Detection. Recent research shows that both model-centric fine-tuning and sampling-based methods effectively detect hallucinations. LYNX (Ravi et al., 2024), an open-source detector refined with distilled chain-of-thought reasoning, outperforms closed-source alternatives and provides HaluBench (Ravi et al., 2024), a diverse benchmark of semantically perturbed hallucinations. FACTCHECKMATE (Alnuhait et al., 2024) preemptively flags hallucination risks via a lightweight MLP on hidden states and uses an intervention network to boost factuality with minimal overhead. SelfCheckGPT (Manakul et al., 2023) requires no output probabilities or external knowledge: it samples multiple outputs and applies consistency measures such as BERTScore (Zhang et al., 2019a) at both sentence and passage levels. Existing work does not exploit alignment methods such as DPO (Rafailov et al., 2023), despite their proven effectiveness. We introduce the first DPO approach that leverages curated hallucinated negatives, markedly improving hallucination detection.

Hallucination Detection Task. Hallucination in large language models (LLMs) has been extensively documented across various natural language processing tasks, such as machine translation (Lee et al., 2019), dialogue systems (Balakrishnan et al., 2019), text summarization (Durmus et al., 2020), and question answering (Sellam et al., 2020), as detailed in recent survey literature (Ji et al., 2023). Benchmarks like Hades (Liu et al., 2022) and HaluEval (Li et al., 2023) offer strong hallucination-detection protocols, and MedHallu (Pandit et al., 2025) provides carefully crafted adversarial answers that are ideal for our alignment approach.

Model | Average F1 | MedHallu F1 | MedHallu Precision | MedHallu Accuracy | HaluEval F1 | HaluEval Precision | HaluEval Accuracy
Qwen-2.5 1.5B | 0.464 | 0.227 | 0.642 | 0.525 | 0.701 | 0.568 | 0.610
LLama-3.2 1B | 0.237 | 0.108 | 0.406 | 0.494 | 0.366 | 0.450 | 0.466
Qwen-2.5 3B | 0.638 | 0.606 | 0.495 | 0.492 | 0.671 | 0.506 | 0.512
LLama-3.2 3B | 0.612 | 0.499 | 0.696 | 0.566 | 0.726 | 0.743 | 0.732
LLama-3.1 8B | 0.571 | 0.522 | 0.791 | 0.608 | 0.620 | 0.903 | 0.711
Qwen-2.5 14B | 0.720 | 0.619 | 0.691 | 0.633 | 0.821 | 0.862 | 0.829
GPT-4o | 0.799 | 0.737 | 0.723 | 0.772 | 0.862 | 0.896 | 0.867
HalluCheck-Llama 1B | 0.637 | 0.664 | 0.511 | 0.527 | 0.611 | 0.481 | 0.468
HalluCheck-Llama 3B | 0.756 | 0.759 | 0.845 | 0.782 | 0.753 | 0.857 | 0.767
Table 1: Performance comparison of various models on the MedHallu (Pandit et al., 2025) and HaluEval (Li et al., 2023) hallucination detection benchmarks. Our proposed HaluCheck variants (1B and 3B) consistently outperform significantly larger foundation models.
Notably, HaluCheck 3B demonstrates superior or comparable performance across both benchmarks, highlighting its efficiency and effectiveness despite its smaller size.
For the purpose of this work, we choose MedHallu and HaluEval for the DPO alignment, as they contain high-quality hallucinated samples. Our proposed method is task-agnostic and can be extended to other hallucination detection tasks, such as summarization and dialogue settings.

3 Hallucination Detection and Alignment

Problem formulation. For each sample $i$, let $x^{(i)}$ denote the detection prompt (context + question + task instruction), $y^{(i)}_{\text{hall}}$ the hallucinated-class completion, and $y^{(i)}_{\text{true}}$ the factual-class completion. We define $l^{(i)} \in \{0, 1\}$ as the gold label, where a value of 1 indicates hallucination. From every labelled example we obtain a preference pair $(x^{(i)}, y^{(i)}_w, y^{(i)}_l)$, where

$$\big(y^{(i)}_w,\ y^{(i)}_l\big) = \big(y^{(i)}_{\text{true}},\ y^{(i)}_{\text{hall}}\big).$$

MiniCheck-based grounding difficulty scoring. Before curriculum partitioning, we evaluate how well each hallucinated output is supported by its context using MiniCheck (Tang et al., 2024). For each example $(x^{(i)}, y^{(i)}_{\text{hall}})$, we treat question $= y^{(i)}_{\text{hall}}$ and context $= x^{(i)}$, and compute the grounding probability

$$p^{(i)}_l = \mathcal{F}\big(\text{question} = y^{(i)}_{\text{hall}} \mid \text{context} = x^{(i)}\big).$$

We then use $p^{(i)}_l$ to score difficulty and drive our curriculum stages: after sorting all examples by $p^{(i)}_l$ (ascending), we split them into $S$ bins $\{B_s\}_{s=1}^S$. A lower $p_l$ indicates an easier hallucination case, ensuring the curriculum starts with easy (low-grounding) examples and gradually moves to harder ones.

DPO objective for hallucination detection. Let $\pi_\theta$ be the current policy and $\pi_{\text{ref}}$ the frozen reference model. With trust-region parameter $\beta$ and $\sigma(z) = 1/(1 + e^{-z})$, the batch loss is:

$$\mathcal{L}_{\text{DPO}}(\theta) = -\sum_{(x, y_w, y_l)\in B} \log\sigma\Big(\beta\big(\log\pi_\theta(y_w\mid x) - \log\pi_\theta(y_l\mid x)\big) - \beta\big(\log\pi_{\text{ref}}(y_w\mid x) - \log\pi_{\text{ref}}(y_l\mid x)\big)\Big). \tag{1}$$

We provide a detailed algorithm for this pipeline in the supplementary material (Alg. 1).
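The following Python sketch (ours, not the released code) shows one way the pipeline could be wired together: score each hallucinated answer with a fact-checker to obtain $p_l$, sort ascending so stages run from easy to hard, and minimize the DPO loss of Eq. 1 within each stage. Here `fact_checker`, `policy_logp`, and `ref_logp` are assumed callables standing in for MiniCheck and for sequence log-probabilities under $\pi_\theta$ and $\pi_{\text{ref}}$; they are not actual library APIs.

    import torch
    import torch.nn.functional as F

    def build_curriculum(examples, fact_checker, n_stages):
        # p_l = F(question = y_hall | context = x); lower score = easier case
        scored = sorted(examples, key=lambda ex: fact_checker(ex["x"], ex["y_hall"]))
        step = max(1, len(scored) // n_stages)
        return [scored[i:i + step] for i in range(0, len(scored), step)]  # stages B_1..B_S

    def dpo_loss(policy_logp, ref_logp, batch, beta=0.1):
        # Eq. 1: -sum log sigma( beta * (delta_theta - delta_ref) )
        losses = []
        for ex in batch:
            x, y_w, y_l = ex["x"], ex["y_true"], ex["y_hall"]  # chosen = factual, rejected = hallucinated
            d_theta = policy_logp(y_w, x) - policy_logp(y_l, x)
            d_ref = ref_logp(y_w, x) - ref_logp(y_l, x)
            losses.append(-F.logsigmoid(beta * (d_theta - d_ref)))
        return torch.stack(losses).sum()

Training would then loop over the stages in order, calling dpo_loss on batches drawn from one stage at a time, which mirrors the stage-wise fine-tuning in Alg. 1.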
4 Experimental Setup

We describe the setup below; detailed sections are provided in the supplementary material (C and D).

Model & Datasets. We fine-tune Llama-3.2 backbones (1B and 3B parameters) with LoRA adapters under the Direct Preference Optimization objective, using a joint corpus drawn from MedHallu and HaluEval. Hallucination detection is cast as binary classification via task-specific prompts.

Sampling Strategy & Curriculum Learning. Negative examples are high-quality hallucinations scored by the MiniCheck fact-verifier. We sort them by decreasing MiniCheck confidence drop and train with a curriculum that proceeds from the easiest to the hardest negatives, yielding smoother and more robust convergence.

5 Results

In the upcoming sections: in Sec. 5.1, ➊ we demonstrate that our HaluCheck models (1B and 3B) significantly outperform foundation LLMs despite their smaller size; in Sec. 5.2, we further show that ➋ HaluCheck generalizes effectively to unseen datasets in a zero-shot setting, clearly outperforming its baseline model; in Sec. 5.3, we validate the importance of using curated hallucinated samples rather than standard failed generations as negatives in DPO, showing that ➌ our model trained with curated hallucinated answers as negatives achieves superior performance. Finally, in Sec. A.1 and A.2, we conduct ablations demonstrating HaluCheck's superior transferable skills when trained on individual datasets, and highlight the benefits of curriculum-based sampling over random selection.

5.1 HaluCheck vs. Baseline

As presented in Table 1, HaluCheck 3B, trained with DPO using hallucinated answers as high-quality negative samples, significantly outperforms similarly sized and larger models. On HaluEval, it achieves an F1-score of 0.753, surpassing the baseline LLama-3.2 3B (F1: 0.726). On MedHallu, it outperforms the base model with a +26% F1 gain. Similarly, HaluCheck 1B shows strong performance on MedHallu (F1: 0.711), while the baseline LLama-3.2 1B lags behind (F1: 0.366). ➊ These results highlight our curriculum-based DPO approach's efficacy in enhancing hallucination detection while maintaining computational efficiency.

5.2 Zero-shot Evaluation

To gauge out-of-domain robustness, we ran a strict zero-shot test of HaluCheck 3B, without any extra tuning or prompt changes, against the backbone model Llama-3.2 3B and the much larger GPT-3.5-Turbo on three external QA-style hallucination benchmarks taken from the HaluBench dataset (Ravi et al., 2024): DROP (Dua et al., 2019), CovidQA (Möller et al., 2020), and PubMedQA (Jin et al., 2019). As shown in Table 2, HaluCheck 3B outperforms the Llama-3.2 3B model across the board, improving accuracy by +4.8%, +6.4%, and +2.5% on the respective datasets, and also outperforming GPT-3.5-Turbo on CovidQA by a substantial margin. ➋ These consistent gains affirm that our curriculum-based DPO alignment, using hallucinated samples as high-quality negatives, confers transferable hallucination detection skills that scale to unseen datasets.

Model | DROP | CovidQA | PQA | Avg
Llama 3.2 3B | 52.50 | 56.10 | 55.20 | 54.60
HaluCheck 3B | 57.30 | 62.50 | 57.70 | 59.16
GPT-3.5-Turbo | 57.20 | 56.70 | 62.80 | 58.90
Table 2: Accuracy (%) on DROP, CovidQA, and PQA (PubMedQA) for the baseline Llama 3.2 3B, our HaluCheck 3B, and GPT-3.5-Turbo (results from HaluBench (Ravi et al., 2024)). The results indicate strong performance of HaluCheck in the zero-shot setting.

5.3 DPO Using Hallucinated vs. Standard Negative Samples

We show the importance of choosing curated hallucinated answers as negative samples for DPO alignment by comparing against a Llama-3.2 3B model trained with standard negative samples. We obtain these standard negatives by querying an LLM for each question and keeping the failed answers, as is commonly done for DPO. We report the results in Table 7, which clearly indicate that ➌ HaluCheck outperforms the latter model. To further support this choice, we report the grounded factuality scores of the hallucinated answers from MedHallu and of the standard negative samples we created in Table 3, showing the superiority of these samples as negatives for DPO.

Sample Type | Easy (Mean / Median) | Medium (Mean / Median) | Hard (Mean / Median)
Standard Negative | 0.282 / 0.202 | 0.273 / 0.201 | 0.248 / 0.182
Our Hallucinated | 0.303 / 0.202 | 0.379 / 0.269 | 0.391 / 0.294
Table 3: Grounded factuality scores (MiniCheck true_prob; higher is harder to spot) for standard negatives versus our curated hallucinated negatives, averaged over difficulty tiers for the MedHallu dataset. The curated set provides consistently higher means and medians, confirming its superiority as training negatives for DPO.

6 Conclusion

We present HaluCheck, a curriculum-guided Direct Preference Optimization (DPO) framework for training an LLM for the task of reliable hallucination detection. A key contribution lies in replacing generic, model-generated
failures with carefully curated, difficulty-ranked hallucinated samples as negative preferences during DPO alignment. This structured curriculum yields consistent gains, outperforming larger state-of-the-art models on multiple benchmarks and zero-shot tasks. Ablation results further validate that difficulty-aware negative sampling markedly strengthens the robustness of smaller language models.

Limitations

Our proposed approach, while effective, exhibits certain limitations worth acknowledging. The curriculum-based Direct Preference Optimization (DPO) heavily relies on the quality and accuracy of the external fact-verification model (MiniCheck), potentially propagating any inherent biases or inaccuracies into our training process. Furthermore, our evaluations primarily focus on hallucinations within question-answering contexts, leaving unexplored the effectiveness in other NLP tasks such as dialogue generation, summarization, or multilingual settings. Additionally, treating hallucination detection purely as a binary classification task restricts the model's ability to identify partial or span-level hallucinations, thus limiting fine-grained interpretability. Lastly, although zero-shot evaluations suggest good generalization, there remains a risk of overfitting to dataset-specific adversarial patterns used during training, which may affect broader applicability and robustness.

Ethics Statement

Our work develops HaluCheck to improve reliable detection of hallucinations in LLM outputs, with the goal of reducing the risk of disseminating misleading or harmful information. Our work uses the publicly available MedHallu and HaluEval data under MIT licenses. We acknowledge that our reliance on an external fact-verification model may introduce its own biases, and users should avoid treating automated detectors as infallible; human oversight remains essential, especially in high-stakes domains like healthcare or law. We encourage ongoing evaluation for fairness and transparency, and recommend that practitioners combine our approach with diverse verification methods to mitigate unintended biases or misuse.

References

Deema Alnuhait, Neeraja Kirtane, Muhammad Khalifa, and Hao Peng. 2024. FactCheckMate: Preemptively detecting and mitigating hallucinations in LMs. arXiv preprint arXiv:2410.02899.

Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831-844, Florence, Italy. Association for Computational Linguistics.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009a. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41-48.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009b. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41-48.

Monojit Choudhury, Kalika Bali, Sunayana Sitaram, and Ashutosh Baheti. 2017. Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks. In Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017), pages 65-74.

Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055-5070, Online.
Association for Computational Linguistics.

Jeffrey L Elman. 1993a. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99.

Jeffrey L Elman. 1993b. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, and 1 others. 2022. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146.

Madhava Krishna, Bhagesh Gaur, Arsh Verma, and Pankaj Jalote. 2024. Using LLMs in software requirements specifications: An empirical evaluation. In 2024 IEEE 32nd International Requirements Engineering Conference (RE), pages 475-483.

Jinqi Lai, Wensheng Gan, Jiayang Wu, Zhenlian Qi, and Philip S Yu. 2024. Large language models in law: A survey. AI Open.

Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2019. Hallucinations in neural machine translation.

Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. HaluEval: A large-scale hallucination evaluation benchmark for large language models. Preprint, arXiv:2305.11747.

Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, and Bill Dolan. 2022. A token-level reference-free hallucination detection benchmark for free-form text generation. Preprint, arXiv:2104.08704.

Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.

Meta. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Timo Möller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020. COVID-QA: A question answering dataset for COVID-19. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.

OpenAI: Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.

Shrey Pandit, Jiawei Xu, Junyuan Hong, Zhangyang Wang, Tianlong Chen, Kaidi Xu, and Ying Ding. 2025. MedHallu: A comprehensive benchmark for detecting medical hallucinations in large language models. Preprint, arXiv:2502.14302.

Pulkit Pattnaik, Rishabh Maheshwary, Kelechi Ogueji, Vikas Yadav, and Sathwik Tejaswi Madhusudhan. 2024. Curry-DPO: Enhancing alignment using curriculum learning & ranked preferences. arXiv preprint arXiv:2403.07230.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728-53741.

Selvan Sunitha Ravi, Bartosz Mielczarek, Anand Kannappan, Douwe Kiela, and Rebecca Qian. 2024. Lynx: An open source hallucination evaluation model. arXiv preprint arXiv:2407.08488.

Mrinmaya Sachan and Eric Xing. 2016. Easy questions first?
a case study
https://arxiv.org/abs/2505.17558v1
Mrinmaya Sachan and Eric Xing. 2016. Easy questions first? A case study on curriculum learning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 453–463.
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: Learning robust metrics for text generation. Preprint, arXiv:2004.04696.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2022. Large language models encode clinical knowledge. arXiv preprint arXiv:2212.13138.
Liyan Tang, Philippe Laban, and Greg Durrett. 2024. MiniCheck: Efficient fact-checking of LLMs on grounding documents. arXiv preprint arXiv:2404.10774.
Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, and Aston Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives. arXiv preprint arXiv:1905.10847.
Qwen Team. 2024. Qwen2.5: A party of foundation models.
Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095–6104.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019a. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019b. Curriculum learning for domain adaptation in neural machine translation. arXiv preprint arXiv:1905.05816.

A Ablations

A.1 Training on individual datasets

Only Train on MedHallu. When we fine-tune the HaluCheck-Llama-3B detector exclusively on the MedHallu DPO set, the model achieves strong in-domain performance, with an F1 of 0.729, precision of 0.892, and accuracy of 0.784 on the MedHallu benchmark. However, this specialization comes at the expense of generalization: when evaluated on HaluEval, the same model's F1 drops to 0.627, precision to 0.578, and accuracy to 0.593. These results demonstrate that training solely on one dataset leads to overfitting to its particular style and content, limiting cross-dataset transfer.

Only Train on HaluEval. Conversely, training exclusively on the HaluEval DPO set yields a model that excels on HaluEval (F1 = 0.793, precision = 0.794, accuracy = 0.793) but underperforms on MedHallu (F1 = 0.675, precision = 0.623, accuracy = 0.644). Although the in-domain metrics on HaluEval are the highest among the single-dataset trainings, the drop in MedHallu performance again highlights the narrow adaptation of the model to the peculiarities of its training set.

Training on each dataset in isolation thus yields high in-domain accuracy but poor transfer. In contrast, combining both DPO sets produces a model that maintains strong performance across MedHallu and HaluEval, underscoring the importance of diverse hallucination examples for robust detector alignment.

Figure 3: Grounded factuality of the hallucinated samples from the MedHallu dataset. We keep only the samples with a score above 0.25.

Algorithm 1: Curriculum-Based DPO Alignment for Hallucination Detection
Require: detection data {(x^(i), y_true^(i), y_hall^(i), l^(i))}_{i=1}^N; fact-checker F (MiniCheck, returning a grounding probability); policy π_θ; frozen reference policy π_ref; number of stages S
Ensure: fine-tuned detector π_θ
1: # Score difficulty
2: for each (x, y_true, y_hall, l) do
3:   p_l ← F(y_l | x)
4: end for
5: # Partition into stages
6: sort by p_l (ascending) and split into {B_s}_{s=1}^S
7: # Generate preference pairs
8: for i = 1, ..., N do
9:   y_w^(i) ← y_true^(i); y_l^(i) ← y_hall^(i); store (x^(i), y_w^(i), y_l^(i))
10: end for
11: # Stage-wise DPO fine-tuning
12: for s = 1, ..., S do
13:   δ_θ(x, y_w, y_l) = log π_θ(y_w | x) − log π_θ(y_l | x)
14:   δ_ref(x, y_w, y_l) = log π_ref(y_w | x) − log π_ref(y_l | x)
15:   minimize L_DPO(θ) = −Σ_{(x, y_w, y_l) ∈ B_s} log σ( β δ_θ(x, y_w, y_l) − β δ_ref(x, y_w, y_l) )
16: end for
17: return π_θ
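To make the data side of Algorithm 1 concrete, here is a minimal Python sketch of the scoring, filtering, sorting, and stage-partitioning steps. The function `score_grounding` stands in for the MiniCheck call, and every name here is illustrative, not the authors' code.

```python
# Sketch of Algorithm 1's data preparation: score each hallucinated answer
# with a fact-checker, keep pairs inside the chosen grounding range, sort
# easy-to-hard, and split the preference pairs into curriculum stages.
from typing import Callable, List, Tuple

Example = Tuple[str, str, str]  # (context x, correct y_true, hallucinated y_hall)

def build_curriculum(
    data: List[Example],
    score_grounding: Callable[[str, str], float],  # p = F(y_hall | x)
    num_stages: int,
    keep_range: Tuple[float, float] = (0.25, 1.0),  # cut-offs from Appendix C.2
) -> List[List[Example]]:
    scored = []
    for x, y_true, y_hall in data:
        p = score_grounding(x, y_hall)
        if keep_range[0] <= p <= keep_range[1]:
            # preference pair: chosen = y_true, rejected = y_hall
            scored.append((p, (x, y_true, y_hall)))
    # low grounding probability = obviously wrong negative = easy; train on these first
    scored.sort(key=lambda item: item[0])
    n = len(scored)
    return [
        [pair for _, pair in scored[s * n // num_stages : (s + 1) * n // num_stages]]
        for s in range(num_stages)
    ]
```

Each resulting stage B_s would then drive one round of DPO fine-tuning, easiest negatives first.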
A.2 Random vs Curriculum learning DPO

As Table 5 shows, replacing the usual random selection of negative samples with a curriculum that feeds the model increasingly difficult hallucinations produces a clear performance boost on both benchmarks and at both parameter scales. With just 1B parameters, curriculum-guided DPO lifts F1 on MedHallu from 0.528 for the random baseline to 0.664, and on HaluEval from 0.446 to 0.611, gains that transform a lightweight detector from marginal to competitive accuracy. The effect is even more pronounced at 3B: curriculum training drives MedHallu F1 to 0.759 and HaluEval F1 to 0.753, surpassing the random counterpart by a wide margin and closing much of the gap to models an order of magnitude larger. These results confirm the intuition that hard, well-vetted negatives presented in a staged fashion teach the model subtler decision boundaries than a grab-bag of arbitrary failures, leading to more robust hallucination detection with no increase in parameter count or compute budget.

| Model | MedHallu DPO | HaluEval DPO | MedHallu F1 | MedHallu Prec. | MedHallu Acc. | HaluEval F1 | HaluEval Prec. | HaluEval Acc. |
| HaluCheck-Llama 3B | ✓ | ✗ | 0.729 | 0.892 | 0.784 | 0.627 | 0.578 | 0.593 |
| HaluCheck-Llama 3B | ✗ | ✓ | 0.675 | 0.623 | 0.644 | 0.793 | 0.794 | 0.793 |
| HaluCheck-Llama 3B | ✓ | ✓ | 0.759 | 0.845 | 0.782 | 0.733 | 0.857 | 0.767 |
Table 4: Performance when training with different train sets.

| Model | MedHallu F1 | HaluEval F1 |
| HaluCheck 1B (Random) | 52.80 | 44.60 |
| HaluCheck 1B (Curr.) | 66.40 | 61.10 |
| HaluCheck 3B (Random) | 69.40 | 63.10 |
| HaluCheck 3B (Curr.) | 75.90 | 75.30 |
Table 5: F1 comparison of curriculum-guided vs. random sampling for HaluCheck models on MedHallu and HaluEval.

B Additional Related Works

Curriculum learning. Curriculum learning is a training paradigm that strategically presents data samples in a meaningful sequence, effectively managing and optimizing the information a model encounters at each training step (Elman, 1993b; Bengio et al., 2009b). Research has demonstrated the effectiveness of progressing from simple to complex examples across various NLP tasks, including language modeling (Choudhury et al., 2017; Xu et al., 2020), reading comprehension (Tay et al., 2019), question answering (Sachan and Xing, 2016), and machine translation (Zhang et al., 2019b). In the context of LLM alignment, curriculum learning applications remain limited, with Pattnaik et al. (2024) applying curriculum learning principles within the DPO framework for alignment.
C Detailed experimental setup

C.1 Model and Dataset Details

We adopt the publicly released Llama-3.2 checkpoints at two scales (1B and 3B parameters). LoRA hyper-parameters follow Hu et al. (2022): rank = 8, α = 32, dropout = 0.05, and target modules q_proj, k_proj, v_proj, and o_proj. Training data comprise 9,000 examples from MedHallu's pqa_artificial split plus 8,000 items (80%) from the HaluEval training partition, forming 17,000 DPO preference pairs. Evaluation is conducted on the 1,000-example MedHallu pqa_labeled set and the held-out 2,000 HaluEval test items.

C.2 Curriculum Construction

For every hallucinated answer h_i paired with context c_i, the MiniCheck verifier returns a grounding probability p_i. Examples with p_i < 0.25 (very poor grounding) are discarded. The remainder are sorted by ascending values of p_i. DPO training proceeds batch-wise on the sorted data for four epochs, with all batches trained per epoch, thereby gradually exposing the model to increasingly difficult negatives. Table 6 reports ablations over alternative cut-offs; the chosen 0.25–1.0 range yields the highest F1 scores, consistent with the grounded factuality distribution visualized in Figure 3.

| Split Range | Model | Avg F1 | MedHallu F1 | MedHallu Prec. | MedHallu Acc. | HaluEval F1 | HaluEval Prec. | HaluEval Acc. |
| 0.00–0.75 | HaluCheck 1B | 0.499 | 0.404 | 0.717 | 0.596 | 0.595 | 0.491 | 0.458 |
| 0.00–0.75 | HaluCheck 3B | 0.714 | 0.729 | 0.892 | 0.784 | 0.699 | 0.812 | 0.728 |
| 0.25–1.00 | HaluCheck 1B | 0.637 | 0.664 | 0.511 | 0.527 | 0.611 | 0.481 | 0.468 |
| 0.25–1.00 | HaluCheck 3B | 0.756 | 0.759 | 0.845 | 0.782 | 0.753 | 0.857 | 0.767 |
| 0.25–0.75 | HaluCheck 1B | 0.625 | 0.651 | 0.501 | 0.511 | 0.599 | 0.512 | 0.469 |
| 0.25–0.75 | HaluCheck 3B | 0.712 | 0.696 | 0.727 | 0.704 | 0.728 | 0.824 | 0.739 |
| 0.00–1.00 | HaluCheck 1B | 0.614 | 0.622 | 0.601 | 0.459 | 0.606 | 0.494 | 0.455 |
| 0.00–1.00 | HaluCheck 3B | 0.743 | 0.743 | 0.905 | 0.770 | 0.744 | 0.829 | 0.759 |
Table 6: Ablation over curriculum difficulty cut-offs. Each split indicates the MiniCheck grounding-probability interval used when selecting hallucinated negatives. "Avg F1" is the mean F1 score across MedHallu and HaluEval; higher is better for all metrics.

D Implementation details

Training was performed using Direct Preference Optimization (DPO) with the following hyperparameters: learning rate = 1×10⁻⁵, β = 0.1, gradient accumulation steps = 4, per-device batch size = 4, and 25 total epochs. We used a paged AdamW optimizer with 8-bit quantization and mixed-precision (FP16) training for computational efficiency. Sequential sampling was used during training to maintain the curriculum learning order. The model's performance was periodically assessed on the MedHallu labeled validation set. Evaluation metrics included accuracy, precision, recall, and F1-score, computed both overall and separately by difficulty (easy, medium, hard).
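The setup in C.1 and D maps naturally onto off-the-shelf tooling. The following is a minimal sketch using the Hugging Face peft and trl libraries; the checkpoint name, the placeholder dataset, and the exact config fields are assumptions (APIs shift across trl versions), not the authors' code.

```python
# Sketch of LoRA-based DPO training with the hyper-parameters from C.1 and D.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Llama-3.2-3B-Instruct"  # assumed checkpoint name
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

peft_config = LoraConfig(          # LoRA hyper-parameters from Appendix C.1
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

args = DPOConfig(                  # DPO hyper-parameters from Appendix D
    learning_rate=1e-5,
    beta=0.1,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=4,
    num_train_epochs=25,
    fp16=True,
    optim="paged_adamw_8bit",      # paged AdamW with 8-bit quantization
    output_dir="halucheck-dpo",
)

# Placeholder preference data; in practice this would be the 17,000 pairs,
# pre-sorted by MiniCheck difficulty as in Algorithm 1. Preserving curriculum
# order additionally requires disabling the trainer's default shuffling
# (e.g., via a custom sampler), which is omitted here.
train_dataset = Dataset.from_dict({
    "prompt": ["Is the answer hallucinated? Context: ... Answer: ..."],
    "chosen": ["No, the answer is grounded in the context."],
    "rejected": ["Yes." ],
})

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```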
| Model | F1 | Precision | Accuracy |
| HaluCheck 1B | 0.664 | 0.511 | 0.527 |
| Llama-3.2 1B-SN | 0.622 | 0.494 | 0.491 |
| HaluCheck 3B | 0.729 | 0.845 | 0.782 |
| Llama-3.2 3B-SN | 0.691 | 0.772 | 0.717 |
Table 7: Hallucination detection on the MedHallu dataset. "SN" models were aligned with standard negative samples in DPO, while HaluCheck models were aligned with curated hallucinated negatives. Higher is better on all metrics.

E LLMs Used in Discriminative Tasks

GPT-4o and GPT-4o mini. GPT-4o (OpenAI et al., 2024) is a series of commercial LLMs developed by OpenAI. Renowned for their state-of-the-art performance, these models have been extensively used in tasks such as medical hallucination detection. Our study accesses them through the official OpenAI API. All other models below are run through the Hugging Face ecosystem.

Llama-3.1 and Llama-3.2. Llama-3.1 and Llama-3.2 (Meta, 2024) are part of Meta's open-source multilingual LLM family. Llama 3.1 (July 2024) includes 8B, 70B, and 405B parameter models optimized for multilingual dialogue. Llama 3.2 (September 2024) offers 1B, 3B, 11B, and 90B models with enhanced accuracy and speed. We use the Llama 3.2 1B and 3B models as the backbone for DPO training, and also include the Llama 3.1 8B model in our evaluation table for performance comparison.

Qwen2.5. Qwen2.5 (Team, 2024) is an advanced LLM designed to handle complex language tasks efficiently. It has been applied in various domains, including medical hallucination detection. We use the 3B, 7B, and 14B variants in our work.

F Hardware Resources and Computational Costs

For DPO training with LoRA, we used the Llama-3.2 1B and Llama-3.2 3B models as base models for HaluCheck, running for 12 hours on an NVIDIA RTX A6000 GPU with 48,685 MiB of memory. Additionally, we employed models such as Qwen2.5-1.5B, 3B, and 14B, as well as GPT models, as evaluators for benchmarking. To enhance the efficiency and speed of our code execution, we used vLLM and implemented batching strategies. These optimizations were critical for managing the computational load and ensuring timely processing of our experiments.
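As a concrete illustration of the vLLM-based batched evaluation mentioned above, the sketch below runs a batch of evaluator prompts through one of the listed models. The exact model ID and prompt template are assumptions, not taken from the paper.

```python
# Minimal sketch of batched evaluator inference with vLLM (Appendix F).
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-3B-Instruct")      # assumed evaluator checkpoint
params = SamplingParams(temperature=0.0, max_tokens=8)  # short yes/no verdicts

eval_pairs = [  # illustrative (question, answer) items to be judged
    ("What does aspirin primarily treat?", "Aspirin primarily treats pain and fever."),
]
prompts = [
    f"Question: {q}\nAnswer: {a}\nIs the answer hallucinated? Reply Yes or No."
    for q, a in eval_pairs
]

# vLLM batches the whole prompt list internally, which is the batching
# strategy the appendix refers to.
outputs = llm.generate(prompts, params)
predictions = [o.outputs[0].text.strip() for o in outputs]
```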
PPT: A Process-based Preference Learning Framework for Self-Improving Table Question Answering Models

Wei Zhou 1,3, Mohsen Mesgar 1, Heike Adel 2, Annemarie Friedrich 3
1 Bosch Center for Artificial Intelligence, Renningen, Germany; 2 Hochschule der Medien, Stuttgart, Germany; 3 University of Augsburg, Germany
{wei.zhou|mohsen.mesgar}@de.bosch.com, annemarie.friedrich@uni-a.de, adel-vu@hdm-stuttgart.de

Abstract

Improving large language models (LLMs) with self-generated data has demonstrated success in tasks such as mathematical reasoning and code generation. Yet, no exploration has been made on table question answering (TQA), where a system answers questions based on tabular data. Addressing this gap is crucial for TQA, as effective self-improvement can boost performance without requiring costly or manually annotated data. In this work, we propose PPT, a Process-based Preference learning framework for TQA. It decomposes reasoning chains into discrete states, assigns scores to each state, and samples contrastive steps for preference learning. Experimental results show that PPT effectively improves TQA models by up to 5% on in-domain datasets and 2.4% on out-of-domain datasets, with only 8,000 preference pairs. Furthermore, the resulting models achieve competitive results compared to more complex and larger state-of-the-art TQA systems, while being five times more efficient during inference.

1 Introduction

Table Question Answering (TQA) aims to answer questions based on tabular data. Current approaches to developing TQA models (Wu and Feng, 2024; Zhang et al., 2025b) fine-tune pre-trained large language models (LLMs) on labeled answers augmented with Chain-of-Thought (CoT) reasoning (Wei et al., 2023).

Recent studies suggest that LLM performance can be improved by fine-tuning with self-generated data on many tasks, such as mathematical reasoning (Singh et al., 2024) and code generation (Zelikman et al., 2024). Yet, no exploration of self-improvement has been conducted in TQA. Addressing this gap is crucial, as an effective improvement method further enhances an existing model without costly or manually annotated data.

To self-improve TQA models, two challenges arise: (1) devising an effective learning strategy and (2) automatically obtaining high-quality data for learning. Drawing inspiration from contrastive process-based learning (Lai et al., 2024; Zhang et al., 2025a), where pairs of intermediate reasoning steps provide contrastive signals for model learning, we propose PPT, a novel Process-based Preference learning framework for TQA.

Figure 1: Given a TQA problem, a TQA model generates reasoning states, composed of cumulative reasoning steps. A value function is used to score each state. We roll out parent states to obtain pair-wise steps and select those with value differences larger than a specified threshold. After data collection, we perform step-wise preference learning to improve the TQA model.

As shown in Figure 1, our framework contains three stages: state generation, state value estimation, and pair-wise step sampling.
During the first stage, we sample multiple reasoning chains from a TQA model and formulate them into reasoning states, composed of cumulative reasoning steps. During state value estimation, a value function assigns continuous scores based on the potential of a state for deriving a final correct answer. During pair-wise step sampling, we
roll out parent states to obtain paired child states and select state pairs with value differences larger than a specified threshold. After obtaining preference data, we perform process-based preference learning using direct preference optimization (DPO) (Rafailov et al., 2024) to self-improve the TQA model.

Experiments show that PPT effectively improves TQA models by up to 5% on in-domain datasets and 2.4% on out-of-domain datasets with only 8k preference pairs. It achieves performance similar to methods requiring additional LLMs as judges, yet is ten times more efficient. The enhanced model outperforms existing fine-tuned TQA models. On three datasets, it even achieves performance similar to much larger and more complex frameworks, while being five times more efficient during inference.

2 Related Work

We discuss current approaches to TQA and methods for self-improvement.

Table Question Answering. Current TQA systems can be categorized into two groups: fine-tuned models (Zhang et al., 2023a; Wu and Feng, 2024) and training-free frameworks (Zhou et al., 2025; Nahid and Rafiei, 2024). The former collects existing TQA datasets to train an end-to-end system that either directly outputs answers (Zhang et al., 2023a) or makes use of an additional reasoning chain (Wu and Feng, 2024). Small-size models (≤ 8B) are used for efficiency reasons. Training-free frameworks in TQA depend on larger models, e.g., GPT-3.5 and Qwen-72B (Qwen et al., 2025), and involve more complex designs, such as tools (Nahid and Rafiei, 2024) and agentic collaboration (Zhou et al., 2025). We focus on the fine-tuning approach given its inference efficiency.

Self-Improving LLMs. Self-improvement focuses on using training data generated by the model itself to obtain further performance gains (Singh et al., 2024; Zelikman et al., 2024; Xiong et al., 2024; He et al., 2024). Common fine-tuning methods include rejection sampling fine-tuning (RFT) (Yuan et al., 2023) and DPO (Tu et al., 2025; Xu et al., 2024; Zhang et al., 2025a). To obtain step-DPO data for preference learning, previous work either uses closed-source models for locating errors (Lai et al., 2024) or existing process supervision datasets (Lightman et al., 2023) to train a step reward model (Zhang et al., 2025a). In contrast, our framework involves neither LLM judges nor manually annotated process supervision data. To the best of our knowledge, we are the first to explore process-based supervision for self-improvement in TQA.

3 Process-based Preference Learning for TQA

Task Setup. Given a table t and a question q, a fine-tuned TQA model M_ft should provide a correct answer a. To enhance system performance, a reasoning chain r is generated along with a. r consists of m steps: {k_1, k_2, ..., k_m}. A problem (t, q) and an instruction u form an initial state s_0. Each new state s_i consists of the previous state s_{i-1} and a step k_i generated at timestep i: s_i = (s_{i-1}, k_i) and s_{i-1} = {s_0, k_1, k_2, ..., k_{i-1}}.

State Generation. Given a TQA problem, we first sample m reasoning chains from M_ft, resulting in {r_i}_{i=1}^m. Prompts can be found in Appendix A.1. Each reasoning chain consists of multiple steps. We accumulate steps to construct states: s_i = {s_0, k_1, ..., k_i}.
If all reasoning traces lead to correct answers, we abandon the problem, as it is likely too easy and would only yield high-value states.

State Value Estimation. A state value function V takes in a state and returns its value. We approximate a state's value by Monte Carlo (MC) sampling: V(s_i) is calculated as the probability of s_i leading to the correct answer. More specifically, M_ft takes in s_i and completes the current reasoning chain until reaching an answer. This is repeated n times, resulting in {(s_{i+1,j}, ..., s_{z_j,j}, a_j)}_{j=1}^n, where a_j and z_j are the predicted answer and the total number of steps of the j-th finalized solution, respectively. V(s_i) is then

\[ V(s_i) = \frac{\sum_{j=1}^{n} \mathbb{I}(a_j = a)}{n} \tag{1} \]

The continuous value allows a more flexible selection of pair-wise steps, described in the following paragraph, as we can control the distance between the selected states.

Pair-wise Step Sampling. After obtaining values for each state, we filter out intermediate states s_i where V(s_i) = 0, together with their child states {s_{i+1}, ..., s_z}. This is exemplified by the red nodes in Figure 2. We assume a state of value 0 to contain erroneous steps; rolling out from it is likely to create low-quality child states. For each remaining state s_i, we use the completion traces sampled when calculating V(s_{i-1}), {(s_{i,j}, ..., s_{z_j,j}, a_j)}_{j=1}^n, as roll-outs, and calculate state values for each sampled {s_{i,j}}_{j=1}^n. This results in a set of pair-wise states (s_i, {s_{i,j}}_{j=1}^n) that can be used to construct the step-wise preference dataset D_sdpo. As V(s_i) is a continuous value, a pair of a good state s_w and a bad state s_l is selected if V(s_w) − V(s_l) ≥ τ, where τ is a hyper-parameter. The preference data for step DPO can be represented as D_sdpo = {(s_{i-1}, k_i^w, k_i^l)_d}_{d=1}^{|D_sdpo|}.

Preference Training. After gathering the process-based preference dataset, we fine-tune M_ft using pairs of good and bad steps given previous steps. The loss function is:

\[ \mathcal{L}_{\mathrm{SDPO}} = -\mathbb{E}_{(s_{i-1}, k_i^w, k_i^l) \sim \mathcal{D}_{\mathrm{sdpo}}}\Big[\log \sigma\Big(\beta \log \tfrac{\pi_\theta(k_i^w \mid s_{i-1})}{\pi_{\mathrm{ref}}(k_i^w \mid s_{i-1})} - \beta \log \tfrac{\pi_\theta(k_i^l \mid s_{i-1})}{\pi_{\mathrm{ref}}(k_i^l \mid s_{i-1})}\Big)\Big] \tag{2} \]
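To make the sampling procedure above concrete, here is a minimal Python sketch of Eq. 1 and the τ-threshold pair selection. `complete_from_state` stands in for sampling a completion from M_ft; all names are illustrative rather than the authors' code.

```python
# Sketch of PPT's state-value estimation (Eq. 1) and pair-wise step sampling.
from typing import Callable, List, Tuple

def estimate_value(
    state: str,
    gold_answer: str,
    complete_from_state: Callable[[str], Tuple[List[str], str]],  # -> (steps, answer)
    n: int = 8,  # roll-out number n from Section 4
) -> float:
    """Monte Carlo estimate: fraction of n roll-outs that reach the gold answer."""
    hits = sum(1 for _ in range(n) if complete_from_state(state)[1] == gold_answer)
    return hits / n

def select_step_pairs(
    parent_state: str,
    candidate_steps: List[str],  # alternative next steps rolled out from the parent
    values: List[float],         # V(parent_state + step) for each candidate
    tau: float = 0.9,            # threshold tau from Section 4
) -> List[Tuple[str, str, str]]:
    """Return (parent, chosen_step, rejected_step) triples with a value gap >= tau.

    States with value 0 would already have been filtered out upstream, as the
    paper discards them together with their children.
    """
    pairs = []
    for k_w, v_w in zip(candidate_steps, values):
        for k_l, v_l in zip(candidate_steps, values):
            if v_w - v_l >= tau:
                pairs.append((parent_state, k_w, k_l))
    return pairs
```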
4 Experiments

TQA Models. Note that existing fine-tuned TQA systems do not feature clear step separation in their reasoning chains. Separating steps by line breaks, as in math (Luo et al., 2024), is problematic since a line break also indicates a table row in TQA. Therefore, we fine-tune TQA models ourselves that output clearly separated steps; details are presented in Appendix A.3. We term the fine-tuned model M_ft.

Baselines. We consider baselines for self-improvement strategies: RFT (Yuan et al., 2023) trains a model with correct reasoning traces using supervised fine-tuning. FDPO (Xu et al., 2024) trains a model with pair-wise correct and incorrect full reasoning chains using DPO. Baselines for value functions include MC with binary labels (MC-B), which returns binary state values based on whether states derive final correct answers, and mixed estimation (MIX) (Zhang et al., 2025c), which scores s_i as 1 if both MC-B and an external LLM judge M_j output 1. If both judges return 0, s_i is scored 0. States receiving different scores from the two judges are not considered for building the preference dataset; this accounts for approximately 36% of the data (Figure 5). Baselines for TQA models include both end-to-end models (Zhang et al., 2023a; Wu and Feng, 2024) and training-free frameworks (Zhou et al., 2025; Nahid and Rafiei, 2024). Details of the base methods and models can be found in Appendix A.4.

Datasets. We train M_ft using the training sets of three common TQA datasets: WTQ (Pasupat and Liang, 2015), TabFact (Chen et al., 2020), and HiTab (Cheng et al., 2022). To obtain preference data, we sample from their validation sets.

Figure 2: Process-based preference data collection. (a) Value estimation; (b) data sampling. We estimate a state value as the probability of a state leading to a correct answer. In the first example, V(s_i) = 2/3. After obtaining state values, we do not consider intermediate states with a value of 0 (s_21), together with their child states (s_31). We sample pair-wise states for each remaining state; e.g., s'_22 is sampled by rolling out s_12 and is regarded as a pair state for s_22.

We use the test sets of these three datasets as in-domain evaluation data, and incorporate three out-of-domain datasets, WikiSQL (Zhong et al., 2017), SCITAB (Lu et al., 2023), and CRT (Zhang et al., 2023b), to test the models' generalizability. Details about the datasets are presented in Appendix A.2.

Experimental Settings. We choose Qwen-2.5-7B (Qwen et al., 2025) and LlaMA-3.1-8B (Grattafiori et al., 2024) as base models. During preference learning, we fix the fine-tuning dataset size to 8k for Qwen-2.5-7B and 6.7k for LlaMA-3.1-8B, as different baselines result in different sample sizes (statistics are shown in Appendix A.5). For fair comparison, we use the smallest collected sample size as the fine-tuning data size. We make sure all sampled data correspond to the same set of TQA problems to rule out instance-level effects on fine-tuning. Hyper-parameters are shown in Appendix A.6. We use Qwen-2.5-72B as M_j; the prompt for M_j is shown in Appendix A.1. The number of reasoning chains m is set to 4 and the roll-out number n to 8. The threshold τ is set to 0.9, and the temperature is set to 0.7 during dataset construction and 0 during inference. We use Exact Match as the evaluation metric. All experiments are conducted on 4 A100 GPUs. Training is performed with LlaMA-Factory (Zheng et al., 2024) and inference with vLLM (Kwon et al., 2023).

5 Results and Discussions

Figure 3 shows the Exact Match of different methods on in-domain datasets, averaged across models. Per-model results can be found in Appendix A.7.

Figure 3: Comparing our PPT with baselines using Exact Match (EM) on WTQ, TabFact, and HiTab. Results are averaged across models. RFT and FDPO stand for rejection sampling fine-tuning and full-chain DPO. We experiment with the value functions MC-B and MIX. Dashed lines show performances of fine-tuned TQA models M_ft before applying self-improvement methods.
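Since all results below are Exact Match scores, a minimal sketch of the metric may be helpful. The normalization shown (lowercasing, whitespace collapsing) is an assumption; the paper does not specify its exact variant.

```python
# Minimal Exact Match sketch; normalization details are an assumption.
def exact_match(prediction: str, gold: str) -> bool:
    norm = lambda s: " ".join(s.strip().lower().split())
    return norm(prediction) == norm(gold)

def em_score(pairs) -> float:
    """Corpus-level Exact Match over (prediction, gold) pairs."""
    return sum(exact_match(p, g) for p, g in pairs) / len(pairs)
```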
PPT effectively improves the performance of TQA models. As Figure 3 shows, applying PPT enhances the performance of M_ft by 3.5% on average on in-domain datasets. The gains are more obvious on WTQ (5%) than on TabFact (2%). This might be attributed to dataset features: TabFact is a binary classification dataset, so models more easily achieve high performance, which is harder to improve further. When evaluating on out-of-domain datasets, we observe an average performance gain of 2.2% after applying our framework (↑2.2% for WikiSQL, ↑2.4% for SCITAB, and ↑2.1% for CRT). These findings show the generalizability of our method on out-of-domain data.

Process-based preference learning is effective and efficient for self-improving TQA models. Comparing PPT with RFT and FDPO, we find that RFT does not improve performance. In contrast, preference learning effectively self-improves models. Moreover, process-based preference learning (PPT) outperforms coarse-grained full-chain learning (FDPO) on two out of three datasets. Comparing different value functions, we find PPT with continuous MC estimation to work better than the other methods. This is an important finding, as it shows that simply using M_ft to estimate state values, with a high threshold τ = 0.9 to control pair quality, can work as well as or better than additionally querying an LLM judge. It significantly boosts the efficiency of the framework: PPT takes 10 times less time than PPT(MIX) when sampling the same amount of data. In a manual evaluation of step correctness (shown in Appendix A.8), we do not find differences between PPT and PPT(MIX). This suggests that PPT's effectiveness and efficiency do not come at the cost of reasoning correctness. We also analyze the impact of the threshold in Appendix A.9, showing the necessity of a relatively high threshold for distinguishing step pairs during preference learning.

| Models | WTQ | TabFact | HiTab | WikiSQL | SCITAB | CRT |
| Protrix | 56.2 | 71.6 | - | 67.4 | 45.0 | 40.2 |
| T-LlaMA | 35.0 | 82.6 | 64.7 | 50.5 | 38.6 | 26.9 |
| M_ft | 54.9 | 82.4 | 61.4 | 68.3 | 54.9 | 48.5 |
| M_si | 63.1 | 84.9 | 67.6 | 72.0 | 56.9 | 51.4 |
| MACT | 70.4 | - | - | - | 55.8 | 57.4 |
| T-SQLify | 64.7 | 80.2 | - | 76.7 | 50.9 | 42.0 |
Table 1: Exact Match of TQA models. M_ft and M_si refer to fine-tuned and self-improved TQA models, respectively. T-LlaMA refers to TableLlaMA and T-SQLify to TabSQLify. Framework results (the last two rows) are obtained using GPT-3.5 as the backbone. SoTA TQA results are taken from previous work (Zhou et al., 2024; Zhang et al., 2023a; Wu and Feng, 2024; Nahid and Rafiei, 2024).

Self-improved TQA models achieve competitive results compared with more complicated SoTA approaches at five times less inference time. Table 1 shows results for current TQA models. For the sake of space, we only show results from Qwen-2.5; full results including LlaMA-3.1 are shown in Appendix A.7. The first four rows of Table 1 compare small-size fine-tuned TQA models, and the last two rows show SoTA training-free frameworks backboned by GPT-3.5. We find that M_si, enhanced using PPT, outperforms existing TQA models. More importantly, it achieves competitive performance compared with larger and more complex frameworks that involve tools and agentic collaboration. On SCITAB, M_si even achieves the best performance. We emphasize the inference efficiency of M_si: it requires eight times less inference time than MACT and five times less than TabSQLify, yet provides competitive results.
6 Conclusions

In this study, we provided the first self-improvement framework for TQA, based on process-based preference learning. Our framework effectively improved the performance of TQA models by up to 5% on in-domain datasets and 2.4% on out-of-domain datasets. The resulting models demonstrated competitive performance compared to state-of-the-art TQA systems that depend on large LLMs and tool usage, while requiring five times less inference time.

Limitation

While our method effectively enhances the performance of small-size TQA models, its impact on large TQA models remains unexplored. To the best of our knowledge, current fine-tuned TQA models only focus on small-size LLMs. Future work can explore an efficient training strategy for large fine-tuned TQA models. Secondly, we limit the task in our study to TQA, while there exist other table-related tasks, such as table summarization. Thirdly, although our framework supports iterative self-learning, the present work only demonstrates the effectiveness of the first iteration, leaving multi-iteration evaluations for future study. As the datasets used in this study are sourced from Wikipedia, scientific papers, and statistical reports, we do not observe any potential risks from the datasets.

References

Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact: A large-scale dataset for table-based fact verification. In International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia.
Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and Dongmei Zhang. 2022. HiTab: A hierarchical table dataset for question answering and natural language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1094–1110, Dublin, Ireland. Association for Computational Linguistics.
DeepSeek-AI, Aixin Liu, Bei Feng, et al. 2025. DeepSeek-V3 technical report.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, et al. 2024. The Llama 3 herd of models.
Yifei He, Haoxiang Wang, Ziyan Jiang, Alexandros Papangelis, and Han Zhao. 2024. Semi-supervised reward modeling via iterative self-training.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.
Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xiangru Peng, and Jiaya Jia. 2024. Step-DPO: Step-wise preference optimization for long-chain reasoning of LLMs.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step.
Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, and Min-Yen Kan. 2023. SCITAB: A challenging benchmark for compositional reasoning and claim verification on scientific tables. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7787–7813, Singapore. Association for Computational Linguistics.
Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2024. Improve mathematical reasoning in language models by automated process supervision.
Md Mahadi Hasan Nahid and Davood Rafiei. 2024. TabSQLify: Enhancing reasoning capabilities of LLMs through table decomposition. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5725–5737, Mexico City, Mexico. Association for Computational Linguistics.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–1480, Beijing, China. Association for Computational Linguistics.
Qwen: An Yang, Baosong Yang, Beichen Zhang, et al. 2025. Qwen2.5 technical report.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model.
Avi Singh, John D. Co-Reyes, Rishabh Agarwal, et al. 2024. Beyond human data: Scaling self-training for problem-solving with language models.
Songjun Tu, Jiahao Lin, Xiangyu Tian, Qichao Zhang, Linjing Li, Yuqian Fu, Nan Xu, Wei He, Xiangyuan Lan, Dongmei Jiang, and Dongbin Zhao. 2025. Enhancing LLM reasoning with iterative DPO: A comprehensive empirical investigation.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
Zirui Wu and Yansong Feng. 2024. ProTrix: Building models for planning and reasoning over tables with sentence context. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4378–4406, Miami, Florida, USA. Association for Computational Linguistics.
Weimin Xiong, Yifan Song, Xiutian Zhao, Wenhao Wu, Xun Wang, Ke Wang, Cheng Li, Wei Peng, and Sujian Li. 2024. Watch every step! LLM agent learning via iterative step-level process refinement. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1556–1572, Miami, Florida, USA. Association for Computational Linguistics.
Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng, Zhengxiao Du, Zhao Wenyi, Jie Tang, and Yuxiao Dong. 2024. ChatGLM-Math: Improving math problem-solving in large language models with a self-critique pipeline. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9733–9760, Miami, Florida, USA. Association for Computational Linguistics.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models.
Eric Zelikman, Eliana Lorch, Lester Mackey, and Adam Tauman Kalai. 2024. Self-taught optimizer (STOP): Recursively self-improving code generation.
Shimao Zhang, Xiao Liu, Xin Zhang, Junxiao Liu, Zheheng Luo, Shujian Huang, and Yeyun Gong. 2025a. Process-based self-rewarding language models.
Tianshu Zhang, Xiang Yue, Yifei Li, and Huan Sun. 2023a. TableLlama: Towards open large generalist models for tables. In North American Chapter of the Association for Computational Linguistics.
Xiaokang Zhang, Sijia Luo, Bohan Zhang, Zeyao Ma, Jing Zhang, Yang Li, Guanlin Li, Zijun Yao, Kangli Xu, Jinchang Zhou, Daniel Zhang-Li, Jifan Yu, Shu Zhao, Juanzi Li, and Jie Tang. 2025b. TableLLM: Enabling tabular data manipulation by LLMs in real office usage scenarios.
Zhehao Zhang, Xitao Li, Yan Gao, and Jian-Guang Lou. 2023b. CRT-QA: A dataset of complex reasoning question answering over tabular data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2131–2153, Singapore. Association for Computational Linguistics.
Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. 2025c. The lessons of developing process reward models in mathematical reasoning.
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.
Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.
Wei Zhou, Mohsen Mesgar, Heike Adel, and Annemarie Friedrich. 2024. FREB-TQA: A fine-grained robustness evaluation benchmark for table question answering. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2479–2497, Mexico City, Mexico. Association for Computational Linguistics.
Wei Zhou, Mohsen Mesgar, Annemarie Friedrich, and Heike Adel. 2025. Efficient multi-agent collaboration with tool use for online planning in complex table question answering. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 945–968, Albuquerque, New Mexico. Association for Computational Linguistics.

A Appendix
A.1 Prompts

Figures 6, 7, and 8 show the prompts for generating a full reasoning trace, completing a reasoning trace, and LLM judge evaluation of a reasoning trace, respectively.

A.2 Datasets

Table 2 shows the number of instances and the domain of the test data we used.

| Dataset | #instances | Domain |
| WTQ | 4344 | Wikipedia |
| TabFact | 12779 | Wikipedia |
| HiTab | 1584 | statistical reports |
| WikiSQL | 15878 | Wikipedia |
| SCITAB | 1224 | scientific papers |
| CRT | 728 | Wikipedia |
Table 2: Test data statistics.

A.3 Fine-tune TQA Models

Following previous work (Wu and Feng, 2024; Zhang et al., 2025b), we prompt DeepSeek-V3 (DeepSeek-AI et al., 2025) for reasoning chains that lead to correct answers to fine-tune our TQA models. Specifically, we sample 2.4k, 1.5k, and 2.3k problems from the training sets of WTQ (Pasupat and Liang, 2015), TabFact (Chen et al., 2020), and HiTab (Cheng et al., 2022), respectively. We then prompt DeepSeek-V3 to generate reasoning chains and answers, and keep only the reasoning chains leading to correct answers for fine-tuning. This results in 1612 instances from WTQ, 1425 from TabFact, and 1277 from HiTab, 4314 instances in total.

A.4 Baselines

Baselines for Self-Improvement Methods. To collect data for rejection sampling fine-tuning (RFT), we select the reasoning chains leading to the correct answer: D_rft = {(r^w)_i}_{i=1}^{|D_rft|}. We fine-tune TQA models with the following loss function for self-improvement:

\[ \mathcal{L}_{\mathrm{SFT}} = -\mathbb{E}_{(s_0, r^w) \sim \mathcal{D}_{\mathrm{rft}}}[\log \pi_\theta(r^w \mid s_0)] \tag{3} \]

To obtain full reasoning chain DPO data, we select reasoning chains leading to correct answers as preferred traces and those leading to incorrect answers as non-preferred: D_fdpo = {(s_0, r^w, r^l)_i}_{i=1}^{|D_fdpo|}, where r^w and r^l refer to the preferred and non-preferred reasoning traces, respectively. The loss function is:

\[ \mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}_{(u, r^w, r^l) \sim \mathcal{D}_{\mathrm{fdpo}}}\Big[\log \sigma\Big(\beta \log \tfrac{\pi_\theta(r^w \mid u)}{\pi_{\mathrm{ref}}(r^w \mid u)} - \beta \log \tfrac{\pi_\theta(r^l \mid u)}{\pi_{\mathrm{ref}}(r^l \mid u)}\Big)\Big] \tag{4} \]

| Methods | HiTab | WTQ | TabFact | Total |
| Original | 1.6k | 2.8k | 5k | 9.4k |
| RFT | 2.2k/1.6k | 2.7k/2.4k | 3k/2.7k | 8k/6.7k |
| FDPO | 3.6k/2k | 4.7k/4k | 4.6k/3.6k | 12.9k/9.6k |
| SDPO + MC-B | 25k/20.8k | 46k/47.1k | 37k/38.4k | 109k/106k |
| SDPO + MIX | 5.4k/3.5k | 9.6k/9.7k | 5k/4.9k | 20k/18.1k |
| SDPO + MC-0.9 | 6.8k/1.6k | 16k/4.4k | 11k/2.4k | 33.8k/8.4k |
Table 3: Sampled dataset sizes for different methods. Results for Qwen-2.5-7B and LlaMA-3.1-8B are separated by "/". MC-B refers to Monte Carlo sampling with binary values as the value function. MIX stands for using both MC-B and an LLM judge (Qwen-2.5-72B) as the value function. MC-0.9 stands for Monte Carlo sampling with continuous values and a selection threshold of 0.9. RFT refers to rejection sampling fine-tuning. FDPO and SDPO stand for full-trace DPO and step-wise DPO.

Baselines for TQA Systems. TableLlaMA (Zhang et al., 2023a) is an end-to-end TQA system fine-tuned on 2.6M existing table-related data points. It uses LlaMA-2-7B as a base model and directly outputs answers without a reasoning chain. Protrix (Wu and Feng, 2024) is fine-tuned with around 4k table-related examples with reasoning chains generated by GPT-4; it also uses LlaMA-2-7B as the base model. MACT (Zhou et al., 2025) is a training-free TQA framework leveraging tools and agent collaboration: a planning agent outputs step-wise plans to decompose questions, and a coding agent and tools solve each step. TabSQLify (Nahid and Rafiei, 2024) decomposes tables into relevant sub-tables via SQL query generation and execution; sub-tables and questions are then passed to LLMs to obtain final answers.
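The two baseline constructions above translate into a short routine. Below is a minimal sketch, assuming self-sampled chains tagged with predicted and gold answers; the exhaustive good-bad pairing for FDPO is an assumption, and all names are illustrative.

```python
# Sketch of the RFT / full-chain DPO data construction from A.4 (Eqs. 3-4).
def build_rft_and_fdpo(chains):
    """chains: iterable of (problem s0, reasoning chain r, predicted, gold)."""
    d_rft = []   # correct chains only, for supervised fine-tuning (Eq. 3)
    d_fdpo = []  # (prompt, preferred chain, non-preferred chain) triples (Eq. 4)
    by_problem = {}
    for s0, r, pred, gold in chains:
        group = by_problem.setdefault(s0, {"good": [], "bad": []})
        group["good" if pred == gold else "bad"].append(r)
    for s0, group in by_problem.items():
        d_rft.extend((s0, r_w) for r_w in group["good"])
        # pair every correct chain with every incorrect one for full-chain DPO
        d_fdpo.extend(
            (s0, r_w, r_l) for r_w in group["good"] for r_l in group["bad"]
        )
    return d_rft, d_fdpo
```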
A.5 Sampled Dataset Statistics

Table 3 shows the sampling size for each method. We find that MC-B results in the most data and RFT in the least.

A.6 Hyper-parameters

Table 4 shows the hyper-parameters used for model fine-tuning.

| Model | Method | Fine-tuning | Learning rate | Epochs | Batch size | LoRA rank | DPO β |
| Qwen-2.5-7B | supervised fine-tuning | full-parameter | 5e-6 | 2 | 128 | - | - |
| Qwen-2.5-7B | rejection sampling fine-tuning | LoRA | 1e-5 | 1 | 128 | 64 | - |
| Qwen-2.5-7B | full chain DPO | LoRA | 1e-5 | 3 | 128 | 64 | 0.1 |
| Qwen-2.5-7B | step-wise DPO | LoRA | 1e-5 | 3 | 128 | 64 | 0.1 |
| LlaMA-3.1-8B | supervised fine-tuning | full-parameter | 5e-6 | 2 | 128 | - | - |
| LlaMA-3.1-8B | rejection sampling fine-tuning | LoRA | 1e-5 | 1 | 128 | 32 | - |
| LlaMA-3.1-8B | full chain DPO | LoRA | 1e-5 | 3 | 128 | 32 | 0.1 |
| LlaMA-3.1-8B | step-wise DPO | LoRA | 1e-5 | 3 | 128 | 32 | 0.1 |
Table 4: Hyper-parameters used for model fine-tuning.

A.7 Additional Results

Table 5 shows the performance of different models on the six investigated datasets.

A.8 Reasoning Chains Analysis

We sample 100 reasoning chains leading to correct answers, generated by models using PPT and by models using MIX as the value function, and manually examine the correctness of the reasoning chains. Among the 100 instances, we exclude 8 where either the answers are incorrect or the questions are ambiguous. We find similar accuracies for the reasoning chains generated by the two methods, 95.7 and 94.6, respectively. This suggests the two methods do not differ much in terms of producing correct reasoning chains. Nevertheless, wrong reasoning chains leading to correct answers still exist, possibly due to overly complex table inputs. An error case is shown in Figure 9.

Figure 4: Threshold comparison with soft estimation (τ = 0.5, 0.7, 0.9) on the six TQA datasets (EM).

A.9 Threshold Analysis

The threshold τ decides the state value difference required when sampling a pair of (preferred and non-preferred) states. We set τ to 0.9 in our study. We experiment with different values of τ to investigate its impact on the fine-tuned process-supervised models. The experimental settings are the same as described in Section 4 except that we change the value of τ. Figure 4 shows the performance of models fine-tuned with data sampled using different τ. We observe a tendency for higher thresholds to lead to better performance. However, we do not observe big differences in model performance when setting τ to 0.7 or 0.9.

Figure 5: Confusion matrix between LLM judge evaluation and Monte Carlo estimation with binary labels.

A.10 Dataset Licenses

WTQ (Pasupat and Liang, 2015), HiTab (Cheng et al., 2022), and WikiSQL (Zhong et al., 2017) are released under the CC-BY-SA-4.0 (https://creativecommons.org/licenses/by-sa/4.0/), BSD-3-Clause (https://opensource.org/license/bsd-3-clause), and C-UDA (https://github.com/microsoft/HiTab?tab=License-1-ov-file) licenses, respectively. TabFact (Chen et al., 2020), CRT (Zhang et al., 2023b), and SCITAB (Lu et al., 2023) are released under the MIT license (https://opensource.org/license/mit).
68.15 52.29 46.15 66.33 55.53 +FDPO 61.10 84.65 63.83 72.04 56.94 52.06 69.86 60.34 PPT(MC) 60.80 82.98 65.72 70.08 52.04 50.24 69.83 57.45 PPT(MIX) 63.86 85.32 64.20 71.44 55.56 52.47 71.63 59.82 PPT(τ= 0.9) 63.10 84.88 67.55 71.97 56.94 51.37 71.84 60.09 LlaMA-3.1-8B 30.64 63.91 26.20 31.87 43.38 32.55 40.25 35.93 + TQA training ( Mft) 64.80 83.36 69.63 69.48 51.63 49.04 72.60 52.76 +RFT 62.06 84.15 67.30 70.73 50.49 48.63 70.84 56.62 +FDPO 65.22 85.91 68.18 71.34 53.35 50.41 73.10 58.37 PPT(MC) 65.54 84.01 69.95 70.67 50.25 49.31 73.17 56.74 PPT(MIX) 62.39 84.98 66.79 68.31 52.94 51.24 72.05 57.50 PPT(τ= 0.9) 65.84 85.50 70.33 70.08 54.44 50.27 73.98 58.26 Table 5: Exact Match accuracies of models fine-tuned with different strategies and value functions, generated with greedy decoding. We bold the best results and underline the second best results for each model type. You are an expert in table question answering. Based on the given question and table, provide a step by step solution to the question. Start each step with 'Step x.' where x is the current step number. Do not carry out verification in each step. Each step should include two parts: a planning part that indicates what to doand a reasoning part that returns the results of the planning part. Separate these two parts via the [SEP] token. Return the result in the last line following 'Therefore, the final answer is: '. Table: {table} Question: {question } Figure 6: Prompt to generate full reasoning trace given a TQA problem. Given the following table, question and past steps to solve the question , continue to generate the steps following past steps to obtain an answer. Each step should include two parts: a planning part that indicates what to do and a reasoning part that returns the results of the planning part. Separate these two parts via the [SEP] token. Start each step with 'Step x.' where x is the current step number. Do not carry out verification in each step. Return the result in the last line following 'Therefore, the final answer is: '. Table: {table} Question: {question} Past steps: {steps} Figure 7: Prompt to complete a reasoning trace given a TQA problem and past steps. I will provide a table question answering(TQA) problem along with a step -by-step reasoning to solve the problem. They will be formatted as follows : [TQA Problem] ...(TQA problem)... [Solution] <step_1 > ...(step 1 of a plan)... </step_1 > ... <step_n > ...(step n of a plan)... </step_n > Your task is to review each step of the plan in sequence, analyzing, verifying, and critiquing a step in details to decide if a step is helpful or not for solving the problem. A helpful step has the following features: - It provides unique information about how to solve a question and does not repeat information appeared in the previous steps. - It is relevant to solving the question. - It is correct in terms of the reasoning. Please provide your analyses, decisions (1 for helpful and 0 not helpful) for each step and confidence
Table:
| # | Date | Location | Winner | Score JSU | Score TU | Series |
| 1 | November 27, 1924 | Jacksonville, AL | Jacksonville State | 14 | 9 | JSU 1–0 |
| … |
| 8 | November 11, 1938 | Jacksonville, AL | Tied | 6 | 6 | TSU 4–3–1 |
| … |
| 21 | October 15, 1955 | Troy, AL | Jacksonville State | 12 | 0 | Tied 10–10–1 |
| … |
| 28 | October 6, 1962 | Jacksonville, AL | Jacksonville State | 21 | 14 | JSU 17–10–1 |
| … |
| 38 | November 11, 1972 | Jacksonville, AL | Tied | 14 | 14 | JSU 22–14–2 |
| … |
| Totals | | | | 1086 | 1110 | JSU 32–29–2 |

Question: what is the least amount of points scored in a tie game?

Step 1. Identify all the tie games in the table by checking the "Score JSU" and "Score TU" columns for values that are the same.
| # | Date | Location | Winner | Score JSU | Score TU | Series |
| 8 | November 11, 1938 | Jacksonville, AL | Tied | 6 | 6 | TSU 4–3–1 |
| 21 | October 15, 1955 | Troy, AL | Jacksonville State | 12 | 0 | Tied 10–10–1 |
| 28 | October 6, 1962 | Jacksonville, AL | Jacksonville State | 21 | 14 | JSU 17–10–1 |
| 38 | November 11, 1972 | Jacksonville, AL | Tied | 14 | 14 | JSU 22–14–2 |

Step 2. Extract the scores of the tie games identified in Step 1.
| Score JSU | Score TU |
| 6 | 6 |
| 12 | 0 |
| 21 | 14 |
| 14 | 14 |

Step 3. Determine the least amount of points scored in a tie game by finding the minimum value among the extracted scores. The scores are 6, 12, 21, and 14. The least amount of points scored in a tie game is 6.

Therefore, the final answer is: 6.

Figure 9: Wrong reasoning chain generated by a self-improved model (Qwen-2.5-7B) using PPT. The first wrong step is Step 1, which includes games 21 and 28 among the tie games even though their JSU and TU scores differ.
arXiv:2505.17571v1 [cs.CL] 23 May 2025

Reasoning Meets Personalization: Unleashing the Potential of Large Reasoning Model for Personalized Generation

Sichun Luo 1,2, Guanzhi Deng 2, Jian Xu 3, Xiaojie Zhang 4, Hanxu Hou 1*, Linqi Song 2*
1 Dongguan University of Technology, 2 City University of Hong Kong, 3 Tsinghua University, 4 Guangzhou University
sichunluo2@gmail.com

Abstract

Personalization is a critical task in modern intelligent systems, with applications spanning diverse domains, including interactions with large language models (LLMs). Recent advances in reasoning capabilities have significantly enhanced LLMs, enabling unprecedented performance in tasks such as mathematics and coding. However, their potential for personalization tasks remains underexplored. In this paper, we present the first systematic evaluation of large reasoning models (LRMs) for personalization tasks. Surprisingly, despite generating more tokens, LRMs do not consistently outperform general-purpose LLMs, especially in retrieval-intensive scenarios where their advantages diminish. Our analysis identifies three key limitations: divergent thinking, misalignment of response formats, and ineffective use of retrieved information. To address these challenges, we propose Reinforced Reasoning for Personalization (R2P), a novel framework that incorporates a hierarchical reasoning thought template to guide LRMs in generating structured outputs. In addition, we introduce an intervention method in the reasoning process to enforce adherence to the designed reasoning patterns and enhance alignment. We also propose a self-referencing mechanism to ensure consistency. Extensive experiments demonstrate that our approach significantly outperforms existing techniques.

1 Introduction

Personalization is a cornerstone of modern artificial intelligence, driving transformative advancements in applications such as recommendation systems [39, 21, 20], adaptive user interfaces [13, 26], and conversational agents [15, 3, 27]. At its core, personalization demands both deep contextual understanding and nuanced reasoning. Large language models (LLMs) [4, 2, 1, 14, 28, 29] have emerged as powerful tools for this purpose, leveraging their capacity to generate human-like text and process complex linguistic patterns. Recent strides in reasoning capabilities have further elevated LLMs, enabling them to excel in structured tasks such as mathematical problem solving [30, 38], code generation [8, 19], and logical inference [32, 36]. These advancements have fueled expectations that large reasoning models (henceforth referred to as LRMs) could revolutionize personalization by delivering outputs finely attuned to individual user needs. Yet, despite their promise, the efficacy of LRMs in personalization tasks remains largely unexplored, leaving critical questions about their strengths and limitations unanswered. This raises a natural question: can LRMs surpass LLMs in personalization tasks?

*Corresponding Author. Preprint.

Figure 1: Comparison between non-reasoning model generation, large reasoning model generation, and the proposed R2P method.

In this work, we present the first systematic evaluation of LRMs for personalization tasks, focusing on their performance across a diverse set of tasks defined by the Language Model Personalization (LaMP) benchmark [25], a widely used dataset for personalization evaluation.
Our findings reveal a surprising and counterintuitive trend: despite their superior reasoning abilities, LRMs do not consistently outperform general-purpose LLMs in personalization tasks. This performance gap becomes particularly pronounced when retrieval-augmented generation (RAG) [17] is employed, suggesting that LRMs struggle to effectively leverage external knowledge for user-specific adaptation. To understand this phenomenon, we conduct a detailed analysis that reveals three key limitations of LRMs in personalization tasks. First, LRMs excel in convergent reasoning for well-defined problems like equation solving but lack the divergent thinking needed to capture nuanced user preferences, often yielding sub-optimal outputs.
Second, they struggle to produce consistent, structured responses aligned with required formats. Finally, LRMs inefficiently leverage retrieved context in RAG settings, prioritizing internal logic over user-specific information, which hampers effective personalization.

To address these limitations, we propose Reinforced Reasoning for Personalization (R2P), a novel training-free framework that enhances LRMs for personalization tasks. R2P employs a hierarchical reasoning thought template that systematically decomposes tasks and integrates user-specific context to guide structured reasoning generation, ensuring effective use of retrieved data. It incorporates dynamic intervention to monitor and correct reasoning deviations, enforcing alignment with task specifications and user preferences. Additionally, consistency checks synthesize multiple outputs to ensure coherent and personalized results, collectively enhancing divergent thinking, format consistency, and retrieval integration for consistently user-tailored results. An example comparing non-reasoning model generation, large reasoning model generation, and the proposed R2P method is shown in Figure 1.

In a nutshell, our contributions are threefold:

• We perform a comprehensive evaluation of general-purpose LLMs and LRMs on the LaMP dataset, uncovering interesting insights into their strengths and limitations for personalization tasks, followed by detailed analysis.
• We introduce R2P, a novel framework that incorporates a hierarchical reasoning thought template to guide LRMs in producing structured outputs. Additionally, we propose a reasoning process intervention method to enforce adherence to designed reasoning patterns and a self-referencing mechanism to ensure output consistency.
• Through extensive experiments, we demonstrate that R2P achieves superior performance on personalization tasks, outperforming baseline methods.

2 Related Work

LLM Personalization. Personalizing large language models (LLMs) has gained significant attention as a means to tailor responses to individual user preferences and behaviors [5, 37, 40]. Existing research primarily focuses on designing prompts that incorporate user-generated content, historical interactions, or behavioral patterns to enhance personalization. Early work in LLM personalization leverages in-context learning and few-shot prompting to encode user behavior or preferences directly into the input [22]. To address the challenge of growing user data, retrieval-augmented personalization has emerged as a scalable solution [11, 25]. This approach retrieves the most relevant records from a user's behavioral history to augment prompts efficiently. Another line of research summarizes user preferences and behaviors into concise natural language profiles for prompt augmentation [23].

Table 1: Performance comparison between Llama3 and DeepSeek-Llama3 on LaMP tasks. The best results are highlighted in bold.
| Task | Metric | Llama3 (k=0) | DeepSeek-Llama3 (k=0) | Llama3 (Random, k=1) | DeepSeek-Llama3 (Random, k=1) | Llama3 (RAG, k=1) | DeepSeek-Llama3 (RAG, k=1) | Llama3 (RAG, k=4) | DeepSeek-Llama3 (RAG, k=4) |
| LaMP-1 | Acc ↑ | 0.490 | 0.465 | 0.640 | 0.625 | 0.675 | 0.650 | 0.760 | 0.712 |
| LaMP-1 | F1 ↑ | 0.480 | 0.461 | 0.639 | 0.625 | 0.675 | 0.649 | 0.760 | 0.711 |
| LaMP-2N | Acc ↑ | 0.615 | 0.605 | 0.670 | 0.687 | 0.640 | 0.698 | 0.795 | 0.697 |
| LaMP-2N | F1 ↑ | 0.290 | 0.336 | 0.341 | 0.393 | 0.315 | 0.421 | 0.497 | 0.392 |
| LaMP-2M | Acc ↑ | 0.415 | 0.350 | 0.310 | 0.295 | 0.320 | 0.332 | 0.455 | 0.403 |
| LaMP-2M | F1 ↑ | 0.265 | 0.228 | 0.224 | 0.218 | 0.199 | 0.257 | 0.343 | 0.287 |
| LaMP-3 | MAE ↓ | 0.445 | 0.467 | 0.495 | 0.448 | 0.360 | 0.383 | 0.280 | 0.325 |
| LaMP-3 | RMSE ↓ | 0.745 | 0.751 | 0.919 | 0.767 | 0.748 | 0.678 | 0.656 | 0.664 |
| LaMP-4 | R-1 ↑ | 0.093 | 0.149 | 0.123 | 0.142 | 0.125 | 0.152 | 0.179 | 0.166 |
| LaMP-4 | R-L ↑ | 0.083 | 0.131 | 0.108 | 0.125 | 0.110 | 0.133 | 0.157 | 0.148 |
| LaMP-5 | R-1 ↑ | 0.114 | 0.381 | 0.130 | 0.389 | 0.148 | 0.406 | 0.304 | 0.413 |
| LaMP-5 | R-L ↑ | 0.106 | 0.308 | 0.114 | 0.314 | 0.132 | 0.341 | 0.272 | 0.344 |
| LaMP-7 | R-1 ↑ | 0.351 | 0.426 | 0.379 | 0.318 | 0.370 | 0.338 | 0.415 | 0.415 |
| LaMP-7 | R-L ↑ | 0.291 | 0.370 | 0.325 | 0.272 | 0.320 | 0.293 | 0.362 | 0.362 |
Table 2: Average token length comparison of Llama3 and DeepSeek-Llama3 on LaMP tasks.

| Task | Llama3 (k=0) | DeepSeek-Llama3 (k=0) | Llama3 (Random, k=1) | DeepSeek-Llama3 (Random, k=1) | Llama3 (RAG, k=1) | DeepSeek-Llama3 (RAG, k=1) | Llama3 (RAG, k=4) | DeepSeek-Llama3 (RAG, k=4) |
| LaMP-1 | 3.00 | 523.40 | 3.00 | 721.57 | 3.00 | 671.59 | 3.00 | 1007.78 |
| LaMP-2N | 1.87 | 317.02 | 1.85 | 327.33 | 1.85 | 318.17 | 1.86 | 315.60 |
| LaMP-2M | 7.76 | 473.67 | 2.53 | 468.70 | 2.37 | 476.00 | 2.00 | 437.53 |
| LaMP-3 | 1.00 | 347.90 | 1.12 | 365.36 | 1.00 | 355.40 | 1.00 | 350.80 |
| LaMP-4 | 99.81 | 436.96 | 65.62 | 446.96 | 64.80 | 450.77 | 22.68 | 460.37 |
| LaMP-5 | 168.48 | 426.67 | 151.59 | 469.33 | 145.28 | 453.20 | 94.22 | 548.12 |
| LaMP-7 | 21.22 | 307.84 | 22.12 | 395.40 | 22.66 | 395.74 | 25.90 | 386.19 |
| Avg. | 43.31 | 404.78 | 35.40 | 456.38 | 34.42 | 445.84 | 21.52 | 500.91 |

Large Reasoning Model. The quest to enhance reasoning in LLMs has led to the emergence of large reasoning models (LRMs), designed to tackle complex tasks requiring multi-step inference, logical reasoning, and contextual understanding [10, 34]. A foundational approach to improving reasoning is the development of prompting techniques that guide models toward structured thinking. Chain-of-Thought (CoT) prompting [32] encourages step-by-step reasoning, while extensions like Tree-of-Thoughts (ToT) [36] explore multiple reasoning paths to enhance solution robustness. These methods have proven effective across tasks such as arithmetic reasoning and commonsense inference. However, their reliance on carefully crafted prompts can limit generalizability, particularly for novel or underspecified tasks. To address this, researchers have explored fine-tuning LLMs on specialized reasoning datasets for advanced mathematical problems [30]. Recent models exemplify the LRM paradigm. OpenAI's o1 [12] demonstrates strong performance in mathematical, logical, and programming tasks by integrating specialized reasoning modules and extended inference-time computation. Similarly, DeepSeek-R1 [9] leverages optimized architectures and training objectives to prioritize reasoning over rote memorization. These models highlight a trend toward balancing scale with reasoning-specific design, though challenges remain in ensuring robustness across diverse domains and mitigating computational overhead.

Limitation. While LRMs demonstrate strong performance in tasks such as mathematical and logical reasoning, their effectiveness in personalization tasks remains underexplored, leaving a significant research gap. To address this, we provide the first systematic evaluation of LRMs compared to general-purpose LLMs on the LaMP benchmark, shedding light on their capabilities and limitations in adapting to user-specific contexts.

Table 3: Performance comparison between Qwen and DeepSeek-Qwen on LaMP tasks with RAG (k=1). The best results are highlighted in bold.
| Task | Metric | Qwen1.5 | Qwen7 | Qwen14 | Qwen32 | DeepSeek-Qwen1.5 | DeepSeek-Qwen7 | DeepSeek-Qwen14 | DeepSeek-Qwen32 |
| LaMP-1 | Acc ↑ | 0.615 | 0.710 | 0.670 | 0.680 | 0.593 | 0.642 | 0.665 | 0.670 |
| LaMP-1 | F1 ↑ | 0.576 | 0.706 | 0.669 | 0.674 | 0.593 | 0.640 | 0.664 | 0.668 |
| LaMP-2N | Acc ↑ | 0.645 | 0.690 | 0.675 | 0.720 | 0.578 | 0.650 | 0.700 | 0.720 |
| LaMP-2N | F1 ↑ | 0.299 | 0.384 | 0.374 | 0.422 | 0.291 | 0.348 | 0.426 | 0.424 |
| LaMP-2M | Acc ↑ | 0.245 | 0.315 | 0.360 | 0.410 | 0.213 | 0.280 | 0.443 | 0.438 |
| LaMP-2M | F1 ↑ | 0.188 | 0.260 | 0.274 | 0.296 | 0.143 | 0.227 | 0.349 | 0.355 |
| LaMP-3 | MAE ↓ | 0.525 | 0.560 | 0.380 | 0.355 | 0.690 | 0.455 | 0.280 | 0.328 |
| LaMP-3 | RMSE ↓ | 0.875 | 0.870 | 0.510 | 0.485 | 1.190 | 0.672 | 0.330 | 0.378 |
| LaMP-4 | R-1 ↑ | 0.153 | 0.137 | 0.151 | 0.136 | 0.129 | 0.147 | 0.162 | 0.157 |
| LaMP-4 | R-L ↑ | 0.134 | 0.126 | 0.132 | 0.117 | 0.115 | 0.131 | 0.142 | 0.138 |
| LaMP-5 | R-1 ↑ | 0.455 | 0.444 | 0.395 | 0.304 | 0.363 | 0.414 | 0.444 | 0.435 |
| LaMP-5 | R-L ↑ | 0.385 | 0.374 | 0.329 | 0.264 | 0.302 | 0.344 | 0.376 | 0.364 |
| LaMP-7 | R-1 ↑ | 0.381 | 0.461 | 0.460 | 0.415 | 0.317 | 0.350 | 0.406 | 0.447 |
| LaMP-7 | R-L ↑ | 0.332 | 0.408 | 0.398 | 0.358 | 0.270 | 0.305 | 0.356 | 0.392 |
Table 4: Performance comparison between Qwen and DeepSeek-Qwen on LaMP tasks with RAG (k=4). The best results are highlighted in bold.

| Task | Metric | Qwen1.5 | Qwen7 | Qwen14 | Qwen32 | DeepSeek-Qwen1.5 | DeepSeek-Qwen7 | DeepSeek-Qwen14 | DeepSeek-Qwen32 |
| LaMP-1 | Acc ↑ | 0.595 | 0.765 | 0.740 | 0.750 | 0.605 | 0.723 | 0.742 | 0.733 |
| LaMP-1 | F1 ↑ | 0.506 | 0.764 | 0.738 | 0.743 | 0.602 | 0.720 | 0.741 | 0.729 |
| LaMP-2N | Acc ↑ | 0.690 | 0.750 | 0.760 | 0.815 | 0.567 | 0.717 | 0.763 | 0.787 |
| LaMP-2N | F1 ↑ | 0.373 | 0.444 | 0.474 | 0.512 | 0.263 | 0.421 | 0.480 | 0.477 |
| LaMP-2M | Acc ↑ | 0.320 | 0.410 | 0.470 | 0.460 | 0.202 | 0.300 | 0.470 | 0.487 |
| LaMP-2M | F1 ↑ | 0.231 | 0.256 | 0.363 | 0.367 | 0.141 | 0.252 | 0.359 | 0.385 |
| LaMP-3 | MAE ↓ | 0.480 | 0.360 | 0.240 | 0.245 | 0.715 | 0.332 | 0.212 | 0.200 |
| LaMP-3 | RMSE ↓ | 0.760 | 0.450 | 0.280 | 0.285 | 1.398 | 0.392 | 0.232 | 0.217 |
| LaMP-4 | R-1 ↑ | 0.174 | 0.171 | 0.159 | 0.157 | 0.130 | 0.151 | 0.174 | 0.170 |
| LaMP-4 | R-L ↑ | 0.145 | 0.153 | 0.144 | 0.140 | 0.117 | 0.132 | 0.155 | 0.150 |
| LaMP-5 | R-1 ↑ | 0.469 | 0.464 | 0.374 | 0.368 | 0.356 | 0.402 | 0.449 | 0.433 |
| LaMP-5 | R-L ↑ | 0.404 | 0.398 | 0.309 | 0.313 | 0.291 | 0.329 | 0.382 | 0.364 |
| LaMP-7 | R-1 ↑ | 0.407 | 0.450 | 0.455 | 0.419 | 0.307 | 0.393 | 0.448 | 0.464 |
| LaMP-7 | R-L ↑ | 0.357 | 0.392 | 0.398 | 0.361 | 0.260 | 0.339 | 0.394 | 0.410 |

3 Preliminary Evaluation

3.1 Evaluation Setup

In this section, we evaluate the performance of a general-purpose LLM and an LRM on the widely used LaMP dataset [25], including (1) Personalized Citation Identification (LaMP-1), (2) Personalized News Categorization (LaMP-2N), (3) Personalized Movie Tagging (LaMP-2M), (4) Personalized Product Rating (LaMP-3), (5) Personalized News Headline Generation (LaMP-4), (6) Personalized Scholarly Title Generation (LaMP-5), and (7) Personalized Tweet Paraphrasing (LaMP-7). We exclude the LaMP-6 task since its dataset is not publicly available. Similar to [40], we follow the user-based separation setting provided by the LaMP benchmark and randomly select 200 users for evaluation. Additional experimental details are provided in Appendix A.

We select Llama-3.1-8B-Instruct [7] as the general-purpose LLM and DeepSeek-R1-Distill-Llama-8B [9] as the LRM. For Llama-3.1-8B-Instruct, we set the temperature to 0 to ensure greedy decoding and consistent output. For DeepSeek-R1-Distill-Llama-8B, we set the temperature to 0.6, aligning with the official guidelines [6]. We repeat each experiment three times and report the average. We evaluate the models under four settings: non-personalized (k = 0), random context (k = 1), and retrieval-augmented generation (RAG) [25] with k = 1 and k = 4 examples, where k denotes the number of user-specific examples provided. We adopt BM25 [24] for all retrieval operations to ensure efficient and consistent comparisons.
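As a concrete reference, the snippet below sketches the k-example retrieval step using the rank_bm25 package; the profile entries, the query, and the whitespace tokenization are illustrative stand-ins for the actual LaMP user histories.

from rank_bm25 import BM25Okapi

profile = [
    "Tagged a courtroom drama as legal thriller",
    "Rated a kitchen blender 2 stars for being loud",
    "Reviewed a space opera and praised its world-building",
]
# BM25 over the tokenized user history.
bm25 = BM25Okapi([entry.lower().split() for entry in profile])

query = "assign a tag to this new courtroom thriller movie"
top_k = bm25.get_top_n(query.lower().split(), profile, n=1)  # k = 1
print(top_k)  # the retrieved example is then prepended to the prompt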
Table 5: Average token length comparison of Qwen and DeepSeek-Qwen on LaMP tasks with RAG (k=1).

| Task | Qwen1.5 | Qwen7 | Qwen14 | Qwen32 | DeepSeek-Qwen1.5 | DeepSeek-Qwen7 | DeepSeek-Qwen14 | DeepSeek-Qwen32 |
| LaMP-1 | 3.0 | 3.0 | 3.0 | 4.2 | 537.8 | 730.9 | 449.5 | 351.6 |
| LaMP-2N | 2.1 | 1.8 | 3.3 | 1.8 | 300.3 | 283.4 | 244.6 | 258.1 |
| LaMP-2M | 3.4 | 2.4 | 5.1 | 2.7 | 363.9 | 460.7 | 339.7 | 384.7 |
| LaMP-3 | 1.0 | 1.0 | 1.0 | 1.0 | 398.8 | 348.3 | 295.5 | 314.6 |
| LaMP-4 | 16.9 | 38.3 | 26.8 | 42.5 | 448.0 | 446.7 | 412.8 | 386.9 |
| LaMP-5 | 19.9 | 22.8 | 35.5 | 49.2 | 477.1 | 381.4 | 410.8 | 376.1 |
| LaMP-7 | 24.8 | 24.6 | 25.5 | 27.0 | 394.9 | 402.8 | 432.0 | 377.6 |
| Avg. | 10.2 | 13.4 | 14.3 | 18.3 | 417.3 | 436.3 | 369.3 | 349.9 |

Table 6: Average token length comparison of Qwen and DeepSeek-Qwen on LaMP tasks with RAG (k=4).

| Task | Qwen1.5 | Qwen7 | Qwen14 | Qwen32 | DeepSeek-Qwen1.5 | DeepSeek-Qwen7 | DeepSeek-Qwen14 | DeepSeek-Qwen32 |
| LaMP-1 | 3.0 | 3.0 | 3.0 | 3.0 | 650.4 | 597.1 | 359.8 | 374.6 |
| LaMP-2N | 1.9 | 1.9 | 1.8 | 1.8 | 313.6 | 326.6 | 254.0 | 263.1 |
| LaMP-2M | 2.7 | 2.1 | 2.8 | 2.1 | 446.6 | 498.1 | 353.8 | 374.6 |
| LaMP-3 | 1.0 | 1.0 | 1.0 | 1.0 | 437.1 | 397.6 | 343.7 | 370.1 |
| LaMP-4 | 17.5 | 21.9 | 30.4 | 31.2 | 505.5 | 470.4 | 435.2 | 393.1 |
| LaMP-5 | 18.9 | 21.0 | 40.9 | 40.7 | 573.0 | 432.1 | 512.6 | 444.0 |
| LaMP-7 | 25.4 | 24.7 | 25.6 | 27.5 | 554.7 | 463.5 | 408.8 | 363.2 |
| Avg. | 10.1 | 10.8 | 15.1 | 15.3 | 497.3 | 455.1 | 381.1 | 369.0 |

Following LaMP [25], we evaluate performance using accuracy and F1-score for classification tasks (LaMP-1, LaMP-2N, and LaMP-2M), and MAE and RMSE for the regression task (LaMP-3). For text generation tasks (LaMP-4, LaMP-5, and LaMP-7), we adopt ROUGE-1 (R-1) and ROUGE-L (R-L) [18]. Note that higher values indicate better performance for all metrics except RMSE and MAE, where lower values are better.

3.2 Comparison between LLM and LRM

We compare Llama-3.1-8B-Instruct (Llama3) and DeepSeek-R1-Distill-Llama-8B (DeepSeek-Llama3) under various settings. An example of answers generated by Llama3 and DeepSeek-Llama3 is shown in Figure 6. The results are presented in Table 1. Surprisingly, we observe that the general-purpose LLM (Llama3) outperforms the LRM (DeepSeek-Llama3) across most tasks in retrieval-intensive settings (RAG with k = 4). For instance, in LaMP-1, Llama3 achieves an accuracy of 0.760 with RAG (k = 4), compared to DeepSeek's 0.712. However, in the RAG setting with k = 1, DeepSeek outperforms Llama3 on most tasks. We hypothesize that general-purpose LLMs benefit more from in-context learning when provided with more relevant user-specific examples, leveraging these examples to adapt flexibly to diverse tasks. In contrast, LRMs may prioritize divergent or exploratory reasoning, potentially overlooking critical user profile details or failing to align outputs with expected formats. To investigate this performance gap, our detailed analysis uncovers three critical limitations of LRMs in personalization tasks.

• Limited Divergent Thinking: LRMs are optimized for convergent reasoning, excelling in tasks with well-defined solutions (e.g., solving equations). However, personalization often demands divergent thinking: the ability to explore multiple intermediate hypotheses and synthesize user-specific nuances. LRMs frequently fail to generate the creative, exploratory steps necessary to capture subtle user preferences, leading to suboptimal outputs.

• Poor Response Format Alignment: Personalization tasks often require outputs to adhere to specific formats. LRMs, trained primarily for correctness in reasoning, tend to produce inconsistent or unstructured responses, undermining their utility in practical settings.

• Inefficient Utilization of Retrieved Knowledge: While RAG frameworks enhance LLMs by providing external context, LRMs struggle to integrate retrieved information effectively. Their reasoning processes often prioritize internal logic over contextual relevance, resulting in outputs that fail to reflect user-specific insights embedded in the retrieved data.

TAKEAWAY I: General-purpose LLMs surpass LRMs on more tasks under the retrieval-intensive setting, likely due to stronger in-context learning ability.
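For reference, the metric suite above can be computed with standard packages. The sketch below uses scikit-learn and the rouge-score package on toy predictions; it is one plausible implementation rather than the benchmark's official scorer, and the macro F1 averaging is our assumption.

import math
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error, mean_squared_error
from rouge_score import rouge_scorer

# Classification (LaMP-1/2N/2M): accuracy and macro F1.
y_true, y_pred = ["sports", "tech", "sports"], ["sports", "tech", "tech"]
print(accuracy_score(y_true, y_pred), f1_score(y_true, y_pred, average="macro"))

# Regression (LaMP-3): MAE and RMSE over predicted ratings.
r_true, r_pred = [5, 3, 1], [4, 3, 2]
print(mean_absolute_error(r_true, r_pred), math.sqrt(mean_squared_error(r_true, r_pred)))

# Generation (LaMP-4/5/7): ROUGE-1 and ROUGE-L against the reference text.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score("a dunbar number for online social networks",
                   "new dunbar number in social networks"))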
3.3 Comparison between Model Scaling
To investigate the impact of model size on personalization performance, we evaluate the Qwen2.5-Instruct series models [35] (with 1.5B, 7B, 14B, and 32B parameters) and their DeepSeek-Distill-Qwen counterparts [6], which are optimized for reasoning, on LaMP tasks under RAG settings (k = 1 and k = 4). The Qwen2.5-Instruct-1.5B model is abbreviated as Qwen1.5, and similarly for the other sizes. Results are reported in Tables 3 and 4, respectively.

We observe that neither the Qwen models nor the DeepSeek-Distill-Qwen models consistently dominate across all tasks. Qwen models often outperform DeepSeek-Distill-Qwen models in generation tasks, particularly in the few-shot setting. Conversely, DeepSeek-Qwen models tend to excel in classification and regression tasks, especially with increased context. For the DeepSeek-Qwen models, larger models generally outperform smaller ones across most tasks. This verifies that the enhanced reasoning ability of larger models benefits personalization tasks. Moreover, increasing the context from k = 1 to k = 4 generally improves performance for both the base and DeepSeek-Qwen models across most tasks, with more pronounced gains in classification and regression tasks.

TAKEAWAY II: Larger LRMs generally outperform smaller ones due to enhanced reasoning capability.

3.4 Impact of Generated Token Length

We analyze the trends in average token length by comparing the Llama, Qwen, and DeepSeek series models under different settings. The results are presented in Tables 2, 5, and 6. Our findings indicate that large reasoning models generate significantly longer outputs than their base counterparts (Qwen and Llama) across all tasks and settings. Additionally, token lengths vary considerably across tasks, with generation tasks typically yielding longer outputs than classification or regression tasks for both the base models and the DeepSeek models. In Tables 5 and 6, we observe that larger general language models tend to produce slightly longer outputs than smaller ones, while the opposite trend holds for the larger reasoning models (LRMs). Increasing the context size from k = 1 to k = 4 generally results in slightly longer outputs for DeepSeek models, particularly for smaller LRMs.

TAKEAWAY III: Larger general LLMs produce longer responses, while larger LRMs tend to generate shorter yet more effective reasoning paths.

4 Methodology

We propose Reinforced Reasoning for Personalization (R2P), a novel framework designed to enhance the performance of LRMs in personalization tasks. R2P combines a hierarchical reasoning thought template, which decomposes complex personalization tasks into structured subtasks, with a reasoning process intervention that reinforces the reasoning process to ensure alignment with the desired reasoning path. A self-referencing module is introduced to ensure consistency. The pipeline of the proposed R2P is presented in Figure 2.

4.1 Hierarchical Reasoning Thought Template

As analyzed in Section 3.2, LRMs often struggle with personalization tasks due to their tendency toward divergent thinking and inefficient use of RAG information. Without a structured framework, LRMs may produce outputs that lack focus, deviate from user-specific requirements, or underutilize critical contextual data, leading to suboptimal personalization performance.

Figure 2: Pipeline of the proposed R2P. For a given query, the model integrates a reasoning thought template to guide structured reasoning, employs self-referencing to ensure consistency, and incorporates reasoning process intervention to maintain alignment with task requirements.
To solve this, we propose a hierarchical reasoning thought template (HRT), which offers a structured approach to guide LRMs in personalization tasks. By providing a clear framework, it ensures critical RAG information is used effectively, enhancing the focus and consistency of outputs. The template, which can be handcrafted or generated by an LLM, streamlines the reasoning process into a concise pipeline: first, analyzing the input to identify user-specific requirements; then, synthesizing a user profile from preferences, interactions, or contextual cues; and finally, generating a tailored, contextually aligned response. This approach curbs divergent thinking, ensures adherence to user constraints, and boosts personalization performance by fully leveraging RAG information. A prompt example is shown in Appendix B.

The HRT addresses the limitations of LRMs by enforcing a structured pipeline that curbs excessive divergent thinking and ensures alignment with task specifications. By explicitly prioritizing retrieved context, the HRT also improves the utilization of RAG data, overcoming the inefficiencies observed in baseline LRMs.

4.2 Reasoning Process Intervention

Even with a structured template, LRMs may deviate from the intended reasoning path due to their tendency to prioritize internal logic over the provided template. To address this, we introduce a Reasoning Process Intervention (RPI) mechanism that dynamically monitors and corrects the reasoning process. RPI operates as a feedback loop, ensuring that all subtasks of the HRT are executed correctly and that the output aligns with personalization goals.

The RPI mechanism functions as follows: after the LRM completes the reasoning chain, we evaluate the output against a checklist derived from the HRT. This checklist verifies whether each subtask (e.g., query decomposition, profile synthesis) has been adequately addressed and whether the response adheres to the required format and user preferences. If deviations are detected (e.g., a missing user profile analysis), RPI intervenes at the end of the reasoning process and injects an instruction such as, "Wait, let me analyze the user profile." The LRM is then instructed to revise its reasoning chain, starting from the flagged subtask, while preserving valid components of the previous output. In this way, RPI enhances model performance by preventing LRMs from producing misaligned outputs.

4.3 Self-Referencing Module

Personalization tasks often require consistency across multiple outputs to ensure coherence and reliability. However, LRMs may produce varied responses for similar inputs due to their exploratory reasoning tendencies. For classification tasks, majority voting [31] can be effectively employed; for generative tasks, however, majority voting cannot be directly applied. To address this challenge, we propose a Self-Referencing Module (SRM) to refine outputs.

The SRM operates as follows: for a given query, the LRM generates multiple candidate responses (e.g., n outputs) using the HRT and RPI. Each response incorporates slight variations in the reasoning chain to explore different personalization angles. These responses are then integrated into the prompt sent to the LRM for generating a new output. The prompt is structured as: "Possible answers might be: [Response 1], ..., [Response n]." The LRM is provided with all candidate responses and instructed to synthesize a final output that balances the strengths of each while adhering to the HRT. We employ a collective self-referencing approach, which allows for parallelization. Alternatively, an iterative self-referencing method could be used, where in each round the LRM generates answers for the next round of reference. In this way, the SRM ensures that the final output is both personalized and coherent, effectively addressing the divergent-thinking limitations of LRMs.
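To summarize Sections 4.1-4.3 procedurally, the sketch below wires the three modules together. It is illustrative only and not the authors' released implementation: generate() is a placeholder for an arbitrary LRM call, and the two-entry checklist is a toy reduction of the HRT-derived checks described above.

def generate(prompt):
    # Placeholder for an actual LRM call (e.g., via vLLM or an API endpoint).
    return "Draft answer that mentions the user profile and the retrieved context."

# Toy HRT-derived checklist: each entry maps a subtask to a cue string
# that a faithful reasoning chain should contain.
CHECKLIST = {
    "user profile analysis": "user profile",
    "retrieved-context usage": "retrieved",
}

def reason_with_intervention(prompt, max_rounds=2):
    # RPI: regenerate with an injected corrective instruction until checks pass.
    output = generate(prompt)
    for _ in range(max_rounds):
        missing = [name for name, cue in CHECKLIST.items() if cue not in output.lower()]
        if not missing:
            break
        output = generate(prompt + "\nWait, let me revisit the " + missing[0] + ".")
    return output

def self_reference(prompt, n=2):
    # SRM (collective variant): synthesize one answer from n candidate responses.
    candidates = [reason_with_intervention(prompt) for _ in range(n)]
    joined = ", ".join("[" + c + "]" for c in candidates)
    return generate(prompt + "\nPossible answers might be: " + joined + ".")

print(self_reference("Generate a personalized headline for this article: ..."))

The collective variant shown here issues the n candidate generations independently, which is why it parallelizes trivially; the iterative variant would instead feed each round's answers into the next round's prompt.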
Table 7: Main experiment results on the LaMP benchmark. The best scores for each task are highlighted in boldface. * denotes p < 0.05 based on a paired t-test.

| Setting | Method | LaMP-1 Acc↑ | LaMP-1 F1↑ | LaMP-2N Acc↑ | LaMP-2N F1↑ | LaMP-2M Acc↑ | LaMP-2M F1↑ | LaMP-3 MAE↓ | LaMP-3 RMSE↓ | LaMP-4 R-1↑ | LaMP-4 R-L↑ | LaMP-5 R-1↑ | LaMP-5 R-L↑ |
| k=1 | RAG | 0.650 | 0.649 | 0.698 | 0.421 | 0.332 | 0.257 | 0.383 | 0.678 | 0.152 | 0.133 | 0.406 | 0.341 |
| k=1 | PAG | 0.648 | 0.648 | 0.715 | 0.398 | 0.298 | 0.227 | 0.395 | 0.717 | 0.143 | 0.125 | 0.300 | 0.253 |
| k=1 | Self-Verification | 0.613 | 0.611 | 0.713 | 0.425 | 0.307 | 0.209 | 0.432 | 0.746 | 0.117 | 0.103 | 0.249 | 0.210 |
| k=1 | R2P | 0.673* | 0.673 | 0.697 | 0.394 | 0.282 | 0.200 | 0.345 | 0.637 | 0.155 | 0.135 | 0.412 | 0.342 |
| k=4 | RAG | 0.712 | 0.711 | 0.697 | 0.392 | 0.403 | 0.287 | 0.325 | 0.664 | 0.166 | 0.148 | 0.413 | 0.344 |
| k=4 | PAG | 0.710 | 0.710 | 0.722 | 0.401 | 0.437 | 0.318 | 0.280 | 0.619 | 0.164 | 0.147 | 0.344 | 0.289 |
| k=4 | Self-Verification | 0.695 | 0.694 | 0.713 | 0.406 | 0.395 | 0.284 | 0.322 | 0.633 | 0.125 | 0.109 | 0.266 | 0.225 |
| k=4 | R2P | 0.743* | 0.742* | 0.720 | 0.417 | 0.427 | 0.319 | 0.257* | 0.559* | 0.174* | 0.154 | 0.429 | 0.356 |

Figure 3: Ablation study of R2P across four LaMP tasks.

5 Experiment

5.1 Experiment Setup

We adopt the same experimental setup as described in Section 3.1. We compare our approach with several baseline methods, including retrieval-augmented prompting (RAG) [25], profile-augmented prompting (PAG) [23], and self-verification [33]. Following [40], we exclude LaMP-7 due to its inconsistent user history and query format. The reference number is set to n = 2. By default, we employ the RAG style across all experiments.

5.2 Main Results

We evaluate R2P against baseline methods in few-shot (k = 1) and multi-shot (k = 4) settings. The main results are reported in Table 7. We observe that the proposed R2P method achieves the best performance on most of the tasks. With more context, R2P consistently outperforms baselines across more tasks and achieves greater performance improvements. Notably, R2P excels in regression and generation tasks, where it consistently achieves the lowest errors and highest ROUGE scores. For classification tasks, R2P dominates LaMP-1 and shows competitive performance on LaMP-2N and LaMP-2M, particularly in the multi-shot setting. The baselines, while occasionally competitive, generally lag behind R2P, especially as context increases. These results highlight R2P's effectiveness in personalization tasks, leveraging context to improve performance across diverse evaluation metrics.

5.3 Ablation Study

We conduct an ablation study using the full R2P alongside three variants: without the reasoning template, without reasoning process intervention, and without self-referencing. The results are presented in Figure 3. We observe that the full R2P achieves the best results in most cases, which verifies the effectiveness and importance of each module. Additionally, we note that removing the reasoning template leads to a significant performance drop, validating the importance of guiding the LRM to generate structured output. Moreover, removing the reasoning process intervention can degrade model performance. This may be because the model does not align well with the instructions and needs intervention to better focus on the given template.
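The * markers in Table 7 come from a paired t-test; a minimal sketch with SciPy is shown below, with toy per-user scores standing in for the real evaluation outputs.

from scipy.stats import ttest_rel

# Per-user scores for R2P and the strongest baseline (toy numbers).
r2p      = [0.78, 0.71, 0.75, 0.69, 0.74]
baseline = [0.72, 0.70, 0.71, 0.65, 0.70]
t_stat, p_value = ttest_rel(r2p, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # mark with * if p < 0.05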
Figure 4: Performance under different n across four LaMP tasks.

Figure 5: Mean token lengths of generated outputs across four LaMP tasks under different configurations.

5.4 Further Analysis

Analysis of Hyperparameter n on Model Performance. We analyze the impact of the hyperparameter n, ranging from 0 to 3, on our proposed method across selected tasks. The results are shown in Figure 4. We observe that n = 2 emerges as the most effective setting, achieving the best performance in three out of four tasks and competitive results on LaMP-1. This suggests that n = 2 provides an optimal configuration for balancing model complexity and task-specific performance across classification, regression, and generation tasks. Notably, n = 1 performs best for LaMP-1, indicating task-specific sensitivity to n, particularly for simpler classification tasks. Higher values (n = 3) generally lead to slight performance degradation, while n = 0 consistently underperforms, underscoring the importance of self-referencing.

Analysis of Computation Cost of R2P. We analyze the mean token lengths of outputs generated by the LRM across four LaMP tasks under different configurations: standard RAG, generation with the hierarchical reasoning thought template, generation with the template and self-referencing, and generation with reasoning process intervention. Results are presented in Figure 5. Our findings reveal that incorporating a reasoning template reduces the mean token length compared to standard RAG. Adding self-referencing further decreases token length, and the additional computational cost of reasoning intervention remains low. This observation is interesting: standard RAG, without structured guidance, often leads to inefficient thinking, resulting in longer outputs and poorer performance. In contrast, the reasoning-template configurations produce shorter, more focused outputs with improved performance. These results suggest that generating fewer tokens correlates with better outcomes, challenging the assumption that longer outputs are inherently better. However, as our approach involves multiple steps, the overall computational cost may exceed that of standard RAG, despite achieving superior performance. In resource-constrained scenarios, using the reasoning template configuration alone strikes an effective balance between efficiency and performance, offering a practical solution for personalization tasks.

6 Conclusion

In this work, we conduct the first systematic evaluation of large reasoning models (LRMs) for personalization tasks, revealing their unexpected underperformance compared to general-purpose LLMs, particularly in retrieval-intensive scenarios. Through detailed analysis, we identify three core limitations that hinder LRMs in personalization contexts: limited divergent thinking, poor response format alignment, and inefficient use of retrieved knowledge. To address these limitations, we introduce Reinforced Reasoning for Personalization (R2P), a novel training-free framework that leverages a hierarchical reasoning thought template, reasoning process intervention, and a self-referencing mechanism to guide LRMs toward generating structured outputs. Through extensive experiments, R2P demonstrates superior performance compared with baseline methods.

References
[1] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
[3] T. Araujo and N. Bol. From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents. Computers in Human Behavior: Artificial Humans, 2(1):100030, 2024.
[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[5] J. Chen, Z. Liu, X. Huang, C. Wu, Q. Liu, G. Jiang, Y. Pu, Y. Lei, X. Chen, X. Wang, et al. When large language models meet personalization: Perspectives of challenges and opportunities. World Wide Web, 27(4):42, 2024.
[6] DeepSeek. DeepSeek-R1. https://huggingface.co/deepseek-ai/DeepSeek-R1, 2025.
[7] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[8] Q. Gu. LLM-based code generation method for Golang compiler testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 2201–2203, 2023.
[9] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[10] J. Huang and K. C.-C. Chang. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, 2023.
[11] Q. Huang, S. Fu, X. Liu, W. Wang, T. Ko, Y. Zhang, and L. Tang. Learning retrieval augmentation for personalized dialogue generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2523–2540. Association for Computational Linguistics.
[12] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[13] A. Jameson. Adaptive interfaces and agents. In The Human-Computer Interaction Handbook, pages 459–484. CRC Press, 2007.
[14] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna, F. Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
[15] A. B. Kocaballi, S. Berkovsky, J. C. Quiroz, L. Laranjo, H. L. Tong, D. Rezazadegan, A. Briatore, and E. Coiera. The personalization of conversational agents in health care: Systematic review. Journal of Medical Internet Research, 21(11):e15360, 2019.
[16] W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.
[17] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[18] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.
[19] F. Liu, Y. Liu, L. Shi, H. Huang, R. Wang, Z. Yang, L. Zhang, Z. Li, and Y. Ma. Exploring and evaluating hallucinations in LLM-powered code generation. arXiv preprint arXiv:2404.00971, 2024.
[20] S. Luo, Y. Xiao, and L. Song. Personalized federated recommendation via joint representation learning, user clustering, and model adaptation. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 4289–4293, 2022.
[21] S. Luo, Y. Xiao, X. Zhang, Y. Liu, W. Ding, and L. Song. PerFedRec++: Enhancing personalized federated recommendation with self-supervised pre-training. ACM Transactions on Intelligent Systems and Technology, 15(5):1–24, 2024.
[22] S. Luo, Y. Yao, B. He, Y. Huang, A. Zhou, X. Zhang, Y. Xiao, M. Zhan, and L. Song. Integrating large language models into recommendation via mutual augmentation and adaptive aggregation. arXiv preprint arXiv:2401.13870, 2024.
[23] C. Richardson, Y. Zhang, K. Gillespie, S. Kar, A. Singh, Z. Raeesy, O. Z. Khan, and A. Sethy. Integrating summarization and retrieval for enhanced personalization via large language models. arXiv preprint arXiv:2310.20081, 2023.
[24] S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389, 2009.
[25] A. Salemi, S. Mysore, M. Bendersky, and H. Zamani. LaMP: When large language models meet personalization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7370–7392, 2024.
[26] S. Schiaffino and A. Amandi. User–interface agent interaction: Personalization issues. International Journal of Human-Computer Studies, 60(1):129–148, 2004.
[27] C. A. Thompson, M. H. Goker, and P. Langley. A personalized system for conversational recommendations. Journal of Artificial Intelligence Research, 21:393–428, 2004.
[28] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[29] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[30] K. Wang, H. Ren, A. Zhou, Z. Lu, S. Luo, W. Shi, R. Zhang, L. Song, M. Zhan, and H. Li. MathCoder: Seamless code integration in LLMs for enhanced mathematical reasoning. In The Twelfth International Conference on Learning Representations.
[31] X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[33] Y. Weng, M. Zhu, F. Xia, B. Li, S. He, S. Liu, B. Sun, K. Liu, and J. Zhao. Large language models are better reasoners with self-verification. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2550–2575, 2023.
[34] F. Xu, Q. Hao, Z. Zong, J. Wang, Y. Zhang, J. Wang, X. Lan, J. Gong, T. Ouyang, F. Meng, et al. Towards large reasoning models: A survey of reinforced reasoning with large language models. arXiv preprint arXiv:2501.09686, 2025.
[35] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[36] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.
[37] Z. Zhang, R. A. Rossi, B. Kveton, Y. Shao, D. Yang, H. Zamani, F. Dernoncourt, J. Barrow, T. Yu, S. Kim, et al. Personalization of large language models: A survey. arXiv preprint arXiv:2411.00027, 2024.
[38] A. Zhou, K. Wang, Z. Lu, W. Shi, S. Luo, Z. Qin, S. Lu, A. Jia, L. Song, M. Zhan, et al. Solving challenging math word problems using GPT-4 Code Interpreter with code-based self-verification. In The Twelfth International Conference on Learning Representations.
[39] X. Zhou, Y. Xu, Y. Li, A. Josang, and C. Cox. The state-of-the-art in personalized recommender systems for social networking. Artificial Intelligence Review, 37:119–132, 2012.
[40] Y. Zhuang, H. Sun, Y. Yu, R. Qiang, Q. Wang, C. Zhang, and B. Dai. HYDRA: Model factorization framework for black-box LLM personalization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

A Evaluation Details

Hardware and Software. We conduct all LLM personalization experiments on a server with an Intel(R) Xeon(R) Gold 6354 CPU @ 3.00GHz and an NVIDIA A40 48GB GPU, using Python 3.10.14. We use vLLM [16] for inference acceleration.

LaMP Dataset Introduction.
Personalized Citation Identification (LaMP-1): This task focuses on identifying relevant citations tailored to a user's specific research interests and past citation patterns.
Personalized News Categorization (LaMP-2N): In this task, the model categorizes news articles based on user preferences, such as past reading history and topical interests.
Personalized Movie Tagging (LaMP-2M): The model assigns personalized tags to movies based on user viewing history and taste.
Personalized Product Rating (LaMP-3): This task evaluates the model's ability to predict product ratings that align with individual user preferences and prior feedback.
Personalized News Headline Generation (LaMP-4): The goal is to generate news headlines that are tailored to a user's interests, ensuring relevance and engagement.
Personalized Scholarly Title Generation (LaMP-5): This task involves generating scholarly article titles that reflect a user's research focus and academic style.
Personalized Tweet Paraphrasing (LaMP-7): The model paraphrases tweets, taking into account the user's writing style and preferred tone.

B Prompt Template

Handcrafted Prompt for Hierarchical Reasoning Thought Template:
Please follow these steps to handle user queries and ensure the output meets requirements: 1. Question Analysis: Carefully analyze the user input to clarify the intent and specific needs of the query. Extract key information such as task type (e.g., judgment, generation, summary) and constraints (e.g., format, scope). 2. User Profile Integration: Use the background information provided by the user (e.g., research field, interests, past queries) to infer preferences and personalize the content. If no specific profile is available, assume the user needs accurate and professional answers. 3. Retrieval-Augmented Generation: If external information (e.g., references, data) is referenced, prioritize using the context provided by the user to avoid irrelevant assumptions. 4. Content Generation: Based on the question analysis, user profile, and retrieved information, generate a preliminary answer that meets the requirements. Ensure the content is accurate, relevant, and logically clear. 5. Format Alignment and Checking: Verify that the output conforms to any specified format requirements (e.g., "answer only" or "no explanation"). If no format is specified, default to a concise and readable style. 6. Conciseness Optimization: Remove redundant information and avoid overthinking or irrelevant details. Ensure the answer directly addresses the core of the query. 7. Final Output: Present the final answer clearly and concisely, ensuring it meets user needs.
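Complementing the evaluation details in Appendix A, the snippet below sketches how batched inference might be run with vLLM under the decoding settings of Section 3.1; the max_tokens value and the prompt are illustrative.

from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B")
# Temperature 0.6 follows the official guideline for the distilled reasoning model.
params = SamplingParams(temperature=0.6, max_tokens=2048)

prompts = ["Generate a title for the following abstract of a paper: ..."]
for request_output in llm.generate(prompts, params):
    print(request_output.outputs[0].text)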
C Limitation

Due to computational constraints, our evaluation is limited to a subset of large reasoning models (LRMs), specifically the Llama, Qwen, and DeepSeek-R1-Distill series, leaving other prominent LRMs, such as OpenAI o1, untested. This restricts the generalizability of our findings across the broader spectrum of reasoning models. Additionally, our study primarily focuses on training-free methods, such as the proposed R2P framework, while supervised fine-tuning approaches remain underexplored. Future work could address these gaps by evaluating a wider range of models and incorporating alignment strategies to further enhance personalization performance. Lastly, the LaMP benchmark, while comprehensive, may not capture all real-world personalization scenarios, particularly those involving dynamic user interactions or multimodal data, which could limit the applicability of our findings in more complex settings.

D Broad Impact

This work advances the personalization capabilities of large reasoning models (LRMs). By improving the ability of LRMs to generate user-specific outputs, our proposed framework can enhance user experiences in applications such as personalized education, recommendations, and conversational agents. However, these advancements also introduce potential negative societal impacts. Enhanced personalization may exacerbate privacy concerns, as models rely on detailed user profiles that could be misused if not properly safeguarded. For example, unauthorized access to user data could lead to targeted manipulation or discrimination.
(a) Question: Generate a title for the following abstract of a paper: Recent years have witnessed the explosive growth of online social networks (OSNs). They provide powerful IT-innovations for online social activities such as organizing contacts, publishing content, and sharing interests between friends who may never meet before. As more and more people become active users of OSNs, one may ponder questions such as (1) Do OSNs indeed improve our sociability? (2) To what extent can we expand our offline social spectrum in OSNs? (3) Can we identify some interesting user behaviors in OSNs? Our work in this paper attempts to answer these interesting questions. First, we systematically validate the existence of a new Dunbar's number in OSNs, which ranges from 200 to 300 empirically. To reach this, we conduct local-structure analysis as well as user-interaction analysis on extensive real-world OSNs. Second, based on this new number, we divide OSN users into two categories: the rational and the aggressive, and find that rational users intend to develop close and reciprocated relationships, whereas aggressive users have no consistent behaviors. Third, we propose a simple model to highlight the constraints of time and cognition that may affect the evolution of OSNs heavily. Finally, we discuss the potential use of our findings for viral marketing and privacy management in OSNs.

(b) Answer generated by Llama3: Characterizing User Behavior in Online Social Networks: A Study on Sociability, User Categories, and Evolution

(c) Answer generated by DeepSeek-Llama3: <think> Okay, the user has provided an abstract of a paper and wants me to generate a title for it. Let me read through the abstract carefully to understand the key points. The paper discusses the growth of online social networks (OSNs) and addresses three main questions about their impact on sociability, expanding the offline social spectrum, and identifying user behaviors. First, the study validates a new Dunbar number for OSNs, which is empirically found to range from 200 to 300. They did this through local-structure and user-interaction analyses on real-world OSNs. So, the title should probably mention something about the Dunbar number or the empirical findings. Next, the paper categorizes users into rational and aggressive. Rational users form close, reciprocated relationships, while aggressive users don't show consistent behavior. This suggests the title should reflect these categories or the implications of user behavior on OSNs. They also propose a model about constraints of time and cognition affecting OSN evolution. This adds another layer to the title, possibly focusing on the factors influencing the network's growth or dynamics. Furthermore, the findings are discussed in the context of viral marketing and privacy management. The title should indicate the practical applications or implications of their study. Putting this together, I need a title that encapsulates all these aspects: the empirical discovery of a new Dunbar number, the categorization of users, the proposed model, and the applications. Maybe something like Empirical Insights into Online Social Networks: The Dunbar Number, User Behavior, and Implications for Marketing and Privacy. I think this covers all the main points without being too wordy. It starts with the methodology (empirical), then the specific finding (Dunbar number), the user behavior aspect, and ends with the applications. That should make the title comprehensive and informative. </think> Empirical Insights into Online Social Networks: The Dunbar Number, User Behavior, and Implications for Marketing and Privacy

Figure 6: An example of answers generated by Llama3 and DeepSeek-Llama3.