bibliography, one on each line. Copy the references verbatim. Respond only with the list of references, no other text. Do not include bullet points or numbered lists, but edit the formatting so each citation fits on exactly one line. Do not include parenthetical citations. Just focus on the bibliography section.

A.2 Few-shot annotation

No chain-of-thought system prompt

You are a helpful assistant with an expertise in annotating bibliographic references in JATS XML format. You will be given a plaintext citation and you should respond with the annotated reference. ONLY respond with the annotation and nothing else.

# Examples

### Citation

Stone NJ, Robinson JG, Lichtenstein AH, et al. 2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation 2014;129(25 Suppl 2):S1-S45. doi:10.1161/01.cir.0000437738.63853.7a.

### Annotation

<mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Stone</surname> <given-names>NJ</given-names></string-name>, <string-name><surname>Robinson</surname> <given-names>JG</given-names></string-name>, <string-name><surname>Lichtenstein</surname> <given-names>AH</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines</article-title>. <source><italic>Circulation</italic></source> <year>2014</year>;<volume>129</volume>(<issue>25</issue> <supplement>Suppl 2</supplement>):<fpage>S1</fpage>-<lpage>S45</lpage>. <comment>doi</comment>:<pub-id pub-id-type="doi">10.1161/01.cir.0000437738.63853.7a</pub-id>.</mixed-citation>

### Citation

Sagel Z, Tutluer Mİ, Peskircioglu H, et al.: Determination of Effect of Chemical Mutagen EMS on TAEK A-3 and TAEK C-10 Mutant Soybean Varieties in M1 Generation. Ekin Journal of Crop Breeding and Genetics. 2017; 3(1): 19–24. Reference Source

### Annotation

<mixed-citation publication-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Sagel</surname> <given-names>Z</given-names></name>, <name name-style="western"><surname>Tutluer</surname> <given-names>Mİ</given-names></name>, <name name-style="western"><surname>Peskircioglu</surname>, <given-names>H</given-names></name>, <etal /></person-group>:<article-title>Determination of Effect of Chemical Mutagen EMS on TAEK A-3 and TAEK C-10 Mutant Soybean Varieties in M<sub>1</sub> Generation.</article-title>. <source><italic toggle="yes">Ekin Journal of Crop Breeding and Genetics</italic></source>. <year>2017</year>;<volume>3</volume>(<issue>1</issue>): <fpage>19</fpage>–<lpage>24</lpage>. <ext-link>Reference Source</ext-link></mixed-citation>

Chain-of-thought system prompt

You are a helpful assistant with an expertise in annotating bibliographic references in JATS XML format. You will be given a plaintext citation and you should respond with the annotated reference. Before providing the final XML annotation, give your step-by-step thinking. After your explanation, provide the final XML annotation.

# Examples

### Citation

Stone NJ, Robinson JG, Lichtenstein AH, et al.
2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation 2014;129(25 Suppl 2):S1-S45. doi:10.1161/01.cir.0000437738.63853.7a.

### Annotation

<mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Stone</surname> <given-names>NJ</given-names></string-name>, <string-name><surname>Robinson</surname> <given-names>JG</given-names></string-name>, <string-name><surname>Lichtenstein</surname> <given-names>AH</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines</article-title>. <source><italic>Circulation</italic></source> <year>2014</year>;<volume>129</volume>(<issue>25</issue> <supplement>Suppl 2</supplement>):<fpage>S1</fpage>-<lpage>S45</lpage>. <comment>doi</comment>:<pub-id pub-id-type="doi">10.1161/01.cir.0000437738.63853.7a</pub-id>.</mixed-citation>

### Citation

Sagel Z, Tutluer Mİ, Peskircioglu H, et al.: Determination of Effect of Chemical Mutagen EMS on TAEK A-3 and TAEK C-10 Mutant Soybean Varieties in M1 Generation. Ekin Journal of Crop Breeding and Genetics. 2017; 3(1): 19–24. Reference Source

### Annotation
<mixed-citation publication-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Sagel</surname> <given-names>Z</given-names></name>, <name name-style="western"><surname>Tutluer</surname> <given-names>Mİ</given-names></name>, <name name-style="western"><surname>Peskircioglu</surname>, <given-names>H</given-names></name>, <etal /></person-group>:<article-title>Determination of Effect of Chemical Mutagen EMS on TAEK A-3 and TAEK C-10 Mutant Soybean Varieties in M<sub>1</sub> Generation.</article-title>. <source><italic toggle="yes">Ekin Journal of Crop Breeding and Genetics</italic></source>. <year>2017</year>;<volume>3</volume>(<issue>1</issue>): <fpage>19</fpage>–<lpage>24</lpage>. <ext-link>Reference Source</ext-link></mixed-citation>

B Energy consumption

NVIDIA lists the max thermal-design power (TDP) of the H200 chip to be up to 700 W (NVIDIA Corporation, 2024). Assuming that it was run at its full TDP for six hours, it would have used 0.7 kW × 6 h = 4.2 kWh during its use. In 2024, the average power usage effectiveness (PUE) for a U.S.-based data center was 1.56 (Donnellan et al., 2024). Assuming this PUE, we can estimate that our experiments actually required 4.2 kWh × 1.56 ≈ 6.6 kWh.

The average carbon intensity for the U.S. power grid is estimated to be 823.1 lb CO₂ per MWh (U.S. Environmental Protection Agency, 2024). This is equivalent to 373.35 kg CO₂ per MWh, or 0.373 kg CO₂ per kWh. Thus, we estimate the overall emissions of these experiments to be 6.6 kWh × 0.373 kg CO₂/kWh ≈ 2.46 kg CO₂.

According to the EPA, the average U.S. gasoline-powered passenger vehicle (including cars and trucks) emits 3.93 × 10⁻⁴ metric tons CO₂ per mile, or 0.393 kg CO₂ per mile (U.S. Environmental Protection Agency, 2024). This means that the total emissions for our experiments are roughly equivalent to driving this vehicle 2.46 kg CO₂ ÷ 0.393 kg CO₂ per mile ≈ 6.26 miles.
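As a sanity check, the chain of conversions above can be reproduced in a few lines of Python. This is a minimal sketch using only the constants quoted in this appendix; the six-hour full-TDP runtime is the same assumption made in the text, and small differences in the last digits come from rounding.

```python
# Energy and emissions estimate for the experiments, following the text above.
TDP_KW = 0.7                   # NVIDIA H200 max TDP: 700 W
HOURS = 6                      # assumed runtime at full TDP
PUE = 1.56                     # average U.S. data-center PUE in 2024
LB_PER_KG = 0.453592           # pounds per kilogram
GRID_LB_CO2_PER_MWH = 823.1    # EPA estimate for the U.S. grid
CAR_KG_CO2_PER_MILE = 0.393    # EPA average passenger vehicle

chip_kwh = TDP_KW * HOURS                                  # 4.2 kWh at the chip
total_kwh = chip_kwh * PUE                                 # ~6.6 kWh with facility overhead
kg_co2_per_kwh = GRID_LB_CO2_PER_MWH * LB_PER_KG / 1000.0  # ~0.373 kg CO2 per kWh
emissions_kg = total_kwh * kg_co2_per_kwh                  # ~2.46 kg CO2
miles = emissions_kg / CAR_KG_CO2_PER_MILE                 # ~6.26 miles

print(f"{total_kwh:.2f} kWh, {emissions_kg:.2f} kg CO2, {miles:.2f} miles")
```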
Towards Holistic Evaluation of Large Audio-Language Models: A Comprehensive Survey

Chih-Kai Yang, Neo S. Ho, Hung-yi Lee
National Taiwan University
chihkaiyang1124@gmail.com, hungyilee@ntu.edu.tw
GitHub: https://github.com/ckyang1124/LALM-Evaluation-Survey

Abstract

With advancements in large audio-language models (LALMs), which enhance large language models (LLMs) with auditory capabilities, these models are expected to demonstrate universal proficiency across various auditory tasks. While numerous benchmarks have emerged to assess LALMs' performance, they remain fragmented and lack a structured taxonomy. To bridge this gap, we conduct a comprehensive survey and propose a systematic taxonomy for LALM evaluations, categorizing them into four dimensions based on their objectives: (1) General Auditory Awareness and Processing, (2) Knowledge and Reasoning, (3) Dialogue-oriented Ability, and (4) Fairness, Safety, and Trustworthiness. We provide detailed overviews within each category and highlight challenges in this field, offering insights into promising future directions. To the best of our knowledge, this is the first survey specifically focused on the evaluations of LALMs, providing clear guidelines for the community. We will release the collection of the surveyed papers and actively maintain it to support ongoing advancements in the field.

1 Introduction

Recent advancements in large language models (LLMs) (Zhao et al., 2023; Grattafiori et al., 2024; Hurst et al., 2024) have expanded their impact beyond natural language processing (NLP) to multimodal domains (Yin et al., 2024; Team et al., 2024). Among these, large audio-language models (LALMs) (Lakhotia et al., 2021; Tang et al., 2024; Chu et al., 2024; Lu et al., 2024; Défossez et al., 2024; Fang et al., 2025) have attracted significant attention in the auditory-processing community. LALMs are multimodal LLMs that process auditory and/or textual input, such as speech, audio, and music, and generate textual and/or auditory output. They can be trained from scratch or fine-tuned from text LLM backbones with auditory modalities inserted. By integrating auditory modalities with language understanding, they show potential in auditory processing (Huang et al., 2024a), multimodal reasoning (Sakshi et al., 2025), and human-computer interaction (Lin et al., 2024a).

Figure 1: LALMs' diverse capabilities and modalities covered. Icons from https://www.flaticon.com.

As LALMs evolve, expectations for their capabilities have expanded from basic tasks like speech recognition to more complex ones such as audio-grounded reasoning (Sakshi et al., 2025) and interactive dialogue (Lin et al., 2025a). Figure 1 illustrates this multifaceted nature, emphasizing the diverse input and output modalities involved and the wide range of abilities these models are expected to demonstrate. To evaluate these capabilities, a variety of benchmarks have been developed (Lin et al., 2025a; Yang et al., 2024c; Cheng et al., 2025).

However, the evaluation landscape remains fragmented and lacks systematic organization. Existing surveys (Wu et al., 2024a; Peng et al., 2024; Cui et al., 2024; Arora et al., 2025a) focus primarily on model architectures and training methodologies, with less emphasis on the equally important role of evaluation in assessing LALMs' capabilities. This gap makes it challenging for researchers to find suitable benchmarks
for their models or to pinpoint the field's progress. Therefore, a structured overview of LALM evaluation frameworks is needed.

This paper presents a comprehensive survey of LALM evaluation frameworks and introduces a taxonomy categorizing evaluation dimensions. To the best of our knowledge, this is the first in-depth survey and taxonomy specifically focused on LALM evaluation. We organize the frameworks into four primary categories: General Auditory Awareness and Processing (§3), Knowledge and Reasoning (§4), Dialogue-oriented Ability (§5), and Fairness, Safety, and Trustworthiness (§6). We also highlight challenges in LALM evaluation (§7), such as data contamination and insufficient consideration of human diversity, while suggesting promising future directions.

Overall, our contributions are threefold: (1) presenting the first comprehensive survey of LALM evaluations, (2) proposing a structured taxonomy for LALM evaluation that offers clear guidelines for researchers, and (3) identifying key challenges and future directions to improve evaluation coverage and robustness.

2 Taxonomy of Evaluation Frameworks for Large Audio-Language Models

As LALMs integrate multimodal understanding, they tackle tasks across speech, audio, and music. Despite numerous benchmarks for LALMs emerging, the evaluation landscape remains fragmented. To address this, we present the first structured taxonomy of LALM evaluations. Figure 2 shows our taxonomy, with some works included (some span multiple categories due to their complexity). The full categorization of the surveyed works is in Appendix A. We organize the surveyed works into four categories by evaluation objectives:

• General Auditory Awareness and Processing evaluates auditory awareness and fundamental processing tasks, e.g., speech recognition and audio captioning.
• Knowledge and Reasoning assesses LALMs' knowledge acquisition and advanced reasoning skills, examining their intelligence.
• Dialogue-oriented Ability focuses on natural conversational skills, including affective and contextual interaction, dialogue management, and instruction following.
• Fairness, Safety and Trustworthiness examines bias, toxicity, and reliability for ethical, safe, and trustworthy deployment.

Each category is further divided into subcategories, as shown in Figure 2. The following sections provide a detailed overview, highlighting the current progress, limitations, and future directions.

3 General Auditory Awareness and Processing

A distinctive strength of LALMs over cascaded systems (Huang et al., 2024c; Kuan et al., 2024b) is their inherent ability to directly interpret auditory signals, capturing crucial non-verbal cues such as speaker identity, emotion, and ambient context, without relying on separate components like speech recognition or emotion recognition systems connected to an LLM. This section reviews works evaluating both acoustic awareness and foundational auditory processing, emphasizing these core capabilities that set LALMs apart from LLMs.

3.1 Auditory Awareness

Benchmarks for auditory awareness examine how effectively LALMs perceive acoustic cues like emotion, prosody, and environmental sounds. SALMon (Maimon et al., 2025) specifically evaluates sensitivity to acoustic inconsistencies (e.g., sudden speaker or emotional changes) and misalignments between acoustic signals and semantic content (e.g., conveying sad content with a cheerful tone). These evaluations reveal significant gaps between LALMs and human-level perception.
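Consistency tests of this kind are commonly reported as the fraction of paired recordings on which the model prefers the acoustically consistent version. The sketch below illustrates that style of scoring only; the `score` callable is a hypothetical stand-in for whatever likelihood-style score a given LALM exposes, not SALMon's actual implementation.

```python
from typing import Callable, Iterable, Tuple

def consistency_preference_rate(
    score: Callable[[str], float],     # hypothetical: model's likelihood-style score for an audio file
    pairs: Iterable[Tuple[str, str]],  # (consistent_wav, manipulated_wav) pairs
) -> float:
    """Fraction of pairs where the consistent recording outscores its
    manipulated counterpart (e.g., a sudden speaker or emotion change)."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    wins = sum(score(good) > score(bad) for good, bad in pairs)
    return wins / len(pairs)
```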
EmphAssess (Seyssel et al., 2024) measures LALMs' awareness of prosodic emphasis by requiring speech-to-speech paraphrasing or translation that accurately preserves and transfers
emphasis on specific parts of the input utterance. This evaluates LALMs' ability to capture and maintain fine-grained prosodic features.

These benchmarks highlight challenges in fine-grained auditory awareness among current models, underscoring the need for improved modeling of subtle acoustic and paralinguistic information (Maimon et al., 2025; Seyssel et al., 2024).

Figure 2: The taxonomy of LALM evaluation frameworks, including selected works as representative examples. The complete version is in Appendix A.
• General Auditory Awareness and Processing (§3) — Awareness (§3.1): SALMon (Maimon et al., 2025); EmphAssess (Seyssel et al., 2024). Processing (§3.2): Dynamic-SUPERB (Huang et al., 2024a); Dynamic-SUPERB Phase-2 (Huang et al., 2025a); AIR-Bench (Yang et al., 2024c); AudioBench (Wang et al., 2025a); MuChoMusic (Weck et al., 2024).
• Knowledge and Reasoning (§4) — Linguistic Knowledge (§4.1): ZeroSpeech 2021 (Nguyen et al., 2020); CSZS (Huang et al., 2024b); sStoryCloze (Hassid et al., 2023); tStoryCloze (Hassid et al., 2023). World Knowledge Assessment (§4.2): MMAU (Sakshi et al., 2025); Audiopedia (Penamakuri et al., 2025); VoiceBench (Chen et al., 2024c); VoxEval (Cui et al., 2025). Reasoning (§4.3): CompA (Ghosh et al., 2024b); MMAU (Sakshi et al., 2025); SAKURA (Yang et al., 2025a); URO-Bench (Yan et al., 2025); Audio Entailment (Deshmukh et al., 2025a); Wang et al. (2025d).
• Dialogue-oriented Ability (§5) — Conversational Ability (§5.1): StyleTalk (Lin et al., 2024a); SD-Eval (Ao et al., 2024); VoxDialogue (Cheng et al., 2025); Talking Turns (Arora et al., 2025b); Full-Duplex-Bench (Lin et al., 2025a). Instruction Following (§5.2): VoiceBench (Chen et al., 2024c); URO-Bench (Yan et al., 2025); Speech-IFeval (Lu et al., 2025).
• Fairness, Safety, and Trustworthiness (§6) — Fairness and Bias (§6.1): Lin et al. (2024c); Spoken Stereoset (Lin et al., 2024b). Safety (§6.2): VoiceBench (Chen et al., 2024c); Yang et al. (2025b); Roh et al. (2025). Hallucination (§6.3): Kuan et al. (2024a); CMM (Leng et al., 2024).

3.2 Auditory Processing

Building on auditory awareness, LALMs must also excel in fundamental auditory tasks, such as speech recognition, audio classification, and music analysis, to support advanced real-world applications. A list of commonly evaluated tasks and their corresponding datasets is provided in Appendix B for reference. Initially driven by representation learning models (Baevski et al., 2020; Hsu et al., 2021; Li et al., 2022), enriched datasets (Pratap et al., 2020; Piczak, 2015a; Hawthorne et al., 2019), and existing benchmarks (Yang et al., 2021; Turian et al., 2022; Yuan et al., 2023), recent works adapt these resources into instruction-oriented evaluation frameworks tailored for LALMs.

Dynamic-SUPERB (Huang et al., 2024a) initiated this direction, constructing 55 multiple-choice question-answering (QA) tasks spanning speech, audio, and music modalities. Subsequent efforts, such as AIR-Bench (Yang et al., 2024c) and AudioBench (Wang et al., 2025a), extend to open-ended QA formats. MuChoMusic (Weck et al., 2024) specifically emphasizes music-related tasks, while Dynamic-SUPERB Phase-2 (Huang et al., 2025a) significantly enlarges the benchmark to 180 tasks, forming the largest evaluation suite for LALMs' general processing abilities to date.

Given the task diversity, various evaluation metrics are adopted depending on the task specificity, such as word error rate for speech recognition and BLEU score (Papineni et al., 2002) for translation; a minimal word-error-rate sketch follows.
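For concreteness, word error rate is just a normalized Levenshtein distance over word sequences. The sketch below is a generic implementation of the standard metric, not any benchmark's official scorer:

```python
# Word error rate (WER): edit distance over words, normalized by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```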
There is also an emerging trend that
includes LLM-as-a-judge (Gu et al., 2024) for scalable, automatic evaluation of open-ended responses (Huang et al., 2025a; Yang et al., 2024c; Wang et al., 2025a).

Despite achieving promising results in certain areas, these benchmarks demonstrate that current LALMs still fall short of universally robust performance across auditory-processing tasks (Huang et al., 2025a), highlighting substantial room for improvement toward truly auditory foundation models.

4 Knowledge and Reasoning

Intelligent LALMs should demonstrate extensive knowledge and advanced reasoning to tackle complex real-world tasks. Current evaluations emphasize these abilities through three categories: Linguistic Knowledge, World Knowledge Assessment, and Reasoning. Each category targets distinct but complementary skills, collectively providing a comprehensive evaluation. These assessments reveal key challenges LALMs face in mastering knowledge and reasoning for advanced tasks.

4.1 Linguistic Knowledge

Linguistic knowledge refers to understanding and effectively using spoken language. Evaluations of LALMs' linguistic proficiency typically use likelihood-based benchmarks where models choose the more linguistically plausible option from paired speech samples. These tests cover lexical knowledge, syntax, and semantic coherence.

Representative works include the ZeroSpeech 2021 benchmark (Nguyen et al., 2020), which consists of multiple tracks for evaluating linguistic capabilities. The lexical-level assessment track, sWUGGY, tests models' ability to distinguish between real words and phonotactically similar non-words, while the syntactic sensitivity evaluation track, sBLIMP, focuses on differentiating grammatical from ungrammatical sentences. CSZS (Huang et al., 2024b) extends syntactic evaluation to multilingual and code-switched scenarios. Narrative and semantic coherence are evaluated by tasks like sStoryCloze and tStoryCloze (Hassid et al., 2023), where models are tasked with selecting semantically appropriate continuations to spoken stories.

4.2 World Knowledge Assessment

Real-world tasks often demand integrating external knowledge beyond basic auditory understanding. World knowledge assessment evaluates LALMs on two main aspects: (1) auditory expertise like music structure and medical sound diagnosis, and (2) general commonsense and factual knowledge.

Benchmarks that evaluate auditory expertise include MuChoMusic (Weck et al., 2024) and MMAU (Sakshi et al., 2025), which focus on musical understanding, such as melodic structure, harmony, instrument identification, and contextual music interpretation. Additionally, SAGI (Bu et al., 2024) assesses medical expertise, such as recognizing illnesses from audio cues like coughing.

Commonsense and factual knowledge evaluations often convert established text benchmarks into spoken form using text-to-speech (TTS); a sketch of this conversion appears below. VoxEval (Cui et al., 2025) and VoiceBench serve as spoken counterparts to MMLU (Hendrycks et al., 2021) and MMLU-Pro (Wang et al., 2024), testing models across diverse factual domains like social science and humanities. Audiopedia (Penamakuri et al., 2025) uses knowledge graphs from Wikidata (Vrandečić and Krötzsch, 2014) to create audio-based, knowledge-intensive QA tasks that evaluate models' knowledge of well-known entities, such as brands, mentioned in audio.
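As an illustration of that TTS-based conversion, the sketch below renders a multiple-choice text item as a spoken question while keeping the gold label in text for automatic scoring. The `synthesize` callable is a hypothetical placeholder for an arbitrary TTS engine, and the item format is invented for the example:

```python
from typing import Callable, Dict

def spoken_qa_item(
    item: Dict,                               # e.g., {"question": ..., "choices": [...], "answer": "B"}
    synthesize: Callable[[str, str], bytes],  # hypothetical TTS: (text, voice) -> audio bytes
    voice: str = "default",
) -> Dict:
    """Render a text QA item as a spoken question; the gold label stays in
    text so the LALM's answer can be scored automatically."""
    prompt = item["question"] + " " + " ".join(
        f"Option {label}: {choice}."
        for label, choice in zip("ABCD", item["choices"])
    )
    return {"audio": synthesize(prompt, voice), "answer": item["answer"]}
```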
These benchmarks thoroughly assess LALMs' knowledge acquisition, revealing challenges such as limited auditory expertise (Weck et al., 2024) and inconsistent performance across domains. Different LALMs excel in different domains, each with their own strengths, but performance often declines noticeably outside their specialized areas (Cui
et al., 2025). Overall, there remains substantial room to improve LALMs' auditory expertise and factual knowledge.

4.3 Reasoning

Reasoning over auditory inputs falls into two types. Content-based reasoning tests a model's ability to understand spoken semantic content and answer questions. Acoustic-based reasoning requires utilizing acoustic features like speaker traits and environmental sounds beyond semantics. We provide an overview of these two evaluation paradigms.

4.3.1 Content-based Reasoning

Content-based reasoning assesses LALMs' ability to reason over the semantic content of auditory queries. Current benchmarks for this capability typically transform NLP reasoning benchmarks into spoken questions via TTS and require LALMs to provide answers. For instance, VoxEval (Cui et al., 2025), URO-Bench (Yan et al., 2025), and ADU-Bench (Gao et al., 2024) convert NLP datasets like GSM8K (Cobbe et al., 2021) and MMLU (Hendrycks et al., 2021) into speech, evaluating LALMs' mathematical reasoning based on spoken questions. During synthesis, various speaking styles (e.g., mispronunciation, disfluencies, and accents) may be introduced to test models' robustness (Cui et al., 2025).

These benchmarks reveal gaps in current LALMs' content-based reasoning abilities, even with chain-of-thought (Wei et al., 2022; Kojima et al., 2022). Moreover, model performance varies significantly across speaking styles (Cui et al., 2025), indicating instability in their reasoning.

4.3.2 Acoustic-based Reasoning

Acoustic-based reasoning requires LALMs to infer from acoustic cues in auditory input, often involving reasoning across multiple auditory modalities or combining auditory understanding with cognitive skills such as compositional, temporal, logical, and multi-hop reasoning.

Cross-auditory Modality Reasoning demands joint reasoning over multiple auditory modalities, like speech and non-speech sounds. Wang et al. (2025d) propose an open-ended QA benchmark assessing co-reasoning on speech and environmental sounds, requiring reasoning over cues from distinct auditory sources to infer speakers' activities. Their findings show that current LALMs frequently neglect non-speech cues, leading to failures.

Compositional and Temporal Reasoning involves comprehending structured acoustic events, their temporal relationships, and attribute binding. Benchmarks like CompA (Ghosh et al., 2024b) evaluate these abilities through specific tasks: CompA-order challenges models to identify correct event sequences or align audio temporal structures with textual descriptions, while CompA-attribute focuses on associating sound events with their sources and attributes. MMAU (Sakshi et al., 2025) assesses temporal reasoning via event counting and duration comparison.

Logical reasoning covers structured inference, including deductive and causal reasoning. Deductive reasoning can be tested by Audio Entailment (Deshmukh et al., 2025a), which evaluates whether a textual hypothesis logically follows from auditory input based on acoustic attributes like sound sources. MMAU (Sakshi et al., 2025) examines LALMs' causal reasoning on cause-and-effect relationships of events.

Multi-hop reasoning is the ability to recall and integrate multiple pieces of information to answer complex queries, enabling models to connect stored knowledge without explicit reasoning steps (Yang et al., 2024d,e; Biran et al., 2024).
SAKURA (Yang et al., 2025a) evaluates LALMs' multi-hop reasoning by requiring integration of auditory attributes (e.g., speaker gender and emotion) with stored knowledge. Results show that LALMs struggle to combine auditory information with stored knowledge for
reasoning, even when both types of information are extracted and known by the models.

5 Dialogue-oriented Ability

While foundational skills such as auditory awareness (§3.1), fundamental processing (§3.2), language proficiency (§4.1), advanced knowledge (§4.2), and reasoning (§4.3) are essential for LALMs, natural human-AI interactions additionally require affective and contextual interaction, fluent dialogue management, and precise instruction following. This category targets these integrative skills, focusing on naturalness and controllability, which we group as Conversational Ability and Instruction Following.

5.1 Conversational Ability

Effective conversational ability in LALMs relies on generating contextually appropriate responses and smoothly managing dialogues in real time. Current evaluations address this via two complementary frameworks: affective and contextual interaction, and full-duplex dialogue management.

5.1.1 Affective and Contextual Interaction

Evaluations of affective and contextual interaction typically adopt half-duplex settings, focusing on fully turn-by-turn conversations without speaker overlaps. These benchmarks emphasize LALMs' ability to respond using both content and non-content cues such as emotional tone, speaking style, and speaker traits. StyleTalk (Lin et al., 2024a) presents models with a dialogue history and the user's current speech segment, intentionally leaving the user's intent underspecified when relying solely on the content. Consequently, models are required to leverage paralinguistic cues to respond appropriately. Subsequent works, such as SD-Eval (Ao et al., 2024) and VoxDialogue (Cheng et al., 2025), broaden the evaluation by incorporating more acoustic and contextual variables, including speaker age, accent, and environmental conditions. These benchmarks combine objective metrics (e.g., ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005)), LLM-based judgment (Gu et al., 2024), and human evaluation for comprehensive assessment.

While these benchmarks rely on static data, Li et al. (2025) propose an interactive framework inspired by Chatbot Arena (Chiang et al., 2024), where real users converse with models on topics of their choice and provide pairwise model preferences, enabling dynamic, user-centered evaluation.

5.1.2 Full-duplex Dialogue Management

Full-duplex evaluation examines LALMs in real-time, dynamic dialogues with complex behaviors like turn-taking (Duncan, 1972; Gravano and Hirschberg, 2011), backchanneling (Schegloff, 1982), and speaker interruptions and overlaps (Gravano and Hirschberg, 2012; Schegloff, 2000). These behaviors are detailed in Appendix C. Representative works, such as Talking Turns (Arora et al., 2025b) and Full-Duplex-Bench (Lin et al., 2025a), commonly evaluate four key dimensions:

• Timing for speaking up or interrupting: Assesses LALMs' ability to distinguish meaningful pauses from turn-yielding moments, avoiding undesired interruptions and taking over turns appropriately.
• Backchanneling: Evaluates whether LALMs backchannel at proper moments with suitable frequency, reflecting their active listening.
• Turn taking: Examines whether LALMs transition smoothly between turns by recognizing boundaries, managing latency, and signaling their intent to maintain or yield the floor.
• User interruption handling: Assesses LALMs' handling of interruption, e.g., pausing and smoothly resuming the conversation.

Both use automatic evaluation metrics. Talking Turns uses supervised models trained on human dialogues (Godfrey et al., 1992) as a reference, while Full-Duplex-Bench uses metrics like response latency (a minimal sketch follows). However, these methods often rely on heuristics, which may be inaccurate in some cases.
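To make the latency-style metrics concrete, here is a minimal sketch that computes average response latency from turn-level timestamps. It is illustrative only, assuming diarized (speaker, start, end) turns are available, and is not Full-Duplex-Bench's actual code:

```python
from statistics import mean
from typing import List, Tuple

# Each turn is (speaker, start_sec, end_sec), ordered by start time.
Turn = Tuple[str, float, float]

def mean_response_latency(turns: List[Turn]) -> float:
    """Average gap between the end of a user turn and the start of the next
    model turn. Negative gaps would indicate the model overlapped the user."""
    gaps = [
        nxt[1] - cur[2]
        for cur, nxt in zip(turns, turns[1:])
        if cur[0] == "user" and nxt[0] == "model"
    ]
    return mean(gaps) if gaps else float("nan")

dialogue = [("user", 0.0, 1.8), ("model", 2.3, 4.0), ("user", 4.5, 6.0), ("model", 6.2, 7.5)]
print(mean_response_latency(dialogue))  # (0.5 + 0.2) / 2 = 0.35 s
```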
Their results show that LALMs struggle with full-duplex management, especially with interruptions (Arora et al., 2025b) and seamless turn transitions (Lin et al., 2025a), highlighting current limitations in dynamic spoken interaction.

5.2 Instruction Following

Instruction following is the ability to follow user-specified instructions, e.g., requirements for performing particular actions, adhering to constraints, and adjusting response styles. Effective instruction following is essential for model controllability.

LALM instruction-following evaluations typically involve three approaches: (1) adding constraints to existing LALM benchmarks not originally for instruction following, (2) synthesizing LLM instruction-following benchmarks into speech, or (3) creating new dedicated datasets. For instance, Speech-IFeval (Lu et al., 2025) introduces constraints into LALM benchmarks such as Dynamic-SUPERB Phase-2 (Huang et al., 2025a); VoiceBench (Chen et al., 2024c) synthesizes IFEval (Zhou et al., 2023a), a text-based LLM instruction-following benchmark, into speech; and URO-Bench (Yan et al., 2025) creates custom evaluation datasets.

Evaluating instruction adherence helps distinguish between limitations in following instructions and deficiencies in auditory understanding or knowledge. Commonly evaluated constraints include length (e.g., a minimum number of words), format (e.g., responses in JSON or all caps), action (e.g., chain-of-thought reasoning (Wei et al., 2022)), style (e.g., responses in a humorous tone), and content (e.g., including a specific word). During evaluation, instruction-following rates, i.e., the frequency with which instructions are correctly followed, are measured with rule-based (Zhou et al., 2023a) or LLM-as-a-judge methods (Gu et al., 2024); a rule-based sketch follows at the end of this subsection.

Benchmark results reveal significant gaps in LALMs compared to their LLM backbones in instruction following (Lu et al., 2025), indicating catastrophic forgetting when adapting LLMs to auditory modalities.
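On the rule-based side, constraint checkers are typically small deterministic predicates. The sketch below shows a few checkers in the spirit of IFEval-style verification; the constraint set and function names are invented for illustration and are not taken from VoiceBench or Speech-IFeval:

```python
import json
from typing import Callable, Dict

# Each checker returns True iff the response satisfies its constraint.
def min_words(n: int) -> Callable[[str], bool]:
    return lambda response: len(response.split()) >= n

def is_all_caps(response: str) -> bool:
    return response == response.upper() and any(c.isalpha() for c in response)

def is_valid_json(response: str) -> bool:
    try:
        json.loads(response)
        return True
    except json.JSONDecodeError:
        return False

def contains_word(word: str) -> Callable[[str], bool]:
    return lambda response: word.lower() in response.lower()

def instruction_following_rate(responses: Dict[str, str],
                               checkers: Dict[str, Callable[[str], bool]]) -> float:
    """Fraction of responses that satisfy their paired constraint."""
    passed = sum(checkers[key](resp) for key, resp in responses.items())
    return passed / len(responses)
```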
6 Fairness, Safety, and Trustworthiness

Despite the advancements of LALMs, their real-world deployment may pose social risks, such as perpetuating biases, generating harmful content, or spreading misinformation, if not properly evaluated and regulated. Therefore, fairness, safety, and trustworthiness must be thoroughly assessed. This section reviews works that quantify these risks to ensure the responsible and ethical use of LALMs.

6.1 Fairness and Bias

Fairness and bias are key ethical concerns for LALMs, ensuring they do not reinforce societal inequalities, discrimination, stereotypes, or biases. Such issues can be triggered by either the speech content or its non-content acoustic cues. For example, content-triggered bias may arise when LALMs translate occupation-related terms in the speech content into stereotypical gendered terms, independent of acoustic characteristics. In contrast, acoustic-triggered bias may arise when vocal cues lead the model to associate a speaker's gender with certain occupations.

Lin et al. (2024c) quantifies LALMs' content-triggered gender biases via four tasks: speech-to-text translation, coreference resolution, sentence continuation, and question answering. In each task, gender biases and stereotypes are measured based on the models' responses.

Conversely, Spoken Stereoset (Lin et al., 2024b) assesses acoustic-triggered bias on speakers' gender and age. The authors sampled sentences from NLP datasets like Stereoset (Nadeem et al., 2021) and BBQ (Parrish et al., 2022), which were then rewritten in the first-person perspective with explicit gender or age
indicators (e.g., "mother") removed to ensure bias would be triggered by speaker characteristics rather than content. The modified sentences were synthesized into speech using TTS with voices of different genders and ages. These spoken sentences served as the context, and LALMs were tasked with selecting continuations from options that were stereotypical, anti-stereotypical, or unrelated to the context.

These works highlight LALMs' social biases, which may be inherited from their training data or LLM backbones. Additionally, since social biases are multifaceted, current benchmarks cannot include all possible societal factors, emphasizing the need for further research into both model development and benchmarks to enhance fairness.

6.2 Safety

Unlike fairness and bias, which expose societal prejudices in LALMs, safety concerns focus on preventing harmful or unsafe outputs that may negatively impact individuals or society, including user discomfort or illegal activities. Current studies typically use NLP datasets with malicious queries and convert them into speech via TTS. For example, VoiceBench (Chen et al., 2024c) and Roh et al. (2025) synthesize datasets like AdvBench (Zou et al., 2023) into spoken queries, evaluating LALMs on their ability to reject them.

During evaluation, jailbreaking techniques may be employed to test models' resistance to adversarial inputs. These include modifying speech content by inserting fictional scenarios (Shen et al., 2024) and applying auditory manipulations such as silence (Yang et al., 2025b), noise (Yang et al., 2025b; Xiao et al., 2025), accents (Roh et al., 2025; Xiao et al., 2025), and audio edits (Xiao et al., 2025; Gupta et al., 2025). Ideally, LALMs should remain robust to adversarially modified inputs and consistently reject malicious requests.

However, evaluations show that LALMs often accept malicious spoken inputs even when they can refuse similar textual ones (Chen et al., 2024c). Moreover, LALMs show considerable safety degradation compared to their LLM backbones (Yang et al., 2025b). Several jailbreaking methods can easily bypass these models (Roh et al., 2025; Xiao et al., 2025), highlighting the need for better multimodal safety alignment.

6.3 Hallucination

Hallucination occurs when a model generates non-factual or unsupported outputs, reducing reliability and misleading users. In LALMs, hallucinations can originate from both auditory and textual modalities. While textual hallucinations can be assessed with NLP benchmarks (Li et al., 2023; Chen et al., 2024a; Bang et al., 2025), we focus on auditory-induced hallucinations.

Kuan et al. (2024a) explores LALMs' object hallucination, where the models falsely identify objects or events absent from the auditory input. They evaluate this via two tasks: a discriminative task where LALMs determine whether a specified object exists in the audio, and a generative task where LALMs generate captions describing the audio. These captions are then evaluated for accuracy in reflecting the actual content of the audio. Despite generating accurate captions, LALMs struggle with object identification in the discriminative task, revealing challenges in object hallucination for question-answering tasks.
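A minimal sketch of such a discriminative probe is shown below. The `ask` callable is a hypothetical stand-in for querying an LALM about an audio clip; the false-alarm rate it computes (claiming to hear absent objects) is one simple way to quantify object hallucination, not Kuan et al.'s exact protocol:

```python
from typing import Callable, Iterable, Tuple

def object_hallucination_rate(
    ask: Callable[[str, str], str],           # hypothetical: (audio_path, question) -> "yes"/"no"
    probes: Iterable[Tuple[str, str, bool]],  # (audio_path, object_name, is_present)
) -> float:
    """Fraction of absent objects the model nevertheless claims to hear."""
    false_alarms, absent = 0, 0
    for audio, obj, present in probes:
        answer = ask(audio, f"Is there a sound of {obj} in the audio? Answer yes or no.")
        if not present:
            absent += 1
            false_alarms += answer.strip().lower().startswith("yes")
    return false_alarms / absent if absent else 0.0
```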
Leng et al. (2024) further analyzes object hallucination using the CMM benchmark, showing that overrepresented objects or events in the training data can lead LALMs to incorrectly predict their presence, even when they are absent. Additionally, the frequent co-occurrence of objects and events during training exacerbates these hallucinations.

These works highlight hallucination challenges in LALMs and call for improved training, modeling, and data handling to enhance trustworthiness.

7 Challenges and Future Directions

7.1 Data Leakage and Contamination

Creating and curating high-quality auditory data is far more difficult than for text. Consequently, many LALM benchmarks rely on existing auditory corpora (Panayotov et al., 2015a; Kim et al., 2019; Gemmeke et al., 2017) rather than collecting new data. This raises concerns about data leakage, since models may have seen these datasets during training (Deng et al., 2024; Zhou et al., 2023b; Jacovi et al., 2023), undermining evaluation reliability. The risk grows when large-scale web-crawled data (Radford et al., 2023; He et al., 2024) are used for training without rigorous filtering.

Thus, alongside creating or collecting custom data, developing methods to detect and mitigate contamination (Golchin and Surdeanu, 2024; Samuel et al., 2025) will be a crucial direction for more reliable LALM evaluations.

7.2 Inclusive Evaluation Across Linguistic, Cultural, and Communication Diversity

While current benchmarks cover major languages like English and Mandarin (Huang et al., 2025a; Yan et al., 2025), many overlook crucial aspects such as low-resource languages (Magueresse et al., 2020) and code-switching (Doğruöz et al., 2021; Sitaram et al., 2019). Although these have been explored in traditional speech technologies (Khare et al., 2021; Bhogale et al., 2024; Liu et al., 2024; Yang et al., 2024b), they remain underexamined in LALMs. This limited coverage fails to capture the full linguistic diversity of human communication, as different languages possess unique characteristics (Evans and Levinson, 2009; Bickel, 2014).

Cultural factors, shaped by historical and social contexts, influence dimensions like moral norms (Graham et al., 2016; Saucier, 2018) and are essential for evaluation. As LALMs extend to diverse cultures (Yang et al., 2024a; Wang et al., 2025b), evaluation frameworks must also expand.

Along with language and culture, communication patterns also matter. While some work covers speech variations like accents, underrepresented groups such as people with speech disorders (e.g., dysarthria (Kent et al., 1999; Kim et al., 2008)) are often overlooked, as current LALMs have limited familiarity with their unique speech patterns, which affects fair and accurate understanding.

To develop fair and broadly applicable LALMs, future evaluations should carefully consider linguistic, cultural, and communicative diversity.

7.3 Safety Evaluation Unique to Auditory Modalities

Current LALM safety evaluations (§6.2) mainly target harmful content in model outputs, often overlooking risks inherent to auditory modalities. Auditory cues such as tone, emotion, and voice quality can also influence user experience and raise concerns if uncontrolled. For instance, even harmless content can discomfort users if spoken harshly or sarcastically, and the presence of annoying noises can also cause irritation. Thus, safety should cover auditory comfort, not just content harmlessness.

Most benchmarks focus on content toxicity but seldom assess auditory-specific safety. Addressing these issues is vital for applications like voice assistants (Pias et al., 2024; Mari
et al., 2024), where vocal manner greatly affects user trust and comfort. Future work should jointly consider vocal tone, noise, and other paralinguistic factors to ensure safe, user-friendly interactions.

7.4 Unified Evaluation of Harmlessness and Helpfulness

Harmlessness and helpfulness in LALMs refer to safety and fairness, and the ability to assist users, respectively. Ideally, these two properties should be enhanced together; however, in practice, they often conflict (Bai et al., 2022). For example, a model that always refuses to answer is safe but unhelpful, as it fails to assist users. A recent study (Lin et al., 2025b) shows that post-training aimed at enhancing harmlessness can reduce helpfulness, causing models to reject queries even when no safety or privacy issues exist. This tension highlights the need for a unified evaluation framework that considers both aspects simultaneously.

Existing harmlessness benchmarks (§6) rarely include helpfulness, limiting understanding of their trade-offs and offering limited guidance for balancing them effectively. Thus, developing a joint evaluation framework is a key future direction.

7.5 Personalization Evaluation

Personalization enables models to adapt to individual users by incorporating private information like users' voices and preferences, supporting applications such as personalized voice assistants.

While traditional speech technologies have explored personalization (Lee et al., 2024; Joseph and Baby, 2024), it remains underdeveloped for LALMs. Unlike recent progress in LLM personalization (Tan et al., 2024, 2025; Zhang et al., 2024), LALM personalization is more complex due to the auditory dimension: LALMs must adapt to user-specific knowledge, as text LLMs do, but also become familiar with users' voice characteristics and speaking habits, and adjust their own speaking style to match user preferences. Such complexity necessitates the development of specialized evaluations to fully assess LALM personalization, making it a valuable area for future investigation.

8 Conclusion

Holistic evaluation of LALMs is as crucial as modeling and training in advancing the field. This survey reviews existing evaluation frameworks and proposes a taxonomy categorizing current progress into four important research areas, reflecting the diverse expectations of LALM capabilities. We present a thorough overview of the literature, highlighting challenges and future directions, such as data contamination, inclusivity, auditory-specific safety, and personalization. We hope this survey provides clear guidelines for researchers and stimulates further advancements in LALM evaluation.

Limitations

We acknowledge a few limitations in this paper. First, the scope of our taxonomy is based on existing evaluation frameworks and benchmarks, meaning it does not cover all possible real-world auditory tasks. The auditory modalities are inherently complex, with a wide range of tasks and applications that cannot be exhaustively covered. As LALMs continue to evolve, new capabilities and applications will emerge, leading to growing expectations for these models. Consequently, the evaluation landscape will likely expand and shift, requiring our taxonomy to be updated and adapted to include these new tasks and applications. We will continue to follow the advancements in this field and adjust our taxonomy accordingly to reflect these developments.
Second, this survey primarily focuses on current benchmarks used
to evaluate LALMs' performance across various aspects. As a result, it does not put much emphasis on more basic or traditional evaluation methods, such as subjective assessments of speech generation quality (e.g., Mean Opinion Score), which are commonly used to evaluate model-generated audio. While these methods are valuable in certain applications, they fall outside the scope of this paper, which aims to provide a comprehensive overview of more advanced and specialized benchmarks.

References

Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, and 1 others. 2023. Musiclm: Generating music from text. arXiv preprint arXiv:2301.11325.
Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu. 2024. SD-eval: A benchmark dataset for spoken dialogue understanding beyond words. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218–4222, Marseille, France. European Language Resources Association.
Siddhant Arora, Kai-Wei Chang, Chung-Ming Chien, Yifan Peng, Haibin Wu, Yossi Adi, Emmanuel Dupoux, Hung-Yi Lee, Karen Livescu, and Shinji Watanabe. 2025a. On the landscape of spoken language models: A comprehensive survey. arXiv preprint arXiv:2504.08528.
Siddhant Arora, Zhiyun Lu, Chung-Cheng Chiu, Ruoming Pang, and Shinji Watanabe. 2025b. Talking turns: Benchmarking audio foundation models on turn-taking dynamics. In The Thirteenth International Conference on Learning Representations.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, and 1 others. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
Yejin Bang, Ziwei Ji, Alan Schelten, Anthony Hartshorn, Tara Fowler, Cheng Zhang, Nicola Cancedda, and Pascale Fung. 2025. Hallulens: Llm hallucination benchmark. arXiv preprint arXiv:2504.17550.
Kaushal Santosh Bhogale, Deovrat Mehendale, Niharika Parasa, Sathish Kumar Reddy G, Tahir Javed, Pratyush Kumar, and Mitesh M. Khapra. 2024. Empowering low-resource language asr via large-scale pseudo labeling. In Interspeech 2024, pages 2519–2523.
Balthasar Bickel. 2014. Linguistic diversity and universals. The Cambridge Handbook of Linguistic Anthropology, pages 101–124.
Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva, and Amir Globerson. 2024. Hopping too late: Exploring the limitations of large language models on multi-hop queries.
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14113–14130, Miami, Florida, USA. Association for Computational Linguistics.
Fan Bu, Yuhao Zhang, Xidong Wang, Benyou Wang, Qun Liu, and Haizhou Li. 2024. Roadmap towards superhuman speech understanding using large language models. arXiv preprint arXiv:2410.13268.
Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng. 2017. Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline. In 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA), pages 1–5. IEEE.
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4):335–359.
Houwei Cao, David G. Cooper, Michael K. Keutmann, Ruben C. Gur, Ani Nenkova, and Ragini Verma. 2014. Crema-d: Crowd-sourced emotional multimodal actors dataset. IEEE Transactions on Affective Computing, 5(4):377–390.
Yupeng Cao, Haohang Li, Yangyang Yu, Shashidhar Reddy Javaji, Yueru He, Jimin Huang, Zining Zhu, Qianqian Xie, Xiao-yang Liu, Koduvayur Subbalakshmi, and 1 others. 2025. Finaudio: A benchmark for audio large language models in financial applications. arXiv preprint arXiv:2503.20990.
Kedi Chen, Qin Chen, Jie Zhou, He Yishen, and Liang He. 2024a. DiaHalu: A dialogue-level hallucination evaluation benchmark for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9057–9079, Miami, Florida, USA. Association for Computational Linguistics.
Yiming Chen, Xianghu Yue, Xiaoxue Gao, Chen Zhang, Luis Fernando D'Haro, Robby Tan, and Haizhou Li. 2024b. Beyond single-audio: Advancing multi-audio processing in audio large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 10917–10930.
Yiming Chen, Xianghu Yue, Chen Zhang, Xiaoxue Gao, Robby T Tan, and Haizhou Li. 2024c. Voicebench: Benchmarking llm-based voice assistants. arXiv preprint arXiv:2410.17196.
Xize Cheng, Ruofan Hu, Xiaoda Yang, Jingyu Lu, Dongjie Fu, Zehan Wang, Shengpeng Ji, Rongjie Huang, Boyang Zhang, Tao Jin, and Zhou Zhao. 2025. Voxdialogue: Can spoken dialogue systems understand information beyond words? In The Thirteenth International Conference on Learning Representations.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael Jordan, Joseph E Gonzalez, and 1 others. 2024. Chatbot arena: An open platform for evaluating llms by human preference. In Forty-first International Conference on Machine Learning.
Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, and 1 others. 2024. Qwen2-audio technical report. arXiv preprint arXiv:2407.10759.
Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. 2018. Voxceleb2: Deep speaker recognition. In Proc. Interspeech 2018, pages 1086–1090.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and 1 others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2023. Fleurs: Few-shot learning evaluation of universal representations of speech.
In 2022 IEEE Spoken Language Technology Workshop (SLT), pages 798–805. IEEE.
Joris Cosentino, Manuel Pariente, Samuele Cornell, Antoine Deleforge, and Emmanuel Vincent. 2020. Librimix: An open-source dataset for generalizable speech separation. arXiv preprint arXiv:2005.11262.
Wenqian Cui, Xiaoqi Jiao, Ziqiao Meng, and Irwin King. 2025. Voxeval: Benchmarking the knowledge understanding capabilities of end-to-end spoken language models. arXiv preprint arXiv:2501.04962.
Wenqian Cui, Dianzhi Yu, Xiaoqi Jiao, Ziqiao Meng, Guangyan Zhang, Qichao Wang, Yiwen Guo, and Irwin King. 2024. Recent advances in speech language models: A survey. arXiv preprint arXiv:2410.03751.
Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, and Xavier Bresson. 2017. Fma: A dataset for music analysis. In 18th International Society for Music Information Retrieval Conference.
Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, and Neil Zeghidour. 2024. Moshi: A speech-text foundation model for real-time dialogue. arXiv preprint arXiv:2410.00037.
Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. 2024. Investigating data contamination in modern benchmarks for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8706–8719, Mexico City, Mexico. Association for Computational Linguistics.
Soham Deshmukh, Shuo Han, Hazim Bukhari, Benjamin Elizalde, Hannes Gamper, Rita Singh, and Bhiksha Raj. 2025a. Audio entailment: Assessing deductive reasoning for audio understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 23769–23777.
Soham Deshmukh, Shuo Han, Rita Singh, and Bhiksha Raj. 2025b. ADIFF: Explaining audio difference using natural language. In The Thirteenth International Conference on Learning Representations.
Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: A Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012–2017, Minneapolis, Minnesota. Association for Computational Linguistics.
A Seza Doğruöz, Sunayana Sitaram, Barbara Bullock, and Almeida Jacqueline Toribio. 2021. A survey of code-switching: Linguistic and social perspectives for language technologies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1654–1666.
Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. 2020. Clotho: An audio captioning dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 736–740. IEEE.
Starkey Duncan. 1972. Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23(2):283.
Starkey Duncan and Donald W Fiske. 2015. Face-to-face Interaction: Research, Methods, and Theory. Routledge.
Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Mohammad Norouzi, Douglas Eck, and Karen Simonyan. 2017. Neural audio synthesis of musical notes with WaveNet autoencoders.
In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1068–1077. PMLR.
Nicholas Evans and Stephen C Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5):429–448.
Y. Fan, J.W. Kang, L.T. Li, K.C. Li, H.L. Chen, S.T. Cheng, P.Y. Zhang, Z.Y. Zhou, Y.Q. Cai, and D. Wang. 2020. Cn-celeb: A challenging chinese speaker recognition dataset. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7604–7608.
Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. 2025. LLaMA-omni: Seamless speech interaction with large language models. In The Thirteenth International Conference on Learning Representations.
Shinya Fujie, Kenta Fukushima, and Tetsunori Kobayashi. 2005. Back-channel feedback generation using linguistic and nonlinguistic information and its application to spoken dialogue system. In INTERSPEECH, pages 889–892.
Kuofeng Gao, Shu-Tao Xia, Ke Xu, Philip Torr, and Jindong Gu. 2024. Benchmarking open-ended audio dialogue understanding for large audio-language models. arXiv preprint arXiv:2412.05167.
Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 776–780.
Felix Gervits and Matthias Scheutz. 2018. Towards a conversation-analytic taxonomy of speech overlap. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Sreyan Ghosh, Sonal Kumar, Ashish Seth, Chandra Kiran Reddy Evuru, Utkarsh Tyagi, S Sakshi, Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. 2024a. GAMA: A large audio-language model with advanced audio understanding and complex reasoning abilities. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6288–6313, Miami, Florida, USA. Association for Computational Linguistics.
Sreyan Ghosh, Ashish Seth, Sonal Kumar, Utkarsh Tyagi, Chandra Kiran Reddy Evuru, Ramaneswaran S, S Sakshi, Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. 2024b. Compa: Addressing the gap in compositional reasoning in audio-language models. In The Twelfth International Conference on Learning Representations.
John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on, volume 1, pages 517–520. IEEE Computer Society.
Shahriar Golchin and Mihai Surdeanu. 2024. Time travel in LLMs: Tracing data contamination in large language models. In The Twelfth International Conference on Learning Representations.
Julia A Goldberg. 1990. Interrupting the discourse on interruptions: An analysis in terms of relationally neutral, power- and rapport-oriented acts. Journal of Pragmatics, 14(6):883–903.
Kaixiong Gong, Kaituo Feng, Bohao Li, Yibing Wang, Mofan Cheng, Shijia Yang, Jiaming Han, Benyou Wang, Yutong Bai, Zhuoran Yang, and 1 others. 2024a. Av-odyssey bench: Can your multimodal llms really understand audio-visual information? arXiv preprint arXiv:2412.02611.
Yuan Gong, Hongyin Luo, Alexander H Liu, Leonid Karlinsky, and James Glass. 2024b. Listen, think, and understand. In International Conference on Learning Representations.
Yuan Gong, Jin Yu, and James Glass. 2022.
Yuan Gong, Jin Yu, and James Glass. 2022. Vocalsound: A dataset for improving human vocal sounds recognition. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 151–155. IEEE.
Jesse Graham, Peter Meindl, Erica Beall, Kate M Johnson, and Li Zhang. 2016. Cultural differences in moral judgment and behavior, across and within societies. Current Opinion in Psychology, 8:125–130.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Agustín Gravano and Julia Hirschberg. 2011. Turn-taking cues in task-oriented dialogue. Computer Speech & Language, 25(3):601–634.
Agustín Gravano and Julia Hirschberg. 2012. A corpus-based study of interruptions in spoken dialogue. In Interspeech 2012, pages 855–858.
Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, and 1 others. 2024. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594.
Isha Gupta, David Khachaturov, and Robert Mullins. 2025. "I am bad": Interpreting stealthy, universal and robust audio jailbreaks in audio-language models. arXiv preprint arXiv:2502.00718.
Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, and 1 others. 2023. Textually pretrained speech language models. Advances in Neural Information Processing Systems, 36:63483–63501.
Curtis Hawthorne, Andriy Stasyuk, Adam Roberts, Ian Simon, Cheng-Zhi Anna Huang, Sander Dieleman, Erich Elsen, Jesse Engel, and Douglas Eck. 2019. Enabling factorized piano music modeling and generation with the MAESTRO dataset. In International Conference on Learning Representations.
Haorui He, Zengqiang Shang, Chaoren Wang, Xuyuan Li, Yicheng Gu, Hua Hua, Liwei Liu, Chen Yang, Jiaqi Li, Peiyang Shi, and 1 others. 2024. Emilia: An extensive, multilingual, and diverse speech dataset for large-scale speech generation. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 885–890. IEEE.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29:3451–3460.
Chien-yu Huang, Wei-Chih Chen, Shu-wen Yang, Andy T. Liu, Chen-An Li, Yu-Xiang Lin, Wei-Cheng Tseng, Anuj Diwan, Yi-Jen Shih, Jiatong Shi, William Chen, Xuanjun Chen, Chi-Yuan Hsiao, Puyuan Peng, Shih-Heng Wang, Chun-Yi Kuan, Ke-Han Lu, Kai-Wei Chang, Chih-Kai Yang, and 57 others. 2025a. Dynamic-SUPERB phase-2: A collaboratively expanding benchmark for measuring the capabilities of spoken language models with 180 tasks. In The Thirteenth International Conference on Learning Representations.
Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chi-Yuan Hsiao, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, and Hung-Yi Lee. 2024a. Dynamic-superb: Towards a dynamic, collaborative, and comprehensive instruction-tuning benchmark for speech. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 12136–12140.
Chien-yu Huang, Min-Han Shih, Ke-Han Lu, Chi-Yuan Hsiao, and Hung-yi Lee. 2025b. Speechcaps: Advancing instruction-based universal speech models with multi-talker speaking style captioning. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.
Kuan-Po Huang, Chih-Kai Yang, Yu-Kuan Fu, Ewan Dunbar, and Hung-Yi Lee. 2024b. Zero resource code-switched speech benchmark using speech utterance pairs for multiple spoken languages. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 10006–10010.
Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, and 1 others. 2024c. Audiogpt: Understanding and generating speech, music, sound, and talking head. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 23802–23804.
John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. 2024. Best-of-n jailbreaking. arXiv preprint arXiv:2412.03556.
Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/LJ-Speech-Dataset/.
Alon Jacovi, Avi Caciularu, Omer Goldman, and Yoav Goldberg. 2023. Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5075–5084.
Gail Jefferson. 1986. Notes on 'latency' in overlap onset. Human studies, pages 153–183.
Feng Jiang, Zhiyu Lin, Fan Bu, Yuhao Du, Benyou Wang, and Haizhou Li. 2025. S2s-arena, evaluating speech2speech protocols on instruction following with paralinguistic information. arXiv preprint arXiv:2503.05085.
George Joseph and Arun Baby. 2024. Speaker personalization for automatic speech recognition using weight-decomposed low-rank adaptation. In Proc. Interspeech 2024, pages 2875–2879.
Mintong Kang, Chejian Xu, and Bo Li. 2025. Advwave: Stealthy adversarial jailbreak attack against large audio-language models. In The Thirteenth International Conference on Learning Representations.
Ray D Kent, Gary Weismer, Jane F Kent, Houri K Vorperian, and Joseph R Duffy. 1999. Acoustic studies of dysarthric speech: Methods, progress, and potential. Journal of communication disorders, 32(3):141–186.
Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj. 2021. Low resource asr: The surprising effectiveness of high resource transliteration. In Proc. Interspeech 2021, pages 1529–1533.
Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. 2019. AudioCaps: Generating captions for audios in the wild. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 119–132, Minneapolis, Minnesota. Association for Computational Linguistics.
Heejin Kim, Mark Hasegawa-Johnson, Adrienne Perlman, Jon R Gunderson, Thomas S Huang, Kenneth L Watkin, Simone Frame, and 1 others. 2008. Dysarthric speech database for universal access research. In Interspeech, volume 2008, pages 1741–1744.
Heeseung Kim, Che Hyun Lee, Sangkwon Park, Jiheum Yeom, Nohil Park, Sangwon Yu, and Sungroh Yoon. 2025. Does your voice assistant remember? analyzing conversational context recall and utilization in voice interaction models. arXiv preprint arXiv:2502.19759.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213.
Chun-Yi Kuan, Wei-Ping Huang, and Hung-yi Lee. 2024a. Understanding sounds, missing the questions: The challenge of object hallucination in large audio-language models. In Interspeech 2024, pages 4144–4148.
Chun-Yi Kuan and Hung-yi Lee. 2025. Can large audio-language models truly hear? tackling hallucinations with multi-task assessment and stepwise audio reasoning. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.
Chun-Yi Kuan, Chih-Kai Yang, Wei-Ping Huang, Ke-Han Lu, and Hung-yi Lee. 2024b. Speech-copilot: Leveraging large language models for speech processing via task decomposition, modularization, and program generation. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 1060–1067. IEEE.
Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and 1 others. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354.
Marvin Lavechin, Yaya Sy, Hadrien Titeux, María Andrea Cruz Blandón, Okko Räsänen, Hervé Bredin, Emmanuel Dupoux, and Alejandrina Cristia. 2023. Babyslm: language-acquisition-friendly benchmark of self-supervised spoken language models. In Proc. Interspeech 2023, pages 4588–4592.
Chae-Won Lee, Jae-Hong Lee, and Joon-Hyuk Chang. 2024. Language model personalization for speech recognition: A clustered federated learning approach with adaptive weight average. IEEE Signal Processing Letters, 31:2710–2714.
Sicong Leng, Yun Xing, Zesen Cheng, Yang Zhou, Hang Zhang, Xin Li, Deli Zhao, Shijian Lu, Chunyan Miao, and Lidong Bing. 2024. The curse of multi-modalities: Evaluating hallucinations of large multimodal models across language, visual, and audio. arXiv.
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. Halueval: A large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6449–6464.
Minzhi Li, William Barr Held, Michael J Ryan, Kunat Pipatanakul, Potsawee Manakul, Hao Zhu, and Diyi Yang. 2025. Mind the gap! static and interactive evaluations of large audio models. arXiv preprint arXiv:2502.15919.
Mohan Li, Cong-Thanh Do, Simon Keizer, Youmna Farag, Svetlana Stoyanchev, and Rama Doddipatla. 2024. Whisma: A speech-llm to perform zero-shot spoken language understanding. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 1115–1122. IEEE.
Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao MA, Chenghua Lin, Xingran Chen, Anton Ragni, Hanzhi Yin, Zhijie Hu, Haoyu He, and 1 others. 2022. Map-music2vec: A simple and effective baseline for self-supervised music audio representation learning. In Ismir 2022 Hybrid Conference.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.
Guan-Ting Lin, Cheng-Han Chiang, and Hung-Yi Lee. 2024a. Advancing large language models to capture varied speaking styles and respond properly in spoken conversations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6626–6642.
Guan-Ting Lin, Jiachen Lian, Tingle Li, Qirui Wang, Gopala Anumanchipalli, Alexander H Liu, and Hung-yi Lee. 2025a. Full-duplex-bench: A benchmark to evaluate full-duplex spoken dialogue models on turn-taking capabilities. arXiv preprint arXiv:2503.04721.
Yi-Cheng Lin, Wei-Chih Chen, and Hung-yi Lee. 2024b. Spoken stereoset: on evaluating social bias toward speaker in speech large language models. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 871–878. IEEE.
Yi-Cheng Lin, Tzu-Quan Lin, Chih-Kai Yang, Ke-Han Lu, Wei-Chih Chen, Chun-Yi Kuan, and Hung-yi Lee. 2024c. Listen and speak fairly: a study on semantic gender bias in speech integrated large language models. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 439–446. IEEE.
Yu-Xiang Lin, Chih-Kai Yang, Wei-Chih Chen, Chen-An Li, Chien-yu Huang, Xuanjun Chen, and Hung-yi Lee. 2025b. A preliminary exploration with gpt-4o voice mode. arXiv preprint arXiv:2502.09940.
Samuel Lipping, Parthasaarathy Sudarsanam, Konstantinos Drossos, and Tuomas Virtanen. 2022. Clotho-aqa: A crowdsourced dataset for audio question answering. In 2022 30th European Signal Processing Conference (EUSIPCO), pages 1140–1144. IEEE.
Hexin Liu, Leibny Paola Garcia, Xiangyu Zhang, Andy WH Khong, and Sanjeev Khudanpur. 2024. Enhancing code-switching speech recognition with interactive language biases. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 10886–10890. IEEE.
Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, He Huang, Boris Ginsburg, Yu-Chiang Frank Wang, and Hung-yi Lee. 2024. Desta: Enhancing speech language models through descriptive speech-text alignment. In Proc. Interspeech 2024, pages 4159–4163.
Ke-Han Lu, Chun-Yi Kuan, and Hung-yi Lee. 2025. Speech-ifeval: Evaluating instruction-following and quantifying catastrophic forgetting in speech-aware language models. Interspeech 2025.
Alexandre Magueresse, Vincent Carles, and Evan Heetderks. 2020. Low-resource languages: A review of past work and future challenges. arXiv preprint arXiv:2006.07264.
Gallil Maimon, Amit Roth, and Yossi Adi. 2025. Salmon: A suite for acoustic language model evaluation. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5.
Ilaria Manco, Benno Weck, Seungheon Doh, Minz Won, Yixiao Zhang, Dmitry Bogdanov, Yusong Wu, Ke Chen, Philip Tovstogan, Emmanouil Benetos, and 1 others. 2023. The song describer dataset: a corpus of audio captions for music-and-language evaluation. In Workshop on Machine Learning for Audio, Neural Information Processing Systems (NeurIPS). Neural Information Processing Systems.
Alex Mari, Andreina Mandelli, and René Algesheimer. 2024. Empathic voice assistants: Enhancing consumer responses in voice commerce. Journal of Business Research, 175:114566.
Jan Melechovsky, Abhinaba Roy, and Dorien Herremans. 2024. Midicaps: A large-scale midi dataset with text captions. In Proceedings of the 25th International Society for Music Information Retrieval Conference, pages 858–865. ISMIR.
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371.
Tu Anh Nguyen, Maureen de Seyssel, Patricia Rozé, Morgane Rivière, Evgeny Kharitonov, Alexei Baevski, Ewan Dunbar, and Emmanuel Dupoux. 2020. The zero resource speech benchmark 2021: Metrics and baselines for unsupervised spoken language modeling. In NeurIPS Workshop on Self-Supervised Learning for Speech and Audio Processing.
James D Orcutt and Lynn Kenneth Harvey. 1985. Deviance, rule-breaking and male dominance in conversation. Symbolic Interaction, 8(1):15–32.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015a. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015b. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210. IEEE.
Prabhat Pandey, Rupak Vignesh Swaminathan, KV Girish, Arunasish Sen, Jian Xie, Grant P Strimel, and Andreas Schwarz. 2025. Sift-50m: A large-scale multilingual dataset for speech instruction fine-tuning. arXiv preprint arXiv:2504.09081.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318.
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
Abhirama Subramanyam Penamakuri, Kiran Chhatre, and Akshat Jain. 2025. Audiopedia: Audio qa with knowledge. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5.
Jing Peng, Yucheng Wang, Yu Xi, Xu Li, Xizhuo Zhang, and Kai Yu. 2024. A survey on speech large language models. arXiv preprint arXiv:2410.18908.
Sabid Bin Habib Pias, Alicia Freel, Ran Huang, Donald Williamson, Minjeong Kim, and Apu Kapadia. 2024. Building trust through voice: How vocal tone impacts user perception of attractiveness of voice assistants. arXiv preprint arXiv:2409.18941.
Karol J. Piczak. 2015a. ESC: Dataset for Environmental Sound Classification. In Proceedings of the 23rd Annual ACM Conference on Multimedia, pages 1015–1018. ACM Press.
Karol J Piczak. 2015b. Esc: Dataset for environmental sound classification. In Proceedings of the 23rd ACM international conference on Multimedia, pages 1015–1018.
Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. 2020. Mls: A large-scale multilingual dataset for speech research. In Proc. Interspeech 2020, pages 2757–2761.
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pages 28492–28518. PMLR.
David Robinson, Marius Miron, Masato Hagiwara, and Olivier Pietquin. 2025. NatureLM-audio: an audio-language foundation model for bioacoustics. In The Thirteenth International Conference on Learning Representations.
Jaechul Roh, Virat Shejwalkar, and Amir Houmansadr. 2025. Multilingual and multi-accent jailbreaking of audio llms. arXiv preprint arXiv:2504.01094.
Harvey Sacks, Emanuel A Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50(4):696–735.
S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, and Dinesh Manocha. 2025. MMAU: A massive multi-task audio understanding and reasoning benchmark. In The Thirteenth International Conference on Learning Representations.
Vinay Samuel, Yue Zhou, and Henry Peng Zou. 2025. Towards data contamination detection for modern large language models: Limitations, inconsistencies, and oracle challenges. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5058–5070.
Gerard Saucier. 2018. Culture, morality and individual differences: comparability and incomparability across species. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1744):20170170.
Emanuel A Schegloff. 1982. Discourse as an interactional achievement: Some uses of 'uh huh' and other things that come between sentences. Analyzing discourse: Text and talk, 71(93).
Emanuel A Schegloff. 2000. Overlapping talk and the organization of turn-taking for conversation. Language in society, 29(1):1–63.
Maureen Seyssel, Antony D'Avirro, Adina Williams, and Emmanuel Dupoux. 2024. Emphassess: a prosodic benchmark on assessing emphasis transfer in speech-to-speech models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 495–507.
Xinyue Shen, Yixin Wu, Michael Backes, and Yang Zhang. 2024. Voice jailbreak attacks against gpt-4o. arXiv preprint arXiv:2405.19103.
Jack Sidnell. 2007. Comparative studies in conversation analysis. Annu. Rev. Anthropol., 36(1):229–244.
Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784.
Juntao Tan, Liangwei Yang, Zuxin Liu, Zhiwei Liu, Rithesh Murthy, Tulika Manoj Awalgaonkar, Jianguo Zhang, Weiran Yao, Ming Zhu, Shirley Kokane, and 1 others. 2025. Personabench: Evaluating ai models on understanding personal information through accessing (synthetic) private user data. arXiv preprint arXiv:2502.20616.
Zhaoxuan Tan, Zheyuan Liu, and Meng Jiang. 2024. Personalized pieces: Efficient personalized large language models through collaborative efforts. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6459–6475.
Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun MA, and Chao Zhang. 2024. SALMONN: Towards generic hearing abilities for large language models. In The Twelfth International Conference on Learning Representations.
Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, and 1 others. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.
Joseph Turian, Jordie Shier, Humair Raj Khan, Bhiksha Raj, Björn W Schuller, Christian J Steinmetz, Colin Malloy, George Tzanetakis, Gissel Velarde, Kirk McNally, and 1 others. 2022. Hear: Holistic evaluation of audio representations. In NeurIPS 2021 Competitions and Demonstrations Track, pages 125–145. PMLR.
George Tzanetakis and Perry Cook. 2002. Musical genre classification of audio signals. IEEE Transactions on speech and audio processing, 10(5):293–302.
Jörgen Valk and Tanel Alumäe. 2021. Voxlingua107: a dataset for spoken language recognition. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 652–658. IEEE.
Christophe Veaux, Junichi Yamagishi, Kirsten MacDonald, and 1 others. 2017. Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit. University of Edinburgh. The Centre for Speech Technology Research (CSTR), 6:15.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85.
Bin Wang, Xunlong Zou, Geyu Lin, Shuo Sun, Zhuohan Liu, Wenyu Zhang, Zhengyuan Liu, AiTi Aw, and Nancy F Chen. 2025a. Audiobench: A universal benchmark for audio large language models. NAACL.
Bin Wang, Xunlong Zou, Shuo Sun, Wenyu Zhang, Yingxu He, Zhuohan Liu, Chengwei Wei, Nancy F Chen, and AiTi Aw. 2025b. Advancing singlish understanding: Bridging the gap with datasets and multimodal models. arXiv preprint arXiv:2501.01034.
Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021. Covost 2 and massively multilingual speech translation. In Interspeech 2021, pages 2247–2251.
Siyin Wang, Wenyi Yu, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Yu Tsao, Junichi Yamagishi, Yuxuan Wang, and Chao Zhang. 2025c. Qualispeech: A speech quality assessment dataset with natural language reasoning and descriptions. arXiv preprint arXiv:2503.20290.
Yingzhi Wang, Pooneh Mousavi, Artem Ploujnikov, and Mirco Ravanelli. 2025d. What are they doing? joint audio-speech co-reasoning. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5.
Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. 2024. MMLU-pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Pete Warden. 2018. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209.
Benno Weck, Ilaria Manco, Emmanouil Benetos, Elio Quinton, György Fazekas, and Dmitry Bogdanov. 2024. Muchomusic: Evaluating music understanding in multimodal audio-language models. In Proceedings of the 25th International Society for Music Information Retrieval Conference (ISMIR).
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837.
Victor Junqiu Wei, Weicheng Wang, Di Jiang, Yuanfeng Song, and Lu Wang. 2024. Asr-ec benchmark: Evaluating large language models on chinese asr error correction. arXiv preprint arXiv:2412.03075.
Candace West. 1979. Against our will: Male interruptions of females in cross-sex conversation. Annals of the New York Academy of Sciences.
Haibin Wu, Xuanjun Chen, Yi-Cheng Lin, Kai-wei Chang, Ho-Lam Chung, Alexander H Liu, and Hung-yi Lee. 2024a. Towards audio language modeling–an overview. arXiv preprint arXiv:2402.13236.
Junkai Wu, Xulin Fan, Bo-Ru Lu, Xilin Jiang, Nima Mesgarani, Mark Hasegawa-Johnson, and Mari Ostendorf. 2024b. Just asr+ llm? a study on speech large language models' ability to identify and understand speaker in spoken dialogue. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 1137–1143. IEEE.
Erjia Xiao, Hao Cheng, Jing Shao, Jinhao Duan, Kaidi Xu, Le Yang, Jindong Gu, and Renjing Xu. 2025. Tune in, act up: Exploring the impact of audio modality-specific edits on large audio language models in jailbreak. arXiv preprint arXiv:2501.13772.
Liumeng Xue, Ziya Zhou, Jiahao Pan, Zixuan Li, Shuai Fan, Yinghao Ma, Sitong Cheng, Dongchao Yang, Haohan Guo, Yujia Xiao, and 1 others. 2025. Audio-flan: A preliminary release. arXiv preprint arXiv:2502.16584.
Ruiqi Yan, Xiquan Li, Wenxi Chen, Zhikang Niu, Chen Yang, Ziyang Ma, Kai Yu, and Xie Chen. 2025. Uro-bench: A comprehensive benchmark for end-to-end spoken dialogue models. arXiv preprint arXiv:2502.17810.
Chih-Kai Yang, Yu-Kuan Fu, Chen-An Li, Yi-Cheng Lin, Yu-Xiang Lin, Wei-Chih Chen, Ho Lam Chung, Chun-Yi Kuan, Wei-Ping Huang, Ke-Han Lu, and 1 others. 2024a. Building a taiwanese mandarin spoken language model: A first attempt. arXiv preprint arXiv:2411.07111.
Chih-Kai Yang, Neo Ho, Yen-Ting Piao, and Hung-yi Lee. 2025a. Sakura: On the multi-hop reasoning of large audio-language models based on speech and audio information. Interspeech 2025.
Chih-Kai Yang, Kuan-Po Huang, Ke-Han Lu, Chun-Yi Kuan, Chi-Yuan Hsiao, and Hung-Yi Lee. 2024b. Investigating zero-shot generalizability on mandarin-english code-switched asr and speech-to-text translation of recent foundation models with self-supervision and weak supervision. In 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pages 540–544.
Hao Yang, Lizhen Qu, Ehsan Shareghi, and Gholamreza Haffari. 2025b. Audio is the achilles' heel: Red teaming audio large multimodal models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 9292–9306, Albuquerque, New Mexico. Association for Computational Linguistics.
Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, and Jingren Zhou. 2024c. AIR-bench: Benchmarking large audio-language models via generative comprehension. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1979–1998, Bangkok, Thailand. Association for Computational Linguistics.
Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, and 1 others. 2021. Superb: Speech processing universal performance benchmark. In Proc. Interspeech 2021, pages 1194–1198.
Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, and Sebastian Riedel. 2024d. Do large language models latently perform multi-hop reasoning? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10210–10229.
Sohee Yang, Nora Kassner, Elena Gribovskaya, Sebastian Riedel, and Mor Geva. 2024e. Do large language models perform latent multi-hop reasoning without exploiting shortcuts? arXiv preprint arXiv:2411.16679.
Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2024. A survey on multimodal large language models. National Science Review, 11(12).
Ruibin Yuan, Yinghao Ma, Yizhi Li, Ge Zhang, Xingran Chen, Hanzhi Yin, Yiqi Liu, Jiawen Huang, Zeyue Tian, Binyue Deng, and 1 others. 2023. Marble: Music audio representation benchmark for universal evaluation. Advances in Neural Information Processing Systems, 36:39626–39647.
Yongyi Zang, Sean O'Brien, Taylor Berg-Kirkpatrick, Julian McAuley, and Zachary Novack. 2025. Are you really listening? boosting perceptual awareness in music-qa benchmarks. arXiv preprint arXiv:2504.00369.
Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019. Libritts: A corpus derived from librispeech for text-to-speech. In Proc. Interspeech 2019, pages 1526–1530.
Zhehao Zhang, Ryan A Rossi, Branislav Kveton, Yijia Shao, Diyi Yang, Hamed Zamani, Franck Dernoncourt, Joe Barrow, Tong Yu, Sungchul Kim, and 1 others. 2024. Personalization of large language models: A survey. arXiv preprint arXiv:2411.00027.
Mengjie Zhao, Zhi Zhong, Zhuoyuan Mao, Shiqi Yang, Wei-Hsiang Liao, Shusuke Takahashi, Hiromi Wakaki, and Yuki Mitsufuji. 2024. Openmu: Your swiss army knife for music understanding. arXiv preprint arXiv:2410.15573.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, and 1 others. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.
Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023a. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911.
Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, and Jiawei Han. 2023b. Don't make your llm an evaluation benchmark cheater. arXiv preprint arXiv:2311.01964.
Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.

A Detailed Categorization of the Surveyed Papers

The complete categorization of the surveyed papers, based on the proposed taxonomy (§2), is presented in Figure 3. Please note that widely used corpora for fundamental auditory processing tasks, such as speech recognition and audio captioning, are excluded from this categorization due to the extremely large number of such resources. Including them would make the figure overly detailed and cumbersome. For reference, we provide examples of these fundamental tasks and their corresponding resources in Appendix B.

From Figure 3, it is evident that the current focus of LALM evaluations predominantly centers on auditory processing tasks (§3.2), underscoring their importance to the research community. While these tasks are undeniably valuable, they should not be seen as the sole consideration when evaluating models for real-world applications. A more diverse and comprehensive evaluation scope is crucial to ensure a fuller understanding of their potential and shortcomings.

Figure 3: The complete categorization of the surveyed papers based on the proposed taxonomy.
- General Auditory Awareness and Processing (§3)
  - Awareness (§3.1): SALMon (Maimon et al., 2025); EmphAssess (Seyssel et al., 2024); Deshmukh et al. (2025b); SAGI (Bu et al., 2024)
  - Processing (§3.2): Dynamic-SUPERB (Huang et al., 2024a); Dynamic-SUPERB Phase-2 (Huang et al., 2025a); AIR-Bench (Yang et al., 2024c); AudioBench (Wang et al., 2025a); MuChoMusic (Weck et al., 2024); FinAudio (Cao et al., 2025); SAGI (Bu et al., 2024); MAE (Chen et al., 2024b); RUListening (Zang et al., 2025); OpenMU-Bench (Zhao et al., 2024); Wang et al. (2025b); Gong et al. (2024a); Audio-FLAN (Xue et al., 2025); QualiSpeech (Wang et al., 2025c); EvalSIFT (Pandey et al., 2025); OpenAQA (Gong et al., 2024b); Clotho-AQA (Lipping et al., 2022); SpeechCaps (Huang et al., 2025b); ASR-EC (Wei et al., 2024); SLU-GLUE (Li et al., 2024); BEANS-Zero (Robinson et al., 2025); Wu et al. (2024b)
- Knowledge and Reasoning (§4)
  - Linguistic Knowledge (§4.1): ZeroSpeech 2021 (Nguyen et al., 2020); CSZS (Huang et al., 2024b); sStoryCloze (Hassid et al., 2023); tStoryCloze (Hassid et al., 2023); BabySLM (Lavechin et al., 2023)
  - World Knowledge Assessment (§4.2): MMAU (Sakshi et al., 2025); Audiopedia (Penamakuri et al., 2025); VoiceBench (Chen et al., 2024c); VoxEval (Cui et al., 2025); URO-Bench (Yan et al., 2025); ADU-Bench (Gao et al., 2024); SAGI (Bu et al., 2024); MuChoMusic (Weck et al., 2024); RUListening (Zang et al., 2025); OpenMU-Bench (Zhao et al., 2024)
  - Reasoning (§4.3): CompA (Ghosh et al., 2024b); MMAU (Sakshi et al., 2025); SAKURA (Yang et al., 2025a); URO-Bench (Yan et al., 2025); Audio Entailment (Deshmukh et al., 2025a); Wang et al. (2025d); ADU-Bench (Gao et al., 2024); VoxEval (Cui et al., 2025); OpenMU-Bench (Zhao et al., 2024); Gong et al. (2024a); CompA-R (Ghosh et al., 2024a); OpenAQA (Gong et al., 2024b); Clotho-AQA (Lipping et al., 2022); SpeechCaps (Huang et al., 2025b); SLU-GLUE (Li et al., 2024); Deshmukh et al. (2025b)
- Dialogue-oriented Ability (§5)
  - Conversational Ability (§5.1): StyleTalk (Lin et al., 2024a); SD-Eval (Ao et al., 2024); VoxDialogue (Cheng et al., 2025); Talking Turns (Arora et al., 2025b); Full-Duplex-Bench (Lin et al., 2025a); Li et al. (2025); ContextDialog (Kim et al., 2025); ADU-Bench (Gao et al., 2024); URO-Bench (Yan et al., 2025)
  - Instruction Following (§5.2): VoiceBench (Chen et al., 2024c); URO-Bench (Yan et al., 2025); Speech-IFeval (Lu et al., 2025); S2S-Arena (Jiang et al., 2025); EvalSIFT (Pandey et al., 2025)
- Fairness, Safety, and Trustworthiness (§6)
  - Fairness and Bias (§6.1): Lin et al. (2024c); Spoken Stereoset (Lin et al., 2024b)
  - Safety (§6.2): VoiceBench (Chen et al., 2024c); Yang et al. (2025b); Roh et al. (2025); AdvBench-Audio (Kang et al., 2025); Xiao et al. (2025); Gupta et al. (2025); Hughes et al. (2024); URO-Bench (Yan et al., 2025)
  - Hallucination (§6.3): Kuan et al. (2024a); CMM (Leng et al., 2024); Kuan and Lee (2025)

B Examples of General Auditory Processing Tasks

Table 1 lists representative auditory processing tasks and their associated resources. As foundational components of auditory processing, these tasks are well-suited for adaptation in LALM evaluation, as discussed in (§3.2).

Table 1: Commonly used datasets for various auditory tasks. This overview covers key tasks in audio, speech, and music processing and the datasets that are widely adopted in academic and industrial research.
Audio Tasks
- Audio Captioning: AudioCaps (Kim et al., 2019); Clotho (Drossos et al., 2020)
- Audio Classification: ESC-50 (Piczak, 2015b); AudioSet (Gemmeke et al., 2017)
- Vocal Sound Classification: VocalSound (Gong et al., 2022)
Speech Tasks
- Automatic Speech Recognition: LibriSpeech (Panayotov et al., 2015b); AISHELL-1 (Bu et al., 2017); Common Voice (Ardila et al., 2020)
- Speaker Identification: VoxCeleb2 (Chung et al., 2018); CN-Celeb (Fan et al., 2020)
- Text-to-Speech: LJSpeech (Ito and Johnson, 2017); VCTK (Veaux et al., 2017); LibriTTS (Zen et al., 2019)
- Speech Emotion Recognition: IEMOCAP (Busso et al., 2008); CREMA-D (Cao et al., 2014)
- Language Identification: VoxLingua107 (Valk and Alumäe, 2021); FLEURS (Conneau et al., 2023)
- Speech Translation: CoVoST 2 (Wang et al., 2021); MuST-C (Di Gangi et al., 2019)
- Speech Diarization: LibriMix (Cosentino et al., 2020)
- Keyword Spotting: Speech Commands (Warden, 2018)
Music Tasks
- Music Captioning / Text-to-Music: MusicCaps (Agostinelli et al., 2023); Song Describer Dataset (Manco et al., 2023); MidiCaps (Melechovsky et al., 2024)
- Music Transcription: MAESTRO (Hawthorne et al., 2019)
- Instrument Classification: NSynth (Engel et al., 2017)
- Genre Classification: FMA (Defferrard et al., 2017); GTZAN (Tzanetakis and Cook, 2002)

C Dynamics in Full-Duplex Dialogues

In this section, we briefly introduce the dynamics discussed in (§5.1.2). Turn-taking (Sacks et al., 1974) is a fundamental aspect of conversational organization, where speakers alternate turns to speak, ensuring only one person talks at a time. This process is complex, involving various behaviors that help facilitate smooth transitions between speakers. For example, speakers often signal the end of their turn through clear cues, allowing the listener to recognize when they are yielding the floor (Duncan, 1972; Duncan and Fiske, 2015). Furthermore, turn-taking conventions may be shaped by cultural factors (Sidnell, 2007), which influence how and when speakers take their turns due to linguistic and social differences. Understanding and modeling these behaviors are essential steps toward achieving natural and effective communication in both human-human and human-AI interactions.

Backchanneling involves the listener's use of phatic expressions that signal active listening and attentiveness to the speaker (Fujie et al., 2005). These brief verbal cues, such as "yeah," "I see," or "uh-huh," along with non-verbal cues like nodding, serve as feedback, demonstrating sympathy, agreement, or understanding. By offering such responses, listeners help maintain the flow of conversation without interrupting the speaker. This behavior not only fosters a sense of connection but also enhances the speaker's feeling of being heard and understood, contributing to a more interactive and supportive dialogue. As such, backchanneling plays a crucial role in sustaining conversation dynamics and promoting positive communicative exchanges.

Speaker overlap refers to the simultaneous speech of multiple speakers, while speaker interruption occurs when one speaker interjects during another's turn, which breaks the turn-taking principles (Gravano and Hirschberg, 2012). These phenomena are complex: they can be competitive, reflecting hostility or dominance (West, 1979; Orcutt and Harvey, 1985), or they can be neutral or supportive, helping to maintain and coordinate the flow of dialogue (Goldberg, 1990; Jefferson, 1986; Gervits and Scheutz, 2018). Despite their varying forms, both overlap and interruption are natural components of human conversation.

D Input/Output Modalities of the Surveyed Works

Our proposed taxonomy (§2) is organized by the evaluation objectives of the surveyed works rather than by the modalities they cover. Nevertheless, modality information is essential for researchers seeking benchmarks suited to models specialized in particular modalities.
Thus, we provide the input/output modality details in Tables 2, 3, 4, and 5, corresponding to the categories of General Auditory Awareness and Processing (§3), Knowledge and Reasoning (§4), Dialogue-oriented Ability (§5), and Fairness, Safety, and Trustworthiness (§6), respectively. These tables are compiled based on the original papers of the surveyed benchmarks. Please note that due to unique evaluation designs, some benchmarks do not produce explicit "outputs" but instead rely on input likelihood comparisons or similarity measures with specific instances. This absence of outputs is clearly indicated in the tables.

E Information of AI Assistance in Revision

We acknowledge the assistance of GPT-4.1-mini in refining the paper and improving its clarity.

Table 2: Input and output modalities of benchmarks in the General Auditory Awareness and Processing category shown in Figure 3. Each row lists a benchmark followed by its input-modality and output-modality checkmarks over Text, Audio, Speech, and Music, as in the original table.
- SALMon (Maimon et al., 2025) ✓ ✓ Likelihood-based evaluation. No output modality.
- Wu et al. (2024b) ✓ ✓ ✓
- EmphAssess (Seyssel et al., 2024) ✓ ✓
- Deshmukh et al. (2025b) ✓ ✓ ✓ ✓
- Dynamic-SUPERB (Huang et al., 2024a) ✓ ✓ ✓ ✓ ✓
- Dynamic-SUPERB Phase-2 (Huang et al., 2025a) ✓ ✓ ✓ ✓ ✓
- AIR-Bench (Yang et al., 2024c) ✓ ✓ ✓ ✓ ✓
- AudioBench (Wang et al., 2025a) ✓ ✓ ✓ ✓
- MuChoMusic (Weck et al., 2024) ✓ ✓ ✓
- FinAudio (Cao et al., 2025) ✓ ✓ ✓
- SAGI (Bu et al., 2024) ✓ ✓ ✓ ✓ ✓
- MAE (Chen et al., 2024b) ✓ ✓ ✓ ✓
- RUListening (Zang et al., 2025) ✓ ✓ ✓
- OpenMU-Bench (Zhao et al., 2024) ✓ ✓ ✓
- Wang et al. (2025b) ✓ ✓ ✓
- Gong et al. (2024a) ✓ ✓ ✓ ✓ ✓
- Audio-FLAN (Xue et al., 2025) ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
- QualiSpeech (Wang et al., 2025c) ✓ ✓ ✓
- EvalSIFT (Pandey et al., 2025) ✓ ✓ ✓ ✓
- OpenAQA (Gong et al., 2024b) ✓ ✓ ✓
- Clotho-AQA (Lipping et al., 2022) ✓ ✓ ✓
- SpeechCaps (Huang et al., 2025b) ✓ ✓ ✓
- ASR-EC (Wei et al., 2024) ✓ ✓ ✓
- SLU-GLUE (Li et al., 2024) ✓ ✓ ✓
- BEANS-Zero (Robinson et al., 2025) ✓ ✓ ✓

Table 3: Input and output modalities of benchmarks in the Knowledge and Reasoning category shown in Figure 3. Each row lists a benchmark followed by its input-modality and output-modality checkmarks over Text, Audio, Speech, and Music, as in the original table.
- ZeroSpeech 2021 (Nguyen et al., 2020) ✓ Likelihood-based evaluation. No output modality.
- CSZS (Huang et al., 2024b) ✓ Likelihood-based evaluation. No output modality.
- sStoryCloze (Hassid et al., 2023) ✓ Likelihood-based evaluation. No output modality.
- tStoryCloze (Hassid et al., 2023) ✓ Likelihood-based evaluation. No output modality.
- BabySLM (Lavechin et al., 2023) ✓ ✓ Likelihood-based evaluation. No output modality.
- CompA (Ghosh et al., 2024b) ✓ ✓ Similarity-based evaluation on text and audio inputs.
- MMAU (Sakshi et al., 2025) ✓ ✓ ✓ ✓ ✓
- Audiopedia (Penamakuri et al., 2025) ✓ ✓ ✓
- VoiceBench (Chen et al., 2024c) ✓ ✓ ✓
- VoxEval (Cui et al., 2025) ✓ ✓
- SAKURA (Yang et al., 2025a) ✓ ✓ ✓ ✓
- URO-Bench (Yan et al., 2025) ✓ ✓ ✓ ✓
- Audio Entailment (Deshmukh et al., 2025a) ✓ ✓ ✓
- ADU-Bench (Gao et al., 2024) ✓ ✓ ✓
- SAGI (Bu et al., 2024) ✓ ✓ ✓ ✓ ✓
- MuChoMusic (Weck et al., 2024) ✓ ✓ ✓
- RUListening (Zang et al., 2025) ✓ ✓ ✓
- OpenMU-Bench (Zhao et al., 2024) ✓ ✓ ✓
- Gong et al. (2024a) ✓ ✓ ✓ ✓ ✓
- CompA-R (Ghosh et al., 2024a) ✓ ✓ ✓
- OpenAQA (Gong et al., 2024b) ✓ ✓ ✓
- Clotho-AQA (Lipping et al., 2022) ✓ ✓ ✓
- SLU-GLUE (Li et al., 2024) ✓ ✓ ✓
- SpeechCaps (Huang et al., 2025b) ✓ ✓ ✓
- Wang et al. (2025d) ✓ ✓ ✓ ✓
- Deshmukh et al. (2025b) ✓ ✓ ✓ ✓

Table 4: Input and output modalities of benchmarks in the Dialogue-oriented Ability category shown in Figure 3. Each row lists a benchmark followed by its input-modality and output-modality checkmarks over Text, Audio, Speech, and Music, as in the original table.
- StyleTalk (Lin et al., 2024a) ✓ ✓ ✓
- SD-Eval (Ao et al., 2024) ✓ ✓ ✓ ✓
- VoxDialogue (Cheng et al., 2025) ✓ ✓ ✓ ✓
- Talking Turns (Arora et al., 2025b) ✓ ✓
- Full-Duplex-Bench (Lin et al., 2025a) ✓ ✓
- Li et al. (2025) ✓ ✓
- ContextDialog (Kim et al., 2025) ✓ ✓ ✓
- ADU-Bench (Gao et al., 2024) ✓ ✓ ✓
- VoiceBench (Chen et al., 2024c) ✓ ✓ ✓
- URO-Bench (Yan et al., 2025) ✓ ✓ ✓ ✓
- Speech-IFeval (Lu et al., 2025) ✓ ✓ ✓
- S2S-Arena (Jiang et al., 2025) ✓ ✓
- EvalSIFT (Pandey et al., 2025) ✓ ✓ ✓ ✓

Table 5: Input and output modalities of benchmarks in the Fairness, Safety, and Trustworthiness category shown in Figure 3. Each row lists a benchmark followed by its input-modality and output-modality checkmarks over Text, Audio, Speech, and Music, as in the original table.
- Lin et al. (2024c) ✓ ✓ ✓
- Spoken Stereoset (Lin et al., 2024b) ✓ ✓ ✓
- VoiceBench (Chen et al., 2024c) ✓ ✓ ✓
- Yang et al. (2025b) ✓ ✓ ✓ ✓
- Roh et al. (2025) ✓ ✓ ✓
- AdvBench-Audio (Kang et al., 2025) ✓ ✓ ✓
- Xiao et al. (2025) ✓ ✓ ✓
- Gupta et al. (2025) ✓ ✓ ✓
- Hughes et al. (2024) ✓ ✓ ✓
- URO-Bench (Yan et al., 2025) ✓ ✓ ✓ ✓
- Kuan et al. (2024a) ✓ ✓ ✓ ✓
- CMM (Leng et al., 2024) ✓ ✓ ✓
- Kuan and Lee (2025) ✓ ✓ ✓
arXiv:2505.15960v1 [cs.CL] 21 May 2025

Training Step-Level Reasoning Verifiers with Formal Verification Tools

Ryo Kamoi, Yusen Zhang, Nan Zhang, Sarkar Snigdha Sarathi Das, Rui Zhang
Penn State University
{ryokamoi, rmz5227}@psu.edu

Abstract

Process Reward Models (PRMs), which provide step-by-step feedback on the reasoning generated by Large Language Models (LLMs), are receiving increasing attention. However, two key research gaps remain: collecting accurate step-level error labels for training typically requires costly human annotation, and existing PRMs are limited to math reasoning problems. In response to these gaps, this paper aims to address the challenges of automatic dataset creation and the generalization of PRMs to diverse reasoning tasks. To achieve this goal, we propose FOVER, an approach for training PRMs on step-level error labels automatically annotated by formal verification tools, such as Z3 for formal logic and Isabelle for theorem proof, which provide automatic and accurate verification for symbolic tasks. Using this approach, we synthesize a training dataset with error labels on LLM responses for formal logic and theorem proof tasks without human annotation. Although this data synthesis is feasible only for tasks compatible with formal verification, we observe that LLM-based PRMs trained on our dataset exhibit cross-task generalization, improving verification across diverse reasoning tasks. Specifically, PRMs trained with FOVER significantly outperform baseline PRMs based on the original LLMs and achieve competitive or superior results compared to state-of-the-art PRMs trained on labels annotated by humans or stronger models, as measured by step-level verification on ProcessBench and Best-of-K performance across 12 reasoning benchmarks, including MATH, AIME, ANLI, MMLU, and BBH. The datasets, models, and code are provided at https://github.com/psunlpgroup/FoVer.

Figure 1: Overview of our approach, FOVER. We train PRMs on step-level error labels automatically annotated by formal verification tools on LLM responses to tasks compatible with formal verification. We observe that the resulting LLM-based PRMs improve verification across broad reasoning tasks.

Figure 2: Best-of-K (K=5) performance of PRMs on 12 reasoning tasks, grouped into Math Reasoning, Logical Reasoning, NLI, MMLU-Pro NoMath, and BBH (3 tasks); panel (a) shows PRMs based on Llama 3.1 8B and panel (b) shows PRMs based on Qwen 2.5 7B. FOVER creates training datasets for PRMs on tasks where we can automatically annotate step-level error labels using formal verification tools.
FOVER significantly improves verification performance on broad out-of-distribution reasoning tasks, compared to baseline PRMs based on the original LLMs. Details are in Table 3.

1 Introduction

Reasoning verifiers for large language models (LLMs) have been increasingly studied in the context of enhancing the reasoning capabilities of LLMs via reinforcement learning and inference-time improvement [32, 50, 41]. In this work, we study process reward models (PRMs) [49, 17, 18], which provide step-level verification for LLM-generated reasoning. Compared to outcome reward models (ORMs), which generate a score for the entire solution, PRMs provide fine-grained feedback and often achieve better performance in downstream applications to improve LLMs [49, 18].

This paper tackles two research gaps in PRMs (Table 1). First, it is difficult to collect accurate step-level error labels for training PRMs. While human annotation is a primary approach [49, 18], it involves costly step-level annotations. To create training data without human annotation, previous studies propose approaches to generating estimated step-level error labels synthetically, such as Monte Carlo roll-outs from each step in the solution [50, 22, 65]. However, these approaches can yield inaccurate error labels. Second, PRMs have often been studied only on mathematical reasoning. The verification capabilities of PRMs outside of math reasoning are not well studied, and popular methods of training dataset creation for PRMs [50] are primarily designed for math reasoning tasks.

To address these gaps, this work tackles two challenges in PRMs: automatic creation of training datasets and generalization to diverse reasoning tasks. For this purpose, we propose FOVER, an approach to train PRMs on step-level error labels automatically annotated using formal verification tools, such as Z3 [7] and Isabelle [30], which provide automatic and accurate verification for symbolic logic tasks like formal theorem proof. (1) Automatic creation of training datasets: LLM responses to tasks compatible with formal verification tools can be verified automatically, and we can acquire accurate step-level error labels for training PRMs without human annotation. In this work, we synthesize a training dataset with tool-annotated error labels on formal logic and theorem proof tasks. (2) Generalization to diverse reasoning tasks: Although this data synthesis is feasible only for tasks compatible with formal verification, our goal is to improve PRMs across broad reasoning tasks. We investigate whether LLM-based PRMs trained on labels annotated by formal verification tools can transfer error detection capabilities to other reasoning tasks. This direction is motivated by recent work demonstrating that symbolic and synthetic training datasets can enhance the general reasoning abilities of LLMs [27, 56]. Building on this insight, we hypothesize that LLM-based PRMs can acquire generalizable step-level verification capabilities through training on symbolic tasks.

To analyze our approach, we train PRMs based on Llama 3.1 8B [20] and Qwen 2.5 7B [36] on our dataset and evaluate them in two settings, Best-of-K [6, 17, 64] performance on reasoning tasks (§4.1) and step-level verification performance on ProcessBench [66] (§4.2), both of which are standard practices for assessing PRMs [18, 50, 66, 65].
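Best-of-K itself reduces to a few lines: sample K candidate solutions, score each with the PRM, and keep the best. The sketch below is a minimal illustration; the `prm_score` callable is an assumption standing in for whatever model produces per-step correctness probabilities, and aggregating steps by their minimum is one common convention in the PRM literature rather than necessarily the exact aggregation used here.

```python
# Minimal sketch of Best-of-K selection with a PRM. `prm_score` is an
# assumed callable returning one correctness probability per step; the
# min-over-steps aggregation is a common convention, not necessarily
# this paper's exact choice.
from typing import Callable, List

def best_of_k(
    candidates: List[List[str]],                    # K candidate solutions, each a list of steps
    prm_score: Callable[[List[str]], List[float]],  # per-step correctness probabilities
) -> List[str]:
    """Return the candidate whose weakest step the PRM trusts the most."""
    return max(candidates, key=lambda steps: min(prm_score(steps)))
```

Min-aggregation reflects the intuition that a multi-step solution is only as reliable as its weakest step; averaging or taking the product of step probabilities are other plausible choices.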
Our experiments demonstrate that FOVER significantly improves the verification capabilities of PRMs on diverse reasoning tasks, compared with baseline PRMs based on the
original LLMs, as measured in the two settings. Surprisingly, our experiments on 12 reasoning benchmarks show that training on our dataset, which comprises error labels on formal logic and theorem proof tasks, enhances Best-of-K performance of LLM-based PRMs across diverse reasoning tasks, including MATH [11], AIME [2], ANLI [29], MMLU-PRO [52], and BBH [45] (Figure 2). We also compare our models with PRMs proposed in prior work, and observe that PRMs trained with FOVER are competitive with or better than state-of-the-art PRMs built on the same LLMs [59, 66, 65]. This is a notable result because these PRMs are trained on error labels annotated by humans or stronger models, whereas FOVER does not rely on such labels.

Table 1: Comparison of research on process reward models (PRMs). Training datasets in prior work involve human annotation or inaccurate synthetic labels. In addition, prior work often trains and evaluates PRMs only on math reasoning tasks. To address these research gaps, we create a training dataset with accurate and automatic annotation and evaluate our PRMs on diverse reasoning tasks.

| PRMs | Annotation: Automatic | Annotation: Accurate | Training: Math (informal) | Training: Formal Logic | Training: Formal Proof | Evaluation: Math (informal) | Evaluation: Academic Exams | Evaluation: Logic Reasoning | Evaluation: BBH [45] |
|---|---|---|---|---|---|---|---|---|---|
| PRM800K [18] | – | ✓ | ✓ | – | – | ✓ | ✓ | – | – |
| Math-Shepherd [50] | ✓ | – | ✓ | – | – | ✓ | – | – | – |
| Qwen2.5-Math-PRM [65] | ✓ | – | ✓ | – | – | ✓ | ✓ | – | – |
| FOVER (Ours) | ✓ | ✓ | – | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

This work aligns with two fundamental needs in the development of reasoning verifiers. First, accurate automatic annotation offers a practical solution for adapting PRMs to errors made by new models. Automatic annotation is required to update datasets periodically because training data based on errors from older models becomes outdated. Second, our analysis of cross-task generalization provides a foundation for applying PRMs to broad reasoning tasks, including low-resource tasks.

Our main contributions are as follows:
- We propose FOVER, an approach to train PRMs on step-level error labels automatically annotated by formal verification tools, such as Z3 and Isabelle. FOVER aims to address two core challenges in PRMs: automatic training dataset creation and generalization to diverse reasoning tasks.
- We synthesize a training dataset for PRMs consisting of accurate step-level error labels on LLM responses for formal logic and theorem proof tasks, without relying on human annotation.
- Although this data synthesis is feasible only for tasks compatible with formal verification, we empirically demonstrate that training on our dataset improves LLM-based PRMs across broad reasoning tasks, as measured in Best-of-K performance on 12 reasoning benchmarks.
- We show that PRMs trained with FOVER perform competitively with state-of-the-art PRMs built on the same LLMs across broad reasoning tasks. This is a noteworthy result because these PRMs are trained on labels annotated by humans or stronger models, which FOVER does not employ.

2 Related Work

Process Reward Models (PRMs). PRMs [49, 18] are models that provide feedback for each step in solutions, which are often used to provide supervision for training and inference of LLMs [32, 63, 23]. A challenge in developing PRMs lies in obtaining step-level error labels for training [50, 65]. Human annotation [49, 18] is a primary approach, but is particularly costly in this step-level labeling task. Recent studies explore synthetic annotations, such as rule-based or model-based perturbations [60, 23, 33] and Monte Carlo roll-outs [50, 22, 65]. However, these approaches can produce unnatural or inaccurate step-level error labels. In contrast, our approach leverages formal verification tools to automatically and accurately annotate step-level error labels for training PRMs.
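To make the interface concrete, the sketch below shows one common way an LLM-based PRM produces a step score: the model reads the solution prefix and the probability mass it assigns to a "correct" label token becomes the score. The prompt template, the '+'/'-' label-token convention, and the base model name are illustrative assumptions, not the exact setup of any PRM cited above.

```python
# Illustrative sketch of step scoring with an LLM-based PRM. The prompt
# format and the '+'/'-' label tokens are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def score_step(problem: str, steps: list[str]) -> float:
    """Return P(last step is correct), estimated as P('+') / (P('+') + P('-'))."""
    prompt = (
        f"Problem: {problem}\n"
        + "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
        + "\nIs the last step correct? Answer with + or -:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(next_token_logits, dim=-1)
    plus_id = tokenizer.encode("+", add_special_tokens=False)[0]
    minus_id = tokenizer.encode("-", add_special_tokens=False)[0]
    return (probs[plus_id] / (probs[plus_id] + probs[minus_id])).item()
```

Under this formulation, training a PRM reduces to supervised learning of the label tokens on step-annotated data, which is exactly where the choice of annotation source (humans, Monte Carlo roll-outs, or formal tools) matters.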
Formal logic. Formal logic tasks, such as logical entailment, are popular tasks for evaluating [5, 46] and improving [27] the reasoning capabilities of LLMs. However, much of the existing work uses these tasks for dataset generation without fully leveraging their verifiability, despite the availability of automatic solvers such as Z3 [7], Vampire [14], and E [39] that can rigorously verify logical correctness. Our work is the first to use formal logic to create a training dataset for PRMs.

Formal theorem proof. Using LLMs as a tool for automatic theorem proof is a popular research topic [35, 61, 57]. In another direction, recent work attempts to use formal theorem proof to improve the math reasoning capabilities of LLMs during inference [68]. These studies use proof assistants such as Isabelle/HOL [30], Coq [47], and Lean [28] to provide feedback for formal theorem proof generation or math reasoning tasks during training and inference. Our work is the first to use these tools to annotate step-level labels for training PRMs.

Figure 3: Creation process of the FOVER dataset. (1) We first generate LLM reasoning in the format compatible with formal verification tools: Z3 and Isabelle. (2) We use the formal verification tools to automatically annotate step-level error labels, without involving human annotation.

Figure 4: Automatic step-level error annotation by formal verification tools. (a) Formal logic task: each postprocessed step generated by the LLM is checked by an automatic verifier (Z3). (b) Formal theorem proof task: an informal question and solution are converted by a conversion LLM into a formal statement and proof, which an automatic verifier (Isabelle) checks.

3 FOVER

We propose FOVER, an approach to train PRMs using step-level error labels automatically annotated by formal verification tools, which provide automatic and accurate verification for symbolic logic tasks like formal theorem proof. FOVER is designed to address two key challenges in PRM development: automatic creation of training datasets and generalization to diverse reasoning tasks. As shown in Figure 1, FOVER synthesizes training datasets for PRMs by automatically annotating step-level error labels on LLM responses using formal tools, thereby eliminating reliance on costly human annotation while ensuring label accuracy. Using tool-verifiable tasks for PRM training is a major departure from previous work because most prior approaches do not expect cross-task generalization and train PRMs on target tasks, typically in mathematical reasoning [18, 50, 65]. In addition, formal verification tools are often used to verify entire proofs [68, 58], and their adaptation for step-level annotation represents another technical contribution of this work. Although our data synthesis is only applicable to tasks verifiable using formal tools, we aim to improve LLM-based PRMs across a broad range of reasoning tasks by leveraging the cross-task generalization capabilities of LLMs [37, 53].

We synthesize a dataset that includes step-level error labels on formal logic (§3.1) and formal theorem proof tasks (§3.2), which we refer to as the FOVER dataset. The FOVER dataset includes step-level error labels on responses from Llama 3.1 8B [20] and Qwen 2.5 7B [36]. The step-level error labels are automatically annotated using formal verification tools, Z3 [7] and Isabelle [30].

Figure 3 outlines the dataset creation process. (1) First, we generate initial responses from LLMs on the formal logic and theorem proof tasks with few-shot prompting. The responses should follow the format compatible with formal verification tools. Specifically, we use Z3 for the formal logic task and Isabelle for the formal theorem proof task. In the formal logic task, we directly instruct the LLMs to generate formal solutions because the format is relatively simple. In the formal theorem proof task, we generate informal reasoning from the LLMs and convert it to a formal format using stronger LLMs (Figure 3b) because Isabelle's format is too complex to follow for the relatively small LLMs we use. (2) Second, we acquire step-level error labels using formal verification tools. This dataset creation process does not involve error annotation by humans or stronger models. Although we use stronger models for autoformalization in this paper, autoformalization is unnecessary when we apply this framework to strong models that can directly generate formal theorems.

The FOVER dataset includes the training, validation, and test splits (Table 2a). The raw verification results often include more correct than incorrect steps (Table 2b). We construct training datasets with balanced step-level labels by selectively masking certain steps during training (Table 2c).

Table 2: Statistics of the FOVER dataset. Refer to Appendix D.2 for further details.

(a) Number of instances.
| Responses | Task | Train | Validation | Test |
|---|---|---|---|---|
| Llama 3.1 8B | Formal Logic | 10,000 | 360 | 360 |
| Llama 3.1 8B | Formal Proof | 10,000 | 360 | 360 |
| Qwen 2.5 7B | Formal Logic | 10,000 | 360 | 360 |
| Qwen 2.5 7B | Formal Proof | 10,000 | 360 | 360 |

(b) Raw training split (rows as in (a)).
| Solutions (#) | Solutions (% Error) | Steps (#) | Steps (% Error) |
|---|---|---|---|
| 10,000 | 30.0% | 43,082 | 43.7% |
| 10,000 | 30.0% | 69,214 | 13.2% |
| 10,000 | 30.0% | 34,879 | 41.3% |
| 10,000 | 30.0% | 69,301 | 14.1% |

(c) Balanced training data (rows as in (a)).
| Steps (#) | Steps (% Error) |
|---|---|
| 20,000 | 50% |
| 20,000 | 50% |
| 20,000 | 50% |
| 20,000 | 50% |

3.1 Formal Logic

We use the logical entailment task in first-order logic (FOL) reasoning, in which the goal is to determine whether a hypothesis is logically entailed by a given set of premises. This logical entailment task has been widely studied for evaluating or improving LLMs [5, 27].

Base dataset. We use responses from LLMs to the symbolic version of FLDx2 [27], a dataset for multi-step FOL deduction. To ensure that each step can be independently verified, we exclude instances where reasoning processes involve assumptions. Refer to Appendix E.1 for details.

Automatic error annotation. We use Z3 [7] for the step-level error annotation. Each step is checked independently by supplying Z3 with the current premises and the step's conclusion to determine logical validity, as in Figure 3a. Refer to Appendix E.2 for details.
In the formal theorem proof task, we generate informal reasoning from the LLMs and convert it to a formal format using stronger LLMs (Figure 3b), because Isabelle's format is too complex for the relatively small LLMs we use to follow. (2) Second, we acquire step-level error labels using formal verification tools. This dataset creation process does not involve error annotation by humans or stronger models. Although we use stronger models for autoformalization in this paper, autoformalization is unnecessary when applying this framework to strong models that can directly generate formal theorems.

The FOVER dataset includes training, validation, and test splits (Table 2a). The raw verification results often include more correct than incorrect steps (Table 2b). We construct training datasets with balanced step-level labels by selectively masking certain steps during training (Table 2c).

Table 2: Statistics of the FOVER dataset. Refer to Appendix D.2 for further details.
(a) Number of instances.
| Responses | Tasks | Train | Validation | Test |
| Llama 3.1 8B | Formal Logic | 10,000 | 360 | 360 |
| Llama 3.1 8B | Formal Proof | 10,000 | 360 | 360 |
| Qwen 2.5 7B | Formal Logic | 10,000 | 360 | 360 |
| Qwen 2.5 7B | Formal Proof | 10,000 | 360 | 360 |
(b) Raw training split.
| Responses | Tasks | Solutions (#) | Solutions (% Error) | Steps (#) | Steps (% Error) |
| Llama 3.1 8B | Formal Logic | 10,000 | 30.0% | 43,082 | 43.7% |
| Llama 3.1 8B | Formal Proof | 10,000 | 30.0% | 69,214 | 13.2% |
| Qwen 2.5 7B | Formal Logic | 10,000 | 30.0% | 34,879 | 41.3% |
| Qwen 2.5 7B | Formal Proof | 10,000 | 30.0% | 69,301 | 14.1% |
(c) Balanced training data.
| Responses | Tasks | Steps (#) | % Error |
| Llama 3.1 8B | Formal Logic | 20,000 | 50% |
| Llama 3.1 8B | Formal Proof | 20,000 | 50% |
| Qwen 2.5 7B | Formal Logic | 20,000 | 50% |
| Qwen 2.5 7B | Formal Proof | 20,000 | 50% |

3.1 Formal Logic

We use the logical entailment task in first-order logic (FOL) reasoning, in which the goal is to determine whether a hypothesis is logically entailed by a given set of premises. This logical entailment task has been widely studied for evaluating or improving LLMs [5, 27].

Base dataset. We use responses from LLMs to the symbolic version of FLDx2 [27], a dataset for multi-step FOL deduction. To ensure that each step can be independently verified, we exclude instances whose reasoning processes involve assumptions. Refer to Appendix E.1 for details.

Automatic error annotation. We use Z3 [7] for the step-level error annotation. Each step is checked independently by supplying Z3 with the current premises and the step's conclusion to determine logical validity, as in Figure 3a. Refer to Appendix E.2 for details.
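To illustrate the kind of per-step check that Z3 enables, the following is a minimal sketch in Python (this is not our released checker, and the encoding of FLDx2 formulas is simplified to propositional atoms here). A step "premises ⊢ conclusion" is valid exactly when the premises conjoined with the negation of the conclusion are unsatisfiable:

# pip install z3-solver
# Minimal sketch of a per-step entailment check in the style of our Z3
# annotation (illustrative only; names of atoms are hypothetical).
from z3 import And, Bools, Implies, Not, Solver, unsat

def step_is_valid(premises, conclusion):
    # "premises |- conclusion" holds iff (premises AND NOT conclusion) is UNSAT.
    solver = Solver()
    solver.add(And(*premises), Not(conclusion))
    return solver.check() == unsat

# Atoms standing in for grounded predicates such as {D}{b} and {C}{b}:
Db, Cb, Goal = Bools("Db Cb Goal")
premises = [Db, Not(Cb), Implies(And(Db, Not(Cb)), Goal)]
print(step_is_valid(premises, Goal))   # True: the step's conclusion is entailed
print(step_is_valid([Db], Not(Cb)))    # False: the conclusion is not entailed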
3.2 Formal Theorem Proof

We use the task of formal theorem proving for verifying solutions to math word problems, which is studied in research on formally verifying informal solutions from LLMs [55, 68]. We use Isabelle/HOL [30] to automatically annotate step-level error labels on formal proofs generated by LLMs. Since we use relatively small LLMs to generate initial responses, and they often cannot produce valid Isabelle syntax directly, we employ a stronger LLM to perform automatic formalization by translating their informal reasoning into formal Isabelle proofs.

Base datasets. We use informal responses from LLMs to GSM8K [6], GSM8K-based cases in MetaMathQA [62], and math word problems in Big-Math [1]. Refer to Appendix F.1 for details.

Informal to formal conversion. Inspired by prior work [55, 68], we convert informal problems from math reasoning datasets, as well as informal reasoning generated by LLMs, into formal proofs in Isabelle's format using Llama 3.3 70B [20] with few-shot prompting (Figure 3b). To facilitate this process, we use Isabelle's Sledgehammer tool [34] for verification, which avoids the need to specify supporting lemmas during conversion. While the conversion is not always perfect, the accuracy of the final dataset is guaranteed because error labels are annotated relative to the converted formal proof, and only the converted proofs are included in the final dataset. Refer to Appendix F.2 for details.

Automatic error annotation. Isabelle is designed for solution-level verification and stops at the first error. To obtain step-level error labels, we implement code for step-level verification based on an existing library for verifying Isabelle proofs [54]. Specifically, our code assumes that all other steps are correct and evaluates whether the target step is valid. Refer to Appendix F.3 for details.

4 Experiments

We evaluate our PRMs trained on the FOVER dataset on broad reasoning benchmarks. We compare our PRMs with baseline PRMs based on the same LLMs without additional training, and with state-of-the-art PRMs from prior work built on the same LLMs. Specifically, we evaluate PRMs in Best-of-K on 12 reasoning benchmarks (§4.1) and in step-level verification on ProcessBench [66] (§4.2), and we also conduct an ablation study (§4.3) and a manual analysis (§4.4). The results are summarized as follows:
• LLM-based PRMs trained with FOVER significantly improve verification capabilities on a broad range of reasoning tasks compared to baseline PRMs based on the same LLMs without additional training, demonstrating effective cross-task generalization of our approach.
• PRMs trained with FOVER are competitive with or better than state-of-the-art PRMs built on the same LLMs. This result is remarkable because the state-of-the-art PRMs are trained on labels annotated by humans or stronger models, which FOVER does not use.
• Our manual analysis verifies that FOVER genuinely improves step-level verification and rarely degrades it relative to baseline PRMs based on the same LLMs without additional training.

Evaluation Datasets. We evaluate our PRMs in two settings on diverse reasoning datasets. First, we evaluate the Best-of-K performance of PRMs on 12 reasoning tasks: math reasoning: GSM8K [6], MATH [11], AQuA-RAT [19], and AIME (2016–2024) [2]; logic reasoning: FOLIO [8] and LogicNLI [48]; NLI: ANLI [29] and HANS [25]; MMLU: MMLU-Pro-NoMath [11, 52, 31]; and BIG-Bench Hard: temporal sequences, tracking shuffled objects (three objects), and word sorting [45] (§4.1). For large datasets, we use 250 randomly sampled examples from each dataset for evaluation. Second, we evaluate step-level verification performance on ProcessBench [66] (§4.2).

Models. We use Llama 3.1 8B [20] and Qwen 2.5 7B [36] as base LLMs for LLM-based PRMs. Our method trains these LLMs on the FOVER dataset, yielding what we refer to as FOVER PRMs. Baseline PRMs: We compare with baselines that use the base LLMs as PRMs without additional training. State-of-the-art PRMs: We also compare with five state-of-the-art PRMs built on the same LLMs. Among PRMs based on Llama 3.1 8B, we evaluate RLHFlow-Llama3.1-8B trained on the DeepSeek or Mistral data [59], which include error labels on GSM8K and MATH acquired via Monte Carlo rollouts [50] generated using stronger models. Among PRMs based on Qwen 2.5 7B, we evaluate Qwen2.5-Math-7B-PRM800K [66], which is trained on human-annotated labels on MATH, and Qwen2.5-Math-PRM-7B [65], which is trained on labels synthesized using both Monte Carlo rollouts and verification by a stronger model. We also evaluate Qwen2.5-7B-Skywork-PRM [10], which is trained on math reasoning and coding tasks, although the training details have not been disclosed.

Training on their own mistakes. As explained above, existing PRMs are often trained on labels annotated using stronger models. While this approach effectively enhances performance, it is less suitable for evaluating training methodologies, as it offers limited insight into improving PRMs built on top of state-of-the-art LLMs, i.e., in settings where stronger models for annotation are unavailable. In contrast, we train LLM-based PRMs using labels from formal verification tools on their own responses, avoiding distillation of any capabilities from stronger models. By showing effectiveness in this challenging setting, we expect FOVER to be applicable to improving PRMs based on state-of-the-art LLMs using their own responses. We also note that the goal of this work is not to outperform state-of-the-art PRMs, as FOVER is complementary to existing approaches and can be combined with them.

Implementation Details. We create inputs to LLM-based PRMs by preprocessing step-by-step solutions into a conversation format in which each user turn contains a single step, and the expected output is a single token: "correct" or "incorrect". To obtain step-level scores, we extract the logits for the two words and apply the softmax function to compute the prediction probability of "correct". This is a popular approach for using LLMs as PRMs [59]. As the baseline PRMs are not fine-tuned, we provide them zero-shot instructions about this format (Appendix G). Training: For the results in Sections 4.1 and 4.2, we train PRMs on the balanced training datasets (Table 2c) that include both formal logic and proof data, consisting of 40k steps. Refer to Appendix H for details.
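As a concrete illustration of this scoring scheme, here is a minimal sketch using Hugging Face transformers. The chat formatting and the assumption that "correct" and "incorrect" each begin with a distinct single token are ours; the released implementation may differ:

# Minimal sketch of two-token PRM step scoring (illustrative, not the exact
# FOVER implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # one of the base PRMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

def step_score(messages):
    """P("correct") for the next assistant token, given the conversation so far."""
    ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                                  return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the next token
    # Assumes each label word starts with a single distinctive token.
    correct_id = tok.encode("correct", add_special_tokens=False)[0]
    incorrect_id = tok.encode("incorrect", add_special_tokens=False)[0]
    pair = torch.stack([logits[correct_id], logits[incorrect_id]])
    return torch.softmax(pair, dim=0)[0].item()  # probability of "correct"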
4.1 Results of Best-of-K on Reasoning Tasks

We evaluate PRMs using Best-of-K, which is a popular approach for assessing PRMs [6, 17, 64]. Best-of-K (or sample-and-rank) selects the best response from multiple candidates generated for the same input, using a verifier to score and rank them. By comparing performance in Best-of-K, we can indirectly assess PRMs without requiring evaluation datasets with step-level error labels.

Settings. We generate K = 5 responses using few-shot prompting with a temperature of 0.5 from the base LLMs. Our few-shot demonstrations follow a step-by-step format with line breaks separating individual steps. Accordingly, we post-process the generated responses by splitting them at line breaks to extract step-level inputs for PRMs. We compute the solution-level score as the minimum of the step-level scores and select the response with the highest solution-level score as the Best-of-K output, which is a standard way to use PRMs in Best-of-K [18, 50]. We also report the baseline performance of Random Selection, which randomly selects a response without using PRMs.
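For clarity, this selection rule can be written in a few lines; the following is an illustrative sketch rather than the released evaluation code:

# Best-of-K with a PRM: score each candidate by its weakest step, then pick
# the candidate whose weakest step is strongest (illustrative sketch).
def best_of_k(candidates, step_scores):
    # step_scores[i][j] = P("correct") for step j of candidate i
    solution_scores = [min(scores) for scores in step_scores]
    best = max(range(len(candidates)), key=lambda i: solution_scores[i])
    return candidates[best]

# The second candidate wins: its minimum step score (0.70) beats the first
# candidate's (0.30), even though the first has a higher mean score.
print(best_of_k(["A", "B"], [[0.99, 0.30, 0.99], [0.75, 0.70, 0.72]]))  # -> "B"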
Results. Table 3 compares the performance of Best-of-K using different PRMs for re-ranking. Remarkably, training on the FOVER dataset—consisting of formal logic and theorem proof tasks—significantly improves performance over the baseline PRMs on broad out-of-distribution reasoning tasks. This result demonstrates that our approach, which relies only on error labels from formal verification tools, enhances PRMs across a broad range of reasoning tasks.

We also observe that FOVER PRMs achieve performance competitive with that of state-of-the-art PRMs. Specifically, while existing PRMs tend to perform better on math reasoning tasks, likely due to being trained on math datasets, FOVER PRMs often outperform them on the other reasoning tasks. This result suggests that FOVER is better at improving PRMs across broad out-of-distribution reasoning tasks. This is a notable result because the state-of-the-art PRMs are trained on labels annotated by humans or stronger models, which FOVER does not rely on.

Table 3: Best-of-K (K=5) performance of PRMs on 12 reasoning benchmarks. FOVER PRMs significantly outperform the baseline PRMs in the second row. In addition, while the state-of-the-art PRMs in the following rows are trained on labels annotated by humans or stronger models, FOVER PRMs frequently outperform them on tasks other than math reasoning. µ: Trained on math reasoning tasks. †: Trained on human-annotated labels. ‡: Trained on labels annotated by stronger models. ?: Training details have not been disclosed. ∗: Statistically significant improvement over the baseline in the second row (p < 0.05, paired bootstrap [12]). Best values in each column are shown in bold. Column groups: Math (GSM8K, MATH, AQuA, AIME), Logic (FOLIO, LogicNLI), NLI (ANLI, HANS), MMLU (Pro-NoMath), BBH (Temporal, Tracking, Sorting), and Average.

(a) PRMs based on Llama 3.1 8B select the best response from K = 5 responses generated by Llama 3.1 8B.
| PRMs | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | Pro-NoMath | Temporal | Tracking | Sorting | Average |
| Random Selection | 85.2 | 35.2 | 57.1 | 3.2 | 57.1 | 36.4 | 30.4 | 74.8 | 54.0 | 87.2 | 85.6 | 36.4 | 53.6 |
| Llama 3.1 8B | 86.8 | 42.4 | 65.0 | 3.2 | 57.6 | 38.8 | 27.2 | 73.6 | 56.4 | 90.0 | 90.0 | 40.0 | 55.9 |
| RLHFlow-Llama3.1-8B-M µ‡ | 92.8∗ | 45.2 | 64.6 | 2.8 | 59.1 | 44.0 | 29.2 | 79.2∗ | 54.0 | 92.0 | 91.2 | 38.0 | 57.7∗ |
| RLHFlow-Llama3.1-8B-D µ‡ | 91.6∗ | 46.8∗ | 67.7 | 4.4 | 60.6 | 39.6 | 29.2 | 76.0 | 57.2 | 98.8∗ | 92.0 | 38.4 | 58.5∗ |
| FOVER-Llama3.1-8B-PRM (ours) | 86.4 | 43.2 | 65.7 | 4.0 | 64.0∗ | 44.8∗ | 28.8 | 82.8∗ | 57.2 | 97.6∗ | 93.2∗ | 38.4 | 58.8∗ |

(b) PRMs based on Qwen 2.5 7B select the best response from K = 5 responses generated by Qwen 2.5 7B.
| PRMs | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | Pro-NoMath | Temporal | Tracking | Sorting | Average |
| Random Selection | 90.4 | 61.6 | 77.6 | 10.0 | 64.5 | 49.2 | 34.0 | 84.4 | 56.4 | 84.4 | 88.0 | 28.8 | 60.8 |
| Qwen 2.5 7B | 90.4 | 64.4 | 79.5 | 12.0 | 62.1 | 46.8 | 32.0 | 84.8 | 58.8 | 90.0 | 89.2 | 28.8 | 61.6 |
| Qwen2.5-7B-Skywork-PRM µ? | 96.0∗ | 68.8∗ | 78.0 | 13.6 | 62.6 | 44.8 | 30.0 | 82.0 | 60.0 | 82.4 | 90.8 | 30.8 | 61.6 |
| Qwen2.5-Math-7B-PRM800K µ† | 94.8∗ | 68.8∗ | 81.1 | 13.2 | 66.5∗ | 47.6 | 34.0 | 83.6 | 58.8 | 81.6 | 90.4 | 28.4 | 62.4 |
| Qwen2.5-Math-PRM-7B µ‡ | 94.8∗ | 69.2∗ | 82.7 | 15.2∗ | 65.5 | 48.8 | 28.0 | 84.8 | 58.4 | 84.4 | 92.0∗ | 29.6 | 62.8∗ |
| FOVER-Qwen2.5-7B-PRM (ours) | 92.8∗ | 66.4 | 82.3 | 13.2 | 65.5 | 48.8 | 32.4 | 85.2 | 60.8 | 91.6 | 89.6 | 31.2 | 63.3∗ |

Table 4: Step-level binary classification performance of PRMs on ProcessBench (AUROC). The first two rows are PRMs based on LLMs without additional training. The next set of rows contains PRMs trained on math reasoning tasks. The last row shows our FOVER PRMs. ∗: Statistically significant improvement over the baseline PRMs in the first row (p < 0.05, paired bootstrap [12]).

(a) Llama 3.1 as PRMs.
| PRMs | GSM8K | MATH | Olympiad | Omni | Ave. |
| Llama 3.1 8B | 70.9 | 68.8 | 67.3 | 59.0 | 66.5 |
| Llama 3.1 70B | 92.0 | 82.2 | 83.4 | 80.5 | 84.5 |
| RLHFlow-Llama3.1-8B-M µ‡ | 91.2∗ | 75.2∗ | 68.2 | 60.7 | 73.8∗ |
| RLHFlow-Llama3.1-8B-D µ‡ | 88.5∗ | 77.2∗ | 73.6∗ | 63.1∗ | 75.6∗ |
| FOVER-Llama3.1-8B-PRM (ours) | 80.0∗ | 74.1∗ | 74.8∗ | 74.7∗ | 75.9∗ |

(b) Qwen 2.5 as PRMs.
| PRMs | GSM8K | MATH | Olympiad | Omni | Ave. |
| Qwen 2.5 7B | 77.9 | 76.0 | 74.9 | 73.6 | 75.6 |
| Qwen 2.5 72B | 87.5 | 83.5 | 81.8 | 80.9 | 83.4 |
| Qwen2.5-7B-Skywork-PRM µ? | 92.3∗ | 82.6∗ | 72.6 | 64.8 | 78.1 |
| Qwen2.5-Math-7B-PRM800K µ† | 94.3∗ | 91.7∗ | 91.5∗ | 90.3∗ | 92.0∗ |
| Qwen2.5-Math-PRM-7B µ‡ | 97.7∗ | 95.3∗ | 94.8∗ | 93.2∗ | 95.3∗ |
| FOVER-Qwen2.5-7B-PRM (ours) | 90.8∗ | 89.1∗ | 84.6∗ | 86.0∗ | 87.6∗ |
4.2 Results of Step-level Verification on ProcessBench

We evaluate PRMs on the step-level binary classification task in ProcessBench [66], which includes human-annotated step-level error labels (correct vs. incorrect) for responses from multiple LLMs on math reasoning tasks. ProcessBench includes labels only for the earliest error in each response, so we evaluate PRMs on steps up to the first error in each solution.

Table 4 shows the step-level verification performance of PRMs, measured in AUROC. The results show that training on the FOVER dataset, which includes error labels on GSM8K-level problems in Isabelle's format, significantly improves step-level verification performance on more complex math reasoning tasks compared to the baseline PRMs. Furthermore, we observe that the FOVER PRM based on Llama 3.1 8B is competitive with PRMs trained on GSM8K and MATH, and the FOVER PRM based on Qwen 2.5 7B is better than PRMs based on Qwen 2.5 72B. These results demonstrate that training on FOVER exhibits easy-to-difficult generalization, improving the verification performance of PRMs on reasoning tasks that are more complex than those seen during training.

4.3 Ablation Study

We evaluate PRMs trained on three variants of the FOVER dataset: formal logic only, formal theorem proof only, and the combination of both. Tables 5 and 6 show the results for Best-of-K on the 12 reasoning datasets and for step-level verification on ProcessBench. We observe that both the formal logic and the theorem proof datasets significantly improve performance over the baseline PRMs in both settings, verifying that both tasks are effective at improving PRMs on broad reasoning tasks. However, we observe that training on both formal logic and theorem proof tasks together (the last rows) is not always effective and sometimes degrades performance relative to training on the individual tasks. This counterintuitive result shows that more diverse training data is not always more effective; we expect that a more stable training strategy would further improve the cross-task generalization of FOVER. We provide ablation studies on other hyperparameters in Appendix J.2.

Table 5: Ablation study of PRMs trained on different versions of the FOVER dataset in Best-of-K (K=5). ∗: Statistically significant improvement over the baseline (p < 0.05, paired bootstrap [12]). Column groups as in Table 3.

(a) PRMs based on Llama 3.1 8B select the best response from K = 5 responses generated by Llama 3.1 8B.
| Formal Logic | Formal Proof | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | Pro-NoMath | Temporal | Tracking | Sorting | Average |
| ✓ | – | 88.4 | 40.8 | 69.7∗ | 4.4 | 65.5∗ | 43.6 | 30.8∗ | 83.6∗ | 55.2 | 89.2 | 96.0∗ | 40.8 | 59.0∗ |
| – | ✓ | 89.2 | 41.2 | 70.1∗ | 2.0 | 61.1 | 39.2 | 28.8 | 77.2 | 55.6 | 97.2∗ | 94.0∗ | 40.0 | 58.0∗ |
| ✓ | ✓ | 86.4 | 43.2 | 65.7 | 4.0 | 64.0∗ | 44.8∗ | 28.8 | 82.8∗ | 57.2 | 97.6∗ | 93.2∗ | 38.4 | 58.8∗ |

(b) PRMs based on Qwen 2.5 7B select the best response from K = 5 responses generated by Qwen 2.5 7B.
| Formal Logic | Formal Proof | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | Pro-NoMath | Temporal | Tracking | Sorting | Average |
| ✓ | – | 92.8∗ | 65.2 | 81.1 | 10.8 | 61.6 | 46.8 | 31.2 | 87.2∗ | 61.2 | 90.8 | 89.2 | 32.8∗ | 62.6∗ |
| – | ✓ | 91.6 | 66.0 | 81.5 | 12.4 | 62.1 | 50.4 | 32.0 | 84.0 | 60.4 | 91.6 | 90.8 | 30.4 | 62.8∗ |
| ✓ | ✓ | 92.8∗ | 66.4 | 82.3 | 13.2 | 65.5 | 48.8 | 32.4 | 85.2 | 60.8 | 91.6 | 89.6 | 31.2 | 63.3∗ |

Table 6: Ablation study of FOVER PRMs on step-level verification on ProcessBench (AUROC). ∗: Statistically significant improvement over the baseline PRMs (p < 0.05, paired bootstrap [12]).

(a) Llama 3.1 as PRMs.
| Logic | Proof | GSM8K | MATH | Olympiad | Omni | Average |
| ✓ | – | 79.2∗ | 79.8∗ | 81.2∗ | 76.5∗ | 79.2∗ |
| – | ✓ | 87.9∗ | 82.5∗ | 82.7∗ | 80.9∗ | 83.5∗ |
| ✓ | ✓ | 80.0∗ | 74.1∗ | 74.8∗ | 74.7∗ | 75.9∗ |

(b) Qwen 2.5 as PRMs.
| Logic | Proof | GSM8K | MATH | Olympiad | Omni | Average |
| ✓ | – | 84.9∗ | 83.3∗ | 83.5∗ | 82.5∗ | 83.5∗ |
| – | ✓ | 90.5∗ | 87.9∗ | 84.3∗ | 84.8∗ | 86.9∗ |
| ✓ | ✓ | 90.8∗ | 89.1∗ | 84.6∗ | 86.0∗ | 87.6∗ |
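To make the step-level evaluation protocol of §4.2 concrete, the following simplified sketch pools step predictions truncated at the earliest annotated error and computes AUROC. The field names are hypothetical, and this is not the official ProcessBench evaluation script:

# Simplified sketch of step-level AUROC evaluation in the style of
# ProcessBench (hypothetical field names).
from sklearn.metrics import roc_auc_score

def pooled_auroc(examples):
    """examples: dicts with "step_scores" (P("correct") per step) and
    "first_error" (index of the earliest labeled error, or -1 if none)."""
    scores, labels = [], []
    for ex in examples:
        k = ex["first_error"]
        # Keep steps up to and including the first error; all steps if no error.
        kept = ex["step_scores"] if k == -1 else ex["step_scores"][: k + 1]
        for i, s in enumerate(kept):
            scores.append(s)
            labels.append(0 if (k != -1 and i == k) else 1)  # 1 = correct step
    return roc_auc_score(labels, scores)

examples = [
    {"step_scores": [0.9, 0.8, 0.2], "first_error": 2},
    {"step_scores": [0.95, 0.9], "first_error": -1},
]
print(pooled_auroc(examples))  # 1.0 for this toy input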
4.4 Manual Analysis

We manually analyze step-level scores for cases where FOVER improves or degrades re-ranking in Best-of-K relative to the baseline PRMs (§4.1). Because Best-of-K evaluates the final answers of selected solutions without directly evaluating step-level verification by PRMs, we conduct a manual analysis to verify the improvement by FOVER. We analyze step-level scores on GSM8K, ANLI, and the temporal sequences and word sorting tasks in BBH. Refer to Appendix J.3 for detailed results.

Our manual analysis shows that FOVER genuinely improves step-level verification and rarely leads to degradation compared to the baseline PRMs. Among cases where FOVER improved the final answers in Best-of-K, we observe that FOVER often improves step-level verification, particularly on GSM8K and the temporal sequences task, where over 50% of cases are improved. Among cases where FOVER degrades the final answers in Best-of-K, we observe that FOVER degrades the step-level scores in less than 20% of cases on most of the tasks. Our analysis shows that much of the degradation in final answers does not represent a decline in step-level verification. For example, there are many cases where solutions selected by the baselines included incorrect steps even though they produced the correct final answers. When intermediate steps are wrong, PRMs should reject such solutions even if the final answers are correct, so these cases do not represent degradation by FOVER.

Table 7: Verification of an LLM response to MMLU-Pro-NoMath by PRMs based on Llama 3.1 8B. The FOVER PRM improves the scores over the baseline and successfully detects the wrong step (Step 1).
Problem: Nagel clarifies that absolutism is the view that we should (A) [...] (B) never prevent murder [...] (I) avoid murder at all costs
Ground truth answer: (I) avoid murder at all costs
| Step | Solution steps generated by Llama 3.1 8B | Baseline | FOVER |
| Step 0 | Absolutism is a philosophical position that holds that a particular statement is always true or always false, regardless of context or circumstance. In the context of morality, absolutism is the view that a particular action, such as murder, is always right or always wrong, regardless of the circumstances. | 0.84 | 0.91 |
| Step 1 | Option (B) "never prevent murder" is the correct interpretation of absolutism in this context, as it implies that murder is always wrong and should never be prevented, which is a characteristic of an absolutist view. | 0.99 | 0.47 |
| Step 2 | The answer is (B). | 0.98 | 0.99 |
| Solution-level score | | 0.84 | 0.47 |
4.5 Generalization in PRMs Trained on FOVER

Our results show effective generalization of PRMs trained with FOVER. First, FOVER exhibits promising symbol-to-text generalization. The FOVER dataset includes symbolic responses in formats compatible with Z3 and Isabelle, yet it improves PRMs on LLM responses in informal natural language. Furthermore, our results show easy-to-difficult generalization. Although our formal theorem proof task includes errors in responses to GSM8K-level problems, we observe that FOVER improves PRMs on more complex math reasoning tasks, such as MATH and OlympiadBench (§4.2). Finally, FOVER exhibits cross-task generalization and improves PRMs on reasoning tasks that are largely different from those in the training data (§4.1). For example, surprisingly, FOVER improves verification performance on MMLU-Pro-NoMath, a task that is distant from both of our training tasks. As in Table 7, problems in this dataset require complex reasoning and domain knowledge to verify solutions. In this example, the FOVER PRM correctly understands the concept of "absolutism" in Step 0 and detects a logical mistake in Step 1. Although the FOVER dataset does not include knowledge-intensive tasks, it makes PRMs generalize to tasks that require logical reasoning over domain knowledge. This is a notable result that motivates further research into the creation of synthetic data to enhance the verification capabilities of LLMs across diverse reasoning tasks.

5 Conclusion

This work proposes an approach that addresses two challenges in PRMs: automatic creation of training datasets and generalization to diverse reasoning tasks. We introduce FOVER, an approach to train PRMs using step-level error labels annotated by formal verification tools, such as Z3 and Isabelle, without relying on human annotation or inaccurate synthetic labels. We show that training on the tool-annotated error labels from formal logic and theorem proof tasks exhibits cross-task generalization, improving the verification capabilities of LLM-based PRMs across a broad range of reasoning tasks. Our results demonstrate that formal-tool-based annotation provides a scalable and complementary supervision signal for enhancing fine-grained reasoning verification by LLMs.

Acknowledgments and Disclosure of Funding

This work was supported by NSF CAREER Award IIS-2338418. We thank Jin Peng Zhou for providing guidance on the use of his code [68]. We also thank Terufumi Morishita for the valuable discussions and for his assistance with the FLDx2 dataset [27], and we appreciate NLP Colloquium JP for providing the opportunity to connect with him.

References

[1] Alon Albalak, Duy Phung, Nathan Lile, Rafael Rafailov, Kanishk Gandhi, Louis Castricato, Anikait Singh, Chase Blagden, Violet Xiang, Dakota Mahan, and Nick Haber. Big-math: A large-scale, high-quality math dataset for reinforcement learning in language models. arXiv preprint arXiv:2502.17387, 2025.
[2] Art of Problem Solving. AIME problems and solutions, 2025. URL https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions.
[3] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeff Wu. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390, 2023.
[4] Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7870–7881, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.634. URL https://aclanthology.org/2020.emnlp-main.634/.
[5] Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20, 2021. ISBN 9780999241165.
[6] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[7] Leonardo de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems, pages 337–340. Springer, Berlin, Heidelberg, March 2008. URL https://www.microsoft.com/en-us/research/publication/z3-an-efficient-smt-solver/.
[8] Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alexander Wardle-Solano, Hannah Szabó, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander Fabbri, Wojciech Maciej Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, and Dragomir Radev. FOLIO: Natural language reasoning with first-order logic. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22017–22031, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.1229. URL https://aclanthology.org/2024.emnlp-main.1229/.
[9] Peter Hase, Mohit Bansal, Peter Clark, and Sarah Wiegreffe. The unreasonable effectiveness of easy training data for hard tasks. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7002–7024, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.378. URL https://aclanthology.org/2024.acl-long.378/.
[10] Jujie He, Tianwen Wei, Rui Yan, Jiacai Liu, Chaojie Wang, Yimeng Gan, Shiwen Tu, Chris Yuhao Liu, Liang Zeng, Xiaokun Wang, Boyang Wang, Yongcong Li, Fuxiang Zhang, Jiacheng Xu, Bo An, Yang Liu, and Yahui Zhou. Skywork-o1 open series. https://huggingface.co/Skywork, November 2024. URL https://huggingface.co/Skywork.
[11] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=7Bywt2mQsCe.
[12] Philipp Koehn. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-3250/.
[13] Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf.
[14] Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In Natasha Sharygina and Helmut Veith, editors, Computer Aided Verification, pages 1–35, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. ISBN 978-3-642-39799-8.
[15] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[16] Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=IFXTZERXdM7.
[17] Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315–5333, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.291. URL https://aclanthology.org/2023.acl-long.291/.
[18] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=v8L0pN6EOi.
[19] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Regina Barzilay and Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015/.
[20] Llama Team, AI @ Meta. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[21] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
[22] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024.
[23] Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang. Let's reward step by step: Step-level reward model as the navigators for reasoning. arXiv preprint arXiv:2310.10080, 2023.
[24] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 46534–46594. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/91edff07232fb1b55a505a9e9f6c0ff3-Paper-Conference.pdf.
[25] R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1334. URL https://aclanthology.org/P19-1334/.
[26] Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math: Unlocking the potential of SLMs in grade school math. arXiv preprint arXiv:2402.14830, 2024.
[27] Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, and Yasuhiro Sogawa. Enhancing reasoning capabilities of LLMs via principled synthetic logic corpus. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=mljDUaQpln.
[28] Leonardo de Moura and Sebastian Ullrich. The Lean 4 theorem prover and programming language. In Automated Deduction – CADE 28: 28th International Conference on Automated Deduction, Virtual Event, July 12–15, 2021, Proceedings, pages 625–635, Berlin, Heidelberg, 2021. Springer-Verlag. ISBN 978-3-030-79875-8. doi: 10.1007/978-3-030-79876-5_37. URL https://doi.org/10.1007/978-3-030-79876-5_37.
[29] Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial NLI: A new benchmark for natural language understanding. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.441. URL https://aclanthology.org/2020.acl-main.441/.
[30] Tobias Nipkow, Markus Wenzel, and Lawrence C. Paulson. Isabelle/HOL: A Proof Assistant for Higher-Order Logic, volume 2283 of Lecture Notes in Computer Science. Springer, 2002. ISBN 978-3-540-43376-7. doi: 10.1007/3-540-45949-9.
[31] Sam Paech. MMLU-Pro-NoMath, 2024. URL https://huggingface.co/blog/sam-paech/mmlu-pro-nomath.
[32] Sarah Pan, Vladislav Lialin, Sherin Muckatira, and Anna Rumshisky. Let's reinforce step by step. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023. URL https://openreview.net/forum?id=QkdRqpClab.
[33] Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. REFINER: Reasoning feedback on intermediate representations. In Yvette Graham and Matthew Purver, editors, Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1100–1126, St. Julian's, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.eacl-long.67.
[34] Lawrence C. Paulson and Jasmin Christian Blanchette. Three years of experience with Sledgehammer, a practical link between automatic and interactive theorem provers. In Geoff Sutcliffe, Stephan Schulz, and Eugenia Ternovska, editors, IWIL 2010. The 8th International Workshop on the Implementation of Logics, volume 2 of EPiC Series in Computing, pages 1–11. EasyChair, 2012. doi: 10.29007/36dt.
[35] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving, 2020. URL https://arxiv.org/abs/2009.03393.
[36] Qwen Team. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[37] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. URL http://jmlr.org/papers/v21/20-074.html.
[38] William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022.
[39] Stephan Schulz. E – a brainiac theorem prover. AI Communications, 15(2,3):111–126, August 2002. ISSN 0921-7126.
[40] Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. Fine-tuned language models are continual learners. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6107–6122, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.410. URL https://aclanthology.org/2022.emnlp-main.410/.
[41] Amrith Setlur, Nived Rajaraman, Sergey Levine, and Aviral Kumar. Scaling test-time compute without verification or RL is suboptimal. arXiv preprint arXiv:2502.12118, 2025.
[42] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf.
[43] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.
[44] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=qwgfh2fTtN.
[45] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc Le, Ed Chi, Denny Zhou, and Jason Wei. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 13003–13051, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.824. URL https://aclanthology.org/2023.findings-acl.824/.
[46] Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. ProofWriter: Generating implications, proofs, and abductive statements over natural language. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3621–3634, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.317. URL https://aclanthology.org/2021.findings-acl.317/.
[47] The Coq Development Team. The Coq reference manual – release 8.19.0. https://coq.inria.fr/doc/V8.19.0/refman, 2024.
[48] Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. Diagnosing the first-order logical reasoning ability through LogicNLI. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3738–3747, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.303. URL https://aclanthology.org/2021.emnlp-main.303/.
[49] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
[50] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426–9439, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.510. URL https://aclanthology.org/2024.acl-long.510.
[51] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw.
[52] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. URL https://openreview.net/forum?id=y10DM6R2r3.
[53] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=gEZrGCozdqR.
[54] Sean Welleck. Neural theorem proving tutorial. https://github.com/wellecks/ntptutorial, 2023.
[55] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and Christian Szegedy. Autoformalization with large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 32353–32368. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/d0c6bc641a56bebee9d985b937307367-Paper-Conference.pdf.
[56] Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. Logic-RL: Unleashing LLM reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768, 2025.
[57] Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, and Xiaodan Liang. DeepSeek-Prover: Advancing theorem proving in LLMs through large-scale synthetic data. arXiv preprint arXiv:2405.14333, 2024.
[58] Huajian Xin, Daya Guo, Zhihong Shao, Z.Z. Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, and Xiaodan Liang. Advancing theorem proving in LLMs through large-scale synthetic data. In The 4th Workshop on Mathematical Reasoning and AI at NeurIPS'24, 2024. URL https://openreview.net/forum?id=TPtXLihkny.
[59] Wei Xiong, Hanning Zhang, Nan Jiang, and Tong Zhang. An implementation of generative PRM. https://github.com/RLHFlow/RLHF-Reward-Modeling, 2024.
[60] Kaiyu Yang, Jia Deng, and Danqi Chen. Generating natural language proofs with verifier-guided search. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 89–105, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.7. URL https://aclanthology.org/2022.emnlp-main.7.
[61] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J Prenger, and Animashree Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 21573–21612. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/4441469427094f8873d0fecb0c4e1cee-Paper-Datasets_and_Benchmarks.pdf.
[62] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=N8N0hgNDRt.
[63] Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. ReST-MCTS*: LLM self-training via process reward guided tree search. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=8rcFOqEud5.
[64] Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. In The 4th Workshop on Mathematical Reasoning and AI at NeurIPS'24, 2024. URL https://openreview.net/forum?id=CxHRoTLmPX.
[65] Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. In Findings of the Association for Computational Linguistics: ACL 2025 (to appear), 2025.
[66] Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. ProcessBench: Identifying process errors in mathematical reasoning. In The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) (to appear), 2025.
[67] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, and Zheyan Luo. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Yixin Cao, Yang Feng, and Deyi Xiong, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 400–410, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-demos.38. URL https://aclanthology.org/2024.acl-demos.38/.
[68] Jin Peng Zhou, Charles E Staats, Wenda Li, Christian Szegedy, Kilian Q Weinberger, and Yuhuai Wu. Don't trust: Verify – grounding LLM quantitative reasoning with autoformalization. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=V5tdi14ple.

Table of Contents of Appendix
A Limitations
B Additional Related Work
C Model Access and Software Libraries
  C.1 Models
  C.2 Existing PRMs Compared
  C.3 Libraries
D FOVER Dataset
  D.1 Examples
  D.2 Dataset Statistics
E Dataset Creation: Formal Logic
  E.1 Base Dataset and Initial Response Generation for Formal Logic
  E.2 Step-level Verification in Formal Logic
F Dataset Creation: Formal Proof
  F.1 Base Dataset and Initial Response Generation for Formal Proof
  F.2 Informal to Formal Conversion in Formal Proof
  F.3 Step-level Proof Verification in Isabelle
G Input Format and Postprocessing for LLM-based PRMs
  G.1 Input Format
  G.2 Postprocessing
  G.3 Existing PRMs
H Training Settings
  H.1 Training Data Format
  H.2 Training Parameters
I Evaluation Settings
  I.1 Evaluation Datasets
  I.2 Downstream Evaluation: Best-of-K
J Additional Results
  J.1 Reference Methods for Best-of-K: Majority-of-K and Pass@K
  J.2 Hyperparameters that Influence the Cross-task Generalization of Training on FOVER
  J.3 Manual Analysis
K License
L Computational Resources and Execution Time

A Limitations

We expect future research to explore the following limitations of this paper.

Small model size. Due to computational constraints, this work evaluates PRMs only with 8B-class LLMs. Prior work also typically uses these relatively small LLMs to create LLM-based PRMs [66, 65, 10], but evaluating larger models could provide further insight into scalability.

Evaluation approaches. We evaluate PRMs using the step-level verification task in ProcessBench and Best-of-K performance on reasoning tasks, both of which are standard practices for assessing PRMs [18, 50, 66, 65]. A recent paper [65] employs the same combination of evaluation approaches, which we take as support for the sufficiency of our current evaluation setting. Nonetheless, additional evaluation methods, such as step-level search [23, 43] and reinforcement learning [32, 63], would offer further insight into the performance of our PRMs.
Difficulty of source tasks for our training data. The training data in this project targets improving PRMs based on 8B-class LLMs and includes tasks that may not be difficult for larger state-of-the-art LLMs, such as the GSM8K-level math reasoning tasks in the formal theorem proof task. When applying our approach to improve PRMs based on stronger models, we may need to use more difficult tasks so that the LLMs make meaningful mistakes.

Methods to improve cross-task generalization. Our approach to generalizing PRMs to broad reasoning tasks relies on the inherent cross-task generalization capabilities of LLMs. Although our results already exhibit effective cross-task generalization (§4.5), we expect future work to explore transfer learning methods that further enhance it. For example, using pretraining data during fine-tuning, a technique known as rehearsal [42, 40], is a popular approach to avoid catastrophic forgetting. In another direction, Morishita et al. [27] show that training on their formal logic datasets with the RecAdam optimizer [4] improves LLMs on broad reasoning tasks.

B Additional Related Work

Applications of PRMs. PRMs can be used to supervise LLM reasoning during training and inference. First, for training, PRMs can generate reward signals, particularly in reinforcement learning settings [32, 63]. They can be applied either to re-rank candidate responses from the policy or to provide direct reward supervision [49]. Second, for inference-time improvement, PRMs can guide response selection and refinement through methods such as Best-of-K [17], self-correction [38, 24], and step-level search [23, 43].

Cross-task generalization in LLMs. Our work is the first to conduct an in-depth analysis of cross-task generalization in PRMs, although cross-task generalization in LLMs has been widely studied on general tasks. Early studies of LLMs, such as T5 [37] and FLAN [53], already observe generalization to unseen tasks. Easy-to-hard generalization [3, 9, 44] is a challenging case in which LLMs trained on simpler tasks show improved performance on more complex ones. Directly related to our work, recent work [27, 56] shows that synthetic and symbolic training data can improve the general reasoning capabilities of LLMs. These observations motivate us to use formal logic and theorem proof to improve LLM-based PRMs on diverse out-of-distribution reasoning tasks, assuming the generalization of LLMs' verification capabilities.

C Model Access and Software Libraries

This section provides details of the LLMs and libraries used in our experiments.

C.1 Models

We use models provided on the Hugging Face Hub.

Base models. We use meta-llama/Llama-3.1-8B-Instruct and Qwen/Qwen2.5-7B-Instruct as base models for our PRMs. We also use these models to generate the initial responses used in creating the FOVER dataset, as well as the K = 5 responses in the Best-of-K evaluation (§4.1).

Conversion in Formal Proof. As explained in Section 3, we use meta-llama/Llama-3.3-70B-Instruct for converting informal statements and solutions into the Isabelle format.

C.2 Existing PRMs Compared

This section provides details of the state-of-the-art PRMs we evaluate in Section 4. Table 8 shows the details of the models. We acquire these models from the Hugging Face Hub and use vLLM [15] to generate reward scores (see https://docs.vllm.ai/en/latest/models/supported_models.html#reward-modeling-task-reward).
we evaluate in Section 4.1. PRMs Source Base Datasets Error Annotation RLHFlow-Llama3.1-8B-DeepSeek [59] RLHFlow/Llama3.1-8B-PRM-Deepseek-Data GSM8K, MATH Math-Shepherd [50] RLHFlow-Llama3.1-8B-Mistral [59] RLHFlow/Llama3.1-8B-PRM-Mistral-Data GSM8K, MATH Math-Shepherd [50] Qwen2.5-Math-7B-PRM800K [66] Qwen/Qwen2.5-Math-7B-PRM800K MATH Human annotation Qwen2.5-Math-PRM-7B [65] Qwen/Qwen2.5-Math-PRM-7B Private DataMath-Shepherd [50] & LLM-as-a-Judge Qwen2.5-7B-Skywork-PRM [10] Skywork/Skywork-o1-Open-PRM-Qwen-2.5-7B Hidden Hidden C.3 Libraries Inference Code. We use vLLM [15] for accelerating LLM inference. Training Code. We use LLaMA-Factory [67] for training. 1https://docs.vllm.ai/en/latest/models/supported_models.html# reward-modeling-task-reward 17 D F OVERDataset This section provides details of our F OVERdataset. D.1 Examples We provide examples from the F OVERdataset. FOVER-FormalLogic. Here is an example of the data in the formal logic task based on the initial responses from Llama 3.1 8B. { 'id':'fldx2-train-058709_Llama-3.1-8B-Instruct ', 'error_labels ': [True, False, False, True, False], 'messages ': [ { 'role ':'user ', 'content ':'** Problem ** Based on the provided facts ($context$), either prove or disprove the hypothesis or state that it is unknown. The facts and the hypothesis are written in logical formulas as follows: capital letters such as "{A}", "{B}", "{AB}" are predicates, small letters such as "{a}", "{b}", "{ab}" are constants, "&" is logical conjunction, "v" is logical disjunction, " ¬" is negation, "->" is implication, "(x)" is "for all x", and "(Ex)" is "for some x".,→ ,→ ,→ ,→ ,→ ,→ $hypothesis$: {AB}{a} $context$: fact1: ( ¬{B}{a} & ¬{A}{a}) -> ¬{A}{dk} fact2: ( ¬{AQ}{et} & ¬{DF}{et}) fact3: ¬{CO}{a} -> ( ¬{DR}{a} & ¬{CK}{a}) fact4: ¬{HS}{a} fact5: (x): {C}x -> ( ¬{B}x & ¬{A}x) fact6: ¬{JF}{a} -> ¬{AA}{a} fact7: ( ¬{JJ}{du} & ¬{AA}{du}) fact8: ( ¬{BB}{a} & ¬{HQ}{a}) fact9: ¬{AA}{a} fact10: ¬{A}{a} -> ¬{AA}{a} fact11: ¬{DK}{a} fact12: ¬{E}{b} -> ¬(¬{D}{b} & ¬{C}{b}) fact13: ¬(¬{D}{b} & ¬{C}{b}) -> {C}{a} fact14: ¬{A}{ci} fact15: ¬{JC}{a} fact16: ¬{A}{r} -> ¬{DQ}{r} fact17: ¬{A}{a} -> ( ¬{AA}{a} & ¬{AB}{a}) fact18: ( ¬{HP}{a} & ¬{C}{a}) fact19: ¬{A}{jg} ** Task ** Your task is to evaluate the accuracy of each step in the provided solution to the above question. For each step, respond with "correct" if the reasoning is logically valid and mathematically sound, or if the step is a general statement or transition that does not contain reasoning. Respond with "incorrect" if the step includes any errors or flawed logic.,→ ,→ ,→ ,→ ,→ ** Sotluion ** fact17 & fact10 -> int1: ¬{A}{a} -> ¬{AB}{a} ' }, {'role ':'assistant ','content ':'correct '}, {'role ':'user ','content ':'fact10 -> int2: ¬{A}{a} '}, {'role ':'assistant ','content ':'incorrect '}, {'role ':'user ','content ':'int2 -> int3: ¬{AB}{a} '}, 18 {'role ':'assistant ','content ':'incorrect '}, {'role ':'user ','content ':'int3 -> ¬hypothesis '}, {'role ':'assistant ','content ':'correct '}, {'role ':'user ','content ':'The final answer is DISPROVED '}, {'role ':'assistant ','content ':'incorrect '} ] FOVER-FormalProof. Here is an example of the data in the formal proof task based on the initial responses from Llama 3.1 8B. { 'id':'bigmath_orca_math_138943_Llama-3.1-8B-Instruct ', 'error_labels ': [True, False, True, True, True], 'messages ': [ { 'role ':'user ', 'content ':'** Problem ** Generate a proof for the following theorem in the Isabelle proof assistant format. 
FOVER-FormalProof. Here is an example of the data in the formal proof task based on the initial responses from Llama 3.1 8B.

{
  'id': 'bigmath_orca_math_138943_Llama-3.1-8B-Instruct',
  'error_labels': [True, False, True, True, True],
  'messages': [
    {
      'role': 'user',
      'content': '** Problem **
Generate a proof for the following theorem in the Isabelle proof assistant format.

theorem example:
  assumes "(Stella_Artois_speed::real) = 150"
    and "(total_distance::real) = 500"
    and "(total_time::real) = 1.694915254237288"
    and "(first_car_speed::real) = (total_distance - Stella_Artois_speed * total_time) / total_time"
  shows "first_car_speed = 145.2"
** Task **
Your task is to evaluate the accuracy of each step in the provided solution to the above question. For each step, respond with "correct" if the reasoning is logically valid and mathematically sound, or if the step is a general statement or transition that does not contain reasoning. Respond with "incorrect" if the step includes any errors or flawed logic.

** Solution **
have "Stella_Artois_speed * total_time = 150 * total_time"'
    },
    {'role': 'assistant', 'content': 'correct'},
    {
      'role': 'user',
      'content': 'then have "Stella_Artois_speed * total_time = 254.1794318175"
then have "total_distance - Stella_Artois_speed * total_time = 245.8205681825"'
    },
    {'role': 'assistant', 'content': 'incorrect'},
    {'role': 'user', 'content': 'then have "(total_distance - Stella_Artois_speed * total_time) / total_time = 145.2"'},
    {'role': 'assistant', 'content': 'correct'},
    {'role': 'user', 'content': 'then have "first_car_speed = 145.2"'},
    {'role': 'assistant', 'content': 'correct'},
    {'role': 'user', 'content': 'thus ?thesis'},
    {'role': 'assistant', 'content': 'correct'}
  ]
}

D.2 Dataset Statistics

This section provides the detailed statistics of the FOVER dataset. Please also refer to Table 2.

Raw dataset. Table 9 shows the statistics of the raw FOVER dataset, which is the full version of Table 2b.

Table 9: Statistics of the FOVER dataset. For each split, we report the number of solution-level and step-level responses and the percentage that contain errors.

Llama 3.1 8B, Formal Logic — Train: 10000 solutions (70.0% error), 43082 steps (43.8% error); Validation: 360 solutions (70.0%), 1428 steps (44.5%); Test: 360 solutions (70.0%), 1417 steps (45.2%)
Llama 3.1 8B, Formal Proof — Train: 10000 solutions (70.0% error), 69532 steps (13.3% error); Validation: 360 solutions (70.0%), 2286 steps (14.0%); Test: 360 solutions (70.0%), 2499 steps (13.8%)
Qwen 2.5 7B, Formal Logic — Train: 10000 solutions (70.0% error), 34879 steps (41.3% error); Validation: 360 solutions (70.0%), 1208 steps (39.2%); Test: 360 solutions (70.0%), 1225 steps (41.5%)
Qwen 2.5 7B, Formal Proof — Train: 10000 solutions (70.0% error), 69452 steps (14.1% error); Validation: 360 solutions (70.0%), 2297 steps (16.7%); Test: 360 solutions (70.0%), 2501 steps (16.6%)

E Dataset Creation: Formal Logic

This section provides details of the creation process of the formal logic task in the FOVER dataset, outlined in Section 3.

E.1 Base Dataset and Initial Response Generation for Formal Logic

We use FLDx2 [27] as the base dataset for our formal logic task. We use the symbolic version of the dataset and remove cases where step-level verification is difficult. Specifically, we removed cases whose reasoning steps include "assump," which is used in cases such as proof by contradiction.

The following is an example of a few-shot demonstration for the initial generation in our formal logic task. We provide six examples as a demonstration.

{
  'role': 'user',
  'content': 'Based on the provided facts ($context$), either prove or disprove the hypothesis or state that it is unknown. The facts and the hypothesis are written in logical formulas as follows: capital letters such as "{A}", "{B}", "{AB}" are predicates, small letters such as "{a}", "{b}", "{ab}" are constants, "&" is logical conjunction, "v" is logical disjunction, "¬" is negation, "->" is implication, "(x)" is "for all x", and "(Ex)" is "for some x".

Don't generate anything other than the proof and proof_label. Don't generate the
proof by contradiction. If the hypothesis is disproved, provide a direct counterexample. If the proof_label is PROVED, your proof should end with the 'hypothesis '. If the proof_label is DISPROVED, your proof should end with '¬hypothesis '.,→ ,→ ,→ $hypothesis$: ({AB}{c} & ¬{B}{c}) $context$: fact1: (x): ¬{D}x -> ({A}x & {C}x) fact2: ¬({B}{a} & ¬{A}{a}) fact3: ¬{AB}{b} -> ¬{C}{d} fact4: ¬{C}{a} -> ¬{A}{d} fact5: ¬{A}{a} fact6: ({AB}{c} & ¬{B}{c}) -> {C}{d} fact7: ¬({F}{b} v {D}{b}) -> ¬{D}{a} fact8: {A}{a} -> ({AB}{c} & ¬{B}{c}) fact9: ({AB}{c} & ¬{AA}{c}) -> {B}{b} fact10: ({AB}{c} & ¬{B}{c}) -> {C}{b} ' }, { 'role ':'assistant ', 'content ': '$proof$: fact6 -> int1: ¬{C}{d} -> ¬({AB}{c} & ¬{B}{c}); fact3 & int1 -> int2: ¬{AB}{b} -> ¬({AB}{c} & ¬{B}{c}); $proof_label$: UNKNOWN ' } E.2 Step-level Verification in Formal Logic We use Z3 [ 7] for step-level verification of the formal logic task. Specifically, we use the checker provided by the FLDx2 paper [ 27].2As explained in Figure 3a, we first postprocess each step in solutions to independent logical step and check the validity using Z3. 2https://github.com/hitachi-nlp/FLD-generator/blob/00d12c4a9132a4fb43cd77f24db03ea7f5b27877/ FLD_generator/formula_checkers/z3_logic_checkers/checkers.py#L179 21 F Dataset Creation: Formal Proof This section provides details of the creation process of the formal proof task in the FOVERdataset, outlined in Section 3. F.1 Base Dataset and Initial Response Generation for Formal Proof We use informal responses from LLMs to GSM8K-level math word problems: GSM8K [ 6], GSM8K- based cases in MetaMathQA [ 62], and math word problems in Big-Math [ 1]. We select these relatively simple math reasoning problems to make the informal-to-formal conversion feasible. Math reasoning problems that involve more complex concepts make it difficult for existing LLMs to convert the solutions into Isabelle’s format because they are often not well-trained on Isabelle, but we expect further work to create training datasets based on those cases to improve PRMs further. The following is an example of a few-shot demonstration for the initial generation from GSM8K. We use a different set of few-shot demonstrations for each dataset. Refer to our code for further details. [ { 'role ':'user ', 'content ':'There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? ',→ ,→ }, { 'role ':'assistant ', 'content ':'There were originally 15 trees in the grove. After the grove workers planted trees today, there are now 21 trees. So, the grove workers planted 21 - 15 = 6 trees today. Therefore, the answer (arabic numerals) is 6. ' } ] F.2 Informal to Formal Conversion in Formal Proof As explained in Section 3, we use meta-llama/Llama-3.3-70B-Instruct with few-shot prompt- ing for converting the informal solutions from LLMs into the Isabelle format. Our conversion is based on the ideas in prior work [ 55,68]. The following is an example of a few-shot demonstration for the conversion. Few-Shot Demonstration for Statement Conversion { 'role ':'user ', 'content ':'Your task is to convert the following informal statement into a formal statement in Isabelle 2022. ,→ * Make | https://arxiv.org/abs/2505.15960v1 |
your formal statement consistent with the provided Informal Statement. * The final answer can be wrong, but your formal statement should be faithful to the informal statement and should not correct the mistakes in the informal statement.,→ ,→ * You should not use new variables in "shows". * The "shows" part is expected to be formatted as "variable = number" (e.g., "x = 3"). ,→ ** Informal Statement ** Gloria wants to buy the $129,000 mountain cabin that her friend Alfonso is selling. She only has $150 in cash. She intends to raise the remaining amount by selling her mature trees for lumber. She has 20 cypress trees, 600 pine trees, and 24 maple trees. She will get $100 for each cypress tree, $300 for a maple tree, and $200 per pine tree. After paying Alfonso for the cabin, how much money will Gloria have left? ',→ ,→ ,→ ,→ ,→ }, 22 { 'role ':'assistant ', 'content ':'theorem example: (* Gloria wants to buy the $129,000 mountain cabin that her friend Alfonso is selling. *) ,→ assumes "(Cabin_cost::nat) = 129000" (* She only has $150 in cash. *) and "(Gloria_cash::nat) = 150" (* She intends to raise the remaining amount by selling her mature trees for lumber. *) ,→ (* She has 20 cypress trees, 600 pine trees, and 24 maple trees. *) and "(Cypress_count::nat) = 20" and "(Pine_count::nat) = 600" and "(Maple_count::nat) = 24" (* She will get $100 for each cypress tree, $300 for a maple tree, and $200 per pine tree. *) ,→ and "(Cypress_price::nat) = 100" and "(Maple_price::nat) = 300" and "(Pine_price::nat) = 200" (* After paying Alfonso for the cabin, how much money will Gloria have left? *) ,→ and "(After_paying_Alfonso::nat) = Gloria_cash + Cypress_count * Cypress_price + Maple_count * Maple_price + Pine_count * Pine_price - Cabin_cost",→ ,→ (* Final Answer -- The answer is 200. *) shows "After_paying_Alfonso = 200" ' } Few-Shot Demonstration for Proof Conversion { 'role ':'user ', 'content ':'Your task is to convert the following informal proof into a formal proof in Isabelle 2022. ,→ * The input informal proof can be wrong, but your formal proof should be faithful to the informal proof and should not correct the mistakes in the informal proof.,→ ,→ * In your formal proof, use variables defined in the provided Formal Statement. ,→ * Use sledgehammer. ※You should use defined variables whenever possible and should not write equations that only contain numbers. ,→ * The last step is expected to be the same as the equation shown in the “shows” section of the Formal Statement. ,→ * Include informal statements and proof as comments. ** Informal Statement ** Gloria wants to buy the $129,000 mountain cabin that her friend Alfonso is selling. She only has $150 in cash. She intends to raise the remaining amount by selling her mature trees for lumber. She has 20 cypress trees, 600 pine trees, and 24 maple trees. She will get $100 for each cypress tree, $300 for a maple tree, and $200 per pine tree. After paying Alfonso | https://arxiv.org/abs/2505.15960v1 |
for the cabin, how much money will Gloria have left?,→ ,→ ,→ ,→ ,→ Final Answer -- The answer is 200. ** Informal Proof ** Gloria needs 129,000 dollars for the cabin, but she only has 150 dollars. So she needs 129,000 - 150 = 128,850 dollars. She will get 100 dollars for each of the 20 cypress trees, which is 20 x 100 = 2,000 dollars. ,→ She will get 300 dollars for each of the 24 maple trees, which is 24 x 300 = 7,200 dollars. ,→ She will get 200 dollars for each of the 600 pine trees, which is 600 x 200 = 120,000 dollars. ,→ 23 In total, she will get 2,000 + 7,200 + 120,000 = 129,200 dollars from selling the trees. ,→ After selling the trees, she will have 129,200 dollars. She will pay 129,000 dollars for the cabin, so she will have 129,200 - 129,000 = 200 dollars left. ,→ Therefore, the answer (arabic numerals) is 200. ** Formal Statement ** theorem example: (* Gloria wants to buy the $129,000 mountain cabin that her friend Alfonso is selling. *) ,→ assumes "(Cabin_cost::nat) = 129000" (* She only has $150 in cash. *) and "(Gloria_cash::nat) = 150" (* She intends to raise the remaining amount by selling her mature trees for lumber. *) ,→ (* She has 20 cypress trees, 600 pine trees, and 24 maple trees. *) and "(Cypress_count::nat) = 20" and "(Pine_count::nat) = 600" and "(Maple_count::nat) = 24" (* She will get $100 for each cypress tree, $300 for a maple tree, and $200 per pine tree. *) ,→ and "(Cypress_price::nat) = 100" and "(Maple_price::nat) = 300" and "(Pine_price::nat) = 200" (* After paying Alfonso for the cabin, how much money will Gloria have left? *) ,→ and "(After_paying_Alfonso::nat) = Gloria_cash + Cypress_count * Cypress_price + Maple_count * Maple_price + Pine_count * Pine_price - Cabin_cost",→ ,→ (* Final Answer -- The answer is 200. *) shows "After_paying_Alfonso = 200" ' }, { 'role ':'assistant ', 'content ':'theorem example: (* Gloria wants to buy the $129,000 mountain cabin that her friend Alfonso is selling. *) ,→ assumes "(Cabin_cost::nat) = 129000" (* She only has $150 in cash. *) and "(Gloria_cash::nat) = 150" (* She intends to raise the remaining amount by selling her mature trees for lumber. *) ,→ (* She has 20 cypress trees, 600 pine trees, and 24 maple trees. *) and "(Cypress_count::nat) = 20" and "(Pine_count::nat) = 600" and "(Maple_count::nat) = 24" (* She will get $100 for each cypress tree, $300 for a maple tree, and $200 per pine tree. *) ,→ and "(Cypress_price::nat) = 100" and "(Maple_price::nat) = 300" and "(Pine_price::nat) = 200" (* After paying Alfonso for the cabin, how much money will Gloria have left? *) ,→ and "(After_paying_Alfonso::nat) = Gloria_cash + Cypress_count * Cypress_price + Maple_count * Maple_price + Pine_count * Pine_price - Cabin_cost",→ ,→ (* Final Answer -- The answer is 200. *) shows "After_paying_Alfonso = 200" proof - (* Gloria needs 129,000 dollars for the cabin, but she only has 150 dollars. *) ,→ | https://arxiv.org/abs/2505.15960v1 |
(* So she needs 129,000 - 150 = 128,850 dollars. *) have "Cabin_cost - Gloria_cash = 128850" sledgehammer 24 (* She will get 100 dollars for each of the 20 cypress trees, which is 20 ×100 = 2,000 dollars. *) ,→ then have "Cypress_count * Cypress_price = 2000" sledgehammer (* She will get 300 dollars for each of the 24 maple trees, which is 24 × 300 = 7,200 dollars. *) ,→ then have "Maple_count * Maple_price = 7200" sledgehammer (* She will get 200 dollars for each of the 600 pine trees, which is 600 × 200 = 120,000 dollars. *) ,→ then have "Pine_count * Pine_price = 120000" sledgehammer (* In total, she will get 2,000 + 7,200 + 120,000 = 129,200 dollars from selling the trees. *) ,→ then have "Cypress_count * Cypress_price + Maple_count * Maple_price + Pine_count * Pine_price = 129200" ,→ sledgehammer (* After selling the trees, she will have 129,200 dollars. *) (* She will pay 129,000 dollars for the cabin, so she will have 129,200 - 129,000 = 200 dollars left. *) ,→ then have "(Cypress_count * Cypress_price + Maple_count * Maple_price + Pine_count * Pine_price) - Cabin_cost = 200" ,→ sledgehammer (* Therefore, the answer (arabic numerals) is 200. *) then have "After_paying_Alfonso = 200" sledgehammer show ?thesis sledgehammer qed' } F.3 Step-level Proof Verification in Isabelle Sledgehammer in Isabelle. From the above conversion process, we get formal statements and proofs with “sledgehammer”. In Isabelle, we need to manually specify lemmas and theorems to construct proofs. For example, in the below toy example, we need to manually specify rule add_assoc to properly prove the theorem. theory Addition_Assoc_Manual imports Main begin lemma add_assoc_manual: "(a + b) + c = a + (b + c)" proof - show ?thesis by (rule add_assoc) qed end However, this process can be labor-intensive and time-consuming. Isabelle’s Sledgehammer tool [ 34] automates this step by invoking external automated theorem provers (ATPs) to suggest proof steps or entire proofs. It translates the current proof state into a form suitable for ATPs, runs them, and then attempts to reconstruct the suggested proof within Isabelle’s logical framework. This significantly reduces the manual effort required, improves productivity, and bridges the gap between interactive and automated proving. Using Sledgehammer, we can simplify the above theorem as follows to verify it using Isabelle. theory Addition_Assoc_Sledgehammer imports Main begin 25 lemma add_assoc_manual: "(a + b) + c = a + (b + c)" proof - show ?thesis sledgehammer qed end We use Sledgehammer to facilitate the informal-to-formal conversion by removing the need to specify lemmas during conversion. Note that we use a wrapper for Isabelle [54] for using Sledgehammer. Format verification. As we use LLMs for the conversion, it is possible that the conversion generates an invalid format that cannot be verified using Isabelle. To filter out theorems in the invalid format, we use the “sorry” keyword of Isabelle. The “sorry” keyword serves as a placeholder for incomplete or unproven proofs, allowing the theorem to be accepted by the system without a formal justification. By inserting | https://arxiv.org/abs/2505.15960v1 |
“sorry” into all generated proof steps, we can isolate and verify only the syntactic and semantic well-formedness of theorems. For example, if the following theorem gets errors from Isabelle, we can detect syntax errors and filter this theorem out from our dataset. In this case, the expression babysitting_minutes × (Weng_hourly_wage / 60) contains the symbol ×, which is not a valid multiplication operator in Isabelle syntax. theorem example: assumes "(Weng_hourly_wage::real) = 12" and "(babysitting_minutes::real) = 50" and "(babysitting_hours::real) = babysitting_minutes / 60" and "(Weng_earnings::real) = Weng_hourly_wage * babysitting_hours" shows "Weng_earnings = 10" proof - have "Weng_hourly_wage / 60 = 0.20" sorry then have "babysitting_minutes ×(Weng_hourly_wage / 60) = 10" sorry then have "Weng_earnings = 10" sorry thus ?thesis sorry qed For this input, Isabelle returns the following error. Step error: Inner syntax error (line 1)\nat \"? ( Weng_hourly_wage / 60 ) = 10\"\nFailed to parse prop\nAt command \"have\" (line 1) ,→ Step-level verification. By default, Isabelle halts at the first encountered error and does not provide a step-by-step verification of a proof. To enable independent verification of each step in a multi-step proof, we insert the “sorry” keyword in all but one step. This allows Isabelle to type-check and parse each step individually, even if other steps are incomplete or invalid. The following example is for verifying the third step independently. For each theorem, we run Isabelle once per step to isolate and validate its correctness. theorem example: assumes "(wallet_cost::nat) = 100" and "(betty_savings::nat) = wallet_cost div 2" and "(parent_contribution::nat) = 15" and "(grandparent_contribution::nat) = 2 * parent_contribution" and "(total_savings::nat) = betty_savings + parent_contribution + grandparent_contribution" ,→ and "(additional_needed::nat) = wallet_cost - total_savings" shows "additional_needed = 5" 26 proof - have "betty_savings = wallet_cost div 2" sorry then have "betty_savings = 50" sorry have "grandparent_contribution = 2 * parent_contribution" sledgehammer then have "grandparent_contribution = 30" sorry then have "parent_contribution + grandparent_contribution = 45" sorry then have "total_savings = 95" sorry then have "additional_needed = wallet_cost - total_savings" sorry then have "additional_needed = 5" sorry thus ?thesis sorry 27 G Input Format and Postprocessing for LLM-based PRMs This section provides details of the input format and postprocessing of LLM-based PRMs, including our F OVERPRMs, the baseline PRMs, and the state-of-the-art PRMs G.1 Input Format We describe the input format for FOVERPRMs and the baseline LLM-based PRMs, which are based on Llama 3.1 8B and Qwen 2.5 7B. FOVERPRMs are trained on the FOVERdataset, so the input format has the same format as the training data. The only difference is that we replace all step-level labels with “correct” in the input. This preprocessing allows us to provide the whole input once to get the step-level predictions for all steps. The following is an example input for GSM8K. [ { 'role ':'user ', 'content ':'** Problem ** Alice is 7 years older than Beth, who is 5 years older than Erica. What is the difference between the ages of Alice and Erica, if Erica is 30 years old?,→ ,→ ** Task ** Your task is to evaluate the accuracy of each step in the provided | https://arxiv.org/abs/2505.15960v1 |
solution to the above question. For each step, respond with "correct" if the reasoning is logically valid and mathematically sound, or if the step is a general statement or transition that does not contain reasoning. Respond with "incorrect" if the step includes any errors or flawed logic.

** Solution **
Since Erica is 30 years old, and Beth is 5 years older than Erica, Beth is 30 + 5 = 35 years old.'
  },
  {'role': 'assistant', 'content': 'correct'},
  {'role': 'user', 'content': 'Alice is 7 years older than Beth, who is 35 years old.'},
  {'role': 'assistant', 'content': 'correct'},
  {'role': 'user', 'content': 'So, Alice is 35 + 7 = 42 years old.'},
  {'role': 'assistant', 'content': 'correct'},
  {'role': 'user', 'content': "The difference between Alice's age and Erica's age is 42 - 30 = 12 years."},
  {'role': 'assistant', 'content': 'correct'},
  {'role': 'user', 'content': 'Therefore, the answer (arabic numerals) is 12.'},
  {'role': 'assistant', 'content': 'correct'}
]

G.2 Postprocessing

We describe the postprocessing for FOVER PRMs and the baseline LLM-based PRMs.

Extracting logits. Since we use causal LLMs as PRMs, we extract the model's predictions for the tokens immediately preceding the dummy step-level labels (e.g., "correct") in the input.

Computing step-level scores. At each identified position, we extract the logits corresponding to the tokens "correct" and "incorrect". We then apply the softmax function over these two logits to compute the probability assigned to the "correct" token. This probability serves as the step-level score in our PRMs. The sketch below illustrates this procedure.
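The following is a minimal sketch of this scoring procedure, assuming a Hugging Face-style causal LM; the function and variable names are ours, not from the released code.

```python
import torch

def step_scores(logits: torch.Tensor, label_positions: list[int],
                correct_id: int, incorrect_id: int) -> list[float]:
    """Compute step-level scores from model logits.

    logits: (seq_len, vocab_size) logits for the whole annotated input.
    label_positions: token positions immediately preceding each dummy
        "correct" label in the input.
    """
    scores = []
    for pos in label_positions:
        # Logits of the two label tokens at this position.
        pair = logits[pos, [correct_id, incorrect_id]]
        # Softmax over the two logits; the probability assigned to
        # "correct" serves as the step-level score.
        scores.append(torch.softmax(pair, dim=-1)[0].item())
    return scores
```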
G.3 Existing PRMs

PRMs based on Llama 3.1 8B. In RLHFlow-Llama3.1-8B-DeepSeek and RLHFlow-Llama3.1-8B-Mistral [59], the input format is mostly similar to ours, with the key difference being the use of "+" and "-" instead of "correct" and "incorrect" (https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/main/math-rm). For these models, we apply our input format and postprocessing procedures with a simple substitution of "correct" with "+".

PRMs based on Qwen 2.5 7B. Qwen2.5-Math-7B-PRM800K [66] and Qwen2.5-Math-PRM-7B [65] are supported by vLLM [15]. We follow the input format specified in their respective model descriptions and adopt the reward modeling implemented in vLLM (https://docs.vllm.ai/en/latest/models/pooling_models.html). For Qwen2.5-7B-Skywork-PRM [10], we use code provided by the authors (https://github.com/SkyworkAI/skywork-o1-prm-inference).

H Training Settings

This section provides details of the training settings for our FOVER PRMs.

H.1 Training Data Format

To balance step-level error labels ("correct" or "incorrect"), we set mask_history: True in LLaMA-Factory [67], which configures the model to use only the last step of each conversation during training. We postprocess the training data by truncating reasoning steps so that the final steps are balanced between "correct" and "incorrect" labels. This approach is functionally equivalent to selectively masking certain steps to achieve label balance, but is more straightforward to implement within LLaMA-Factory.

For example, from the case in Appendix D.1, we can make a training case for the "incorrect" label. In each instance, models are only trained on the prediction of a single token, "correct" or "incorrect" in the last step, and the other tokens are masked.

{
  'id': 'bigmath_orca_math_138943_Llama-3.1-8B-Instruct',
  'error_labels': [True, False, True, True, True],
  'messages': [
    {
      'role': 'user',
      'content': '** Problem **
Generate a proof for the following theorem in the Isabelle proof assistant format.

theorem example:
  assumes "(Stella_Artois_speed::real) = 150"
    and "(total_distance::real) = 500"
    and "(total_time::real) = 1.694915254237288"
    and "(first_car_speed::real) = (total_distance - Stella_Artois_speed * total_time) / total_time"
  shows "first_car_speed = 145.2"

** Task **
Your task is to evaluate the accuracy of each step in the provided solution to the above question. For each step, respond with "correct" if the reasoning is logically valid and mathematically sound, or if the step is a general statement or transition that does not contain reasoning. Respond with "incorrect" if the step includes any errors or flawed logic.

** Solution **
have "Stella_Artois_speed * total_time = 150 * total_time"'
    },
    {'role': 'assistant', 'content': 'correct'},
    {
      'role': 'user',
      'content': 'then have "Stella_Artois_speed * total_time = 254.1794318175"
then have "total_distance - Stella_Artois_speed * total_time = 245.8205681825"'
    },
    {'role': 'assistant', 'content': 'incorrect'},
  ]
}

H.2 Training Parameters

We fine-tune all model parameters and do not use parameter-efficient techniques. We use the AdamW optimizer [21] and select the learning rate based on the average Best-of-K performance on the validation tasks: Orca-Math [26] and two tasks in BBH [45] (Logical Deduction (three objects) and Boolean Expressions). We evaluate models trained with the learning rates 1e-6, 2e-6, 5e-6, and 1e-5, and select the model with the best average performance on the validation tasks. We use the parameters in Table 10 for all models, and we did not conduct hyperparameter tuning for these parameters. Please refer to the configuration files in our code for further details.

Table 10: Hyperparameters in training on FOVER

Parameter                                | Value
Number of Epochs                         | 1
Batch size                               | 32
Learning Rate Warm up and Decay Strategy | Linear
Learning Rate Warm up Ratio              | 0.5

I Evaluation Settings

This section provides details of the evaluation in Section 4.

I.1 Evaluation Datasets

Table 11 shows the sources of the datasets evaluated in Best-of-K in Section 4.

Table 11: Datasets evaluated in Best-of-K experiments.

Dataset          | Source
GSM8K            | https://huggingface.co/datasets/openai/gsm8k
MATH             | https://github.com/hendrycks/math
AQuA-RAT         | https://huggingface.co/datasets/deepmind/aqua_rat
AIME (2016-2024) | https://huggingface.co/datasets/di-zhang-fdu/AIME_1983_2024
FOLIO            | https://huggingface.co/datasets/yale-nlp/FOLIO
LogicNLI         | https://huggingface.co/datasets/tasksource/LogicNLI
ANLI             | https://huggingface.co/datasets/facebook/anli
HANS             | https://github.com/tommccoy1/hans
MMLU             | https://huggingface.co/datasets/sam-paech/mmlu-pro-nomath-sml
BBH              | https://github.com/suzgunmirac/BIG-Bench-Hard/tree/main/bbh

I.2 Downstream Evaluation: Best-of-K

This section provides detailed settings of the Best-of-K experiments in Section 4.

Initial generation prompts. Table 12 shows detailed settings of generating K = 5 responses for the Best-of-K evaluation in Section 4.1. We create new few-shot examples or modify few-shot demonstrations used in prior work to enhance the quality and to simplify the post-processing procedure. For example, we add line breaks between reasoning steps in all tasks. An example prompt for GSM8K is provided in Appendix E.1. Please also refer to our code for further details. A sketch of the re-ranking procedure follows.
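As a concrete illustration of Best-of-K re-ranking with step-level scores, here is a minimal sketch. Aggregating a solution's step scores by taking their minimum is an assumption on our part, consistent with the solution-level scores reported in Appendix J.3; the function names are ours.

```python
def best_of_k(step_scores_per_response: list[list[float]]) -> int:
    """Return the index of the response with the highest solution-level
    score, taking a solution's score to be its minimum step-level score."""
    solution_scores = [min(scores) for scores in step_scores_per_response]
    return max(range(len(solution_scores)), key=lambda i: solution_scores[i])

# Example with K = 3 candidate responses:
candidates = [[0.99, 0.35, 0.94], [0.98, 0.95, 0.97], [0.90, 0.88, 0.10]]
assert best_of_k(candidates) == 1  # highest minimum step score
```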
Table 12: Detailed settings for the Best-of-K downstream evaluation

Dataset  | Few-shot Examples for Initial Generation | Answer Matching
GSM8K    | [13] (https://github.com/kojima-takeshi188/zero_shot_cot/blob/5ef330fcdeec0cd26aee27943504f91f8ec1c33c/utils.py#L328) | Exact match after extraction and conversion to integer
MATH     | [16, Appendix D.2]                       | [16, Appendix G]
AQuA-RAT | Made by us (3-shot)                      | Exact match after extraction
AIME     | Made by us (3-shot)                      | Exact match after extraction and conversion to integer
FOLIO    | Made by us (2-shot)                      | Exact match after extraction
LogicNLI | Made by us (3-shot)                      | Exact match after extraction
ANLI     | Made by us (3-shot)                      | Exact match after extraction
HANS     | Made by us (2-shot)                      | Exact match after extraction
MMLU     | Made by us (4-shot)                      | Exact match after extraction
BBH      | [45] (https://github.com/suzgunmirac/BIG-Bench-Hard/blob/main/cot-prompts) | Exact match after extraction

J Additional Results

This section provides additional results.

J.1 Reference Methods for Best-of-K: Majority-of-K and Pass@K

Table 13 shows the performance of reference methods on the datasets evaluated in Section 4.1.

Majority@K. Majority@K (or self-consistency) [51] takes the majority vote of the final answers. It is known to be a strong baseline and is competitive with the Best-of-K performance of our verifiers.

Oracle Verification. Oracle Verification (or Pass@K) is the performance when we always select the solution with the correct final answer if it exists, which is the upper bound of Best-of-K.

Table 13: Performance of reference methods on the datasets in Table 3.

(a) PRMs based on Llama 3.1 8B select the best response from K = 5 responses generated by Llama 3.1 8B

Methods             | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | MMLU Pro-NoMath | BBH Temporal | BBH Tracking | BBH Sorting | Average
FOVER PRM           | 86.4  | 43.2 | 65.7 | 4.0  | 64.0  | 44.8     | 28.8 | 82.8 | 57.2            | 97.6         | 93.2         | 38.4        | 58.8
Majority@K          | 89.2  | 44.4 | 66.1 | 4.8  | 62.6  | 46.0     | 33.2 | 76.4 | 58.8            | 92.8         | 94.0         | 42.4        | 59.2
Oracle Verification | 97.2  | 58.4 | 83.9 | 8.8  | 90.1  | 83.6     | 54.4 | 94.4 | 75.2            | 100.0        | 98.4         | 60.0        | 75.4

(b) PRMs based on Qwen 2.5 7B select the best response from K = 5 responses generated by Qwen 2.5 7B

Methods             | GSM8K | MATH | AQuA | AIME | FOLIO | LogicNLI | ANLI | HANS | MMLU Pro-NoMath | BBH Temporal | BBH Tracking | BBH Sorting | Average
FOVER PRM           | 92.8  | 66.4 | 82.3 | 13.2 | 65.5  | 48.8     | 32.4 | 85.2 | 60.8            | 91.6         | 89.6         | 31.2        | 63.3
Majority@K          | 91.6  | 65.2 | 83.9 | 12.8 | 65.0  | 50.0     | 32.4 | 84.0 | 61.6            | 88.4         | 86.4         | 31.2        | 62.7
Oracle Verification | 96.8  | 73.2 | 91.7 | 19.2 | 78.3  | 80.0     | 49.6 | 89.2 | 75.2            | 93.2         | 93.2         | 38.8        | 73.2

J.2 Hyperparameters that Influence the Cross-task Generalization of Training on FOVER

We analyze hyperparameters that may influence the cross-task generalization of FOVER. We train PRMs on the FOVER dataset with different learning rates, label distributions, and dataset sizes, and evaluate performance on the validation tasks: Orca-Math [26] and two tasks in BBH [45]. However, although these parameters influence the performance of PRMs, we do not observe any consistent trends, and the performance of different models on different evaluation datasets exhibits inconsistent behavior. These results indicate that hyperparameter tuning is necessary but remains non-trivial under the current training setup. We expect future research to explore more stable training strategies. Note that, for the models evaluated in Sections 4.2 and 4.1, we perform the hyperparameter search only over the learning rate, while fixing the label distribution to 0.5 and using the dataset size of 40k.

Learning rate. We evaluate PRMs trained on the FOVER datasets using different learning rates: 1e-6, 2e-6, 5e-6, and 1e-5. Figure 5a reports the average accuracy on the validation tasks for PRMs trained on the 40k-instance training set. We observe that the learning rate has a substantial
impact on the transferability and that the optimal learning rate varies across models and evaluation tasks. Label distribution. We evaluate PRMs trained on FOVERwith varying proportions of correct and incorrect step-level labels. Specifically, we construct versions of the FOVERdataset that include both formal logic and proof tasks, with 25%, 50%, and 75% correct labels, each containing 20k instances. Figure 5b reports the average accuracy on the validation tasks. We do not observe a consistent trend across label distributions; different models and evaluation datasets exhibit varying behavior. Dataset size. We evaluate PRMs trained on different sizes of the FOVERdataset, which includes both formal logic and proof tasks. Specifically, we train models on 5k, 10k, 20k, and 40k instances. To isolate the effect of training data diversity, all models are trained for the same number of steps. Figure 5c presents the accuracy on the validation tasks, showing that larger and more diverse training data often improves cross-task generalization. However, the results also indicate that increased diversity can negatively impact performance in Orca-Math, which is counterintuitive and highlights that cross-task generalization is inherently complex and highly task-dependent. (a) Learning rate. (b) Label distribution. (c) Dataset size. Figure 5: Influence of hyperparameters on the cross-task generalization of PRMs trained on FOVER. We do not observe consistent trends across different models and evaluation datasets. The thick lines represent the average accuracy in the validation tasks. The dashed lines represent performance on validation tasks: Orca-Math, Logical Deduction, and Boolean Expressions. 34 J.3 Manual Analysis We provide the detailed results of the manual analysis in Section 4.4. We analyze cases where FOVER PRMs improve or degrade re-ranking in Best-of-K with regard to the final answers from the baseline PRMs based on the original LLMs. We categorize the reasoning for the improvement and degradation of the final answers into the following categories: The baseline PRMs assign wrong step-level scores, although FOVERPRMs assign reasonable step-level scores, which represent the genuine improvement by F OVER. The original questions are ambiguous or the ground-truth labels are incorrect. The selected solutions with the correct final answers include incorrect intermediate steps. PRMs are expected to assign low scores to those solutions, although they produce the correct final answers. Both PRMs assign incorrect step-level scores, with the baseline selecting the correct answer by chance. FOVERPRMs assign wrong step-level scores, although the baseline PRMs assign reasonable step-level scores, which represent the genuine improvement by F OVER. Only the first and last categories represent cases where FOVERPRMs genuinely improve or degrade step-level verification performance compared to the baseline PRMs. Figure 6 shows the results of the manual analysis. We observe that FOVERPRMs often improve step-level verification while rarely degrade step-level scores. We note that GSM8K results of Qwen 2.5 7B and the temporal sequence results of both models only have three cases where FOVERdegrades the final answers. Therefore, the 33% degradation in these cases represents that FOVERdegrades step-level scores only in one case out of 250 solutions evaluated in Best-of-K, which is a positive result. (a) F OVERPRMs select solutions with correct final answers while baseline PRMs select | https://arxiv.org/abs/2505.15960v1 |
incorrect ones. (b) F OVERPRMs select solutions with incorrect final answers while baseline PRMs select correct ones. Figure 6: Manual analysis of step-level scores in cases where FOVERPRMs improve (6a) or degrade (6b) re-ranking in Best-of-K with regard to the final answers. The blue and red areas represent the cases in which FOVERgenuinely improves and degrades the step-level verification scores. The other cases do not represent genuine improvement or degradation. : The original questions are ambiguous or the ground-truth labels are incorrect. : The selected solutions with the correct final answers contain incorrect intermediate steps. : Both PRMs assign incorrect step-level scores. We provide an example for each category. 35 The baseline PRMs assign wrong step-level scores, although FOVERPRMs assign reasonable step-level scores. The following example shows a response from Qwen 2.5 7B on ANLI. In this example, Step 1 of the above solution is wrong, which is correctly identified by FOVERPRMs while the baseline PRM wrongly put a high score. In addition, in the second solution, the baseline PRM puts a low score to Step 2, which is correct. This case shows an example FOVERimproves step-level verification. Problem: Premise: Cooper Manning SWAG-JACKED Don ’Magic’ Juan??? Call the fashion police ... ’cause Cooper Manning straight-up STOLE his Super Bowl swag from Snoop’s spiritual adviser, Bishop Don ""Magic"" Juan! Peyton and Eli’s older bro was cruisin’ around Super Bowl media night this week in a money suit – the same kind of outfit we’ve seen The Bishop rockin’ for years. Twinsies. Hypothesis: Payton and Eli Manning are both older than Cooper Manning. Question: do the premises derive the hypothesis? Provide reasoning and answer with either ""entailment"", ""contradiction"" or ""neutral"". Ground truth: neutral Step ExplanationBaseline PRMFOVER PRM Solution selected by the baseline PRM (wrong final answer) Step 0Premise states, "Peyton and Eli’s older bro was cruisin’ around Super Bowl media night this week in a money suit," implying that Cooper Manning is the brother of Peyton and Eli Manning.1.0000 0.9609 Step 1Since Cooper Manning is described as the brother, it follows that Peyton and Eli Manning are older than Cooper Manning.1.0000 0.5625 Step 2Therefore, the premise supports the hypothesis that Peyton and Eli Manning are both older than Cooper Manning.1.0000 0.9961 Step 3 The final answer is entailment. 1.0000 1.0000 Solution-level Score 1.0000 0.5625 Solution selected by F OVERPRMs (correct final answer) Step 0Premise states: "Peyton and Eli’s older bro was cruisin’ around Super Bowl media night this week in a money suit." This implies that Cooper Manning is the older brother of Peyton and Eli Manning.0.9922 0.9258 Step 1Given that Cooper Manning is the older brother, it follows that Peyton and Eli Manning are both younger than Cooper Manning.1.0000 0.9922 Step 2Therefore, the hypothesis "Peyton and Eli Manning are both older than Cooper Manning" is contradicted by the premise.0.0000 0.9844 Step 3 The final answer is contradiction. 0.9531 0.9961 Solution-level Score 0.0000 0.9258 36 The original questions are ambiguous or the ground-truth labels are incorrect. The following example shows a response generated from Llama 3.1 8B on ANLI. In this case, the ground | https://arxiv.org/abs/2505.15960v1 |
truth answer is “neutral”. However, this problem is ambiguous, and both neutral and entailment are reasonable final answers. Therefore, this case does not represent a degradation by F OVER. Problem: Premise: Trying Tofu<br>Alicia was feeling a little adventurous. She wanted to try Tofu. She had never had it before. So she took a small bite. To her delight it was delicious. Hypothesis: Tofu contains the letters to Question: do the premises derive the hypothesis? Provide reasoning and answer with either ""entailment"", ""contradiction"" or ""neutral"". Ground truth: neutral Step ExplanationBaseline PRMFOVER PRM Solution selected by the baseline PRM (correct final answer) Step 0The premise does not mention the letters in the word "tofu" at all. It talks about Alicia trying tofu and enjoying it.0.3496 0.7773 Step 1 The premise does not provide any information about the letters in the word "tofu." 0.9922 0.9922 Step 2 Therefore, the final answer is neutral. 0.9453 0.9766 Solution-level Score 0.3496 0.7773 Solution selected by F OVERPRMs (wrong final answer) Step 0 The premise mentions the word "tofu", which contains the letters "to". 0.2813 0.9805 Step 1The premise does not say anything about the taste, nutritional content, or any other properties of tofu, but it does provide evidence that the word "tofu" contains the letters "to".0.9922 0.9727 Step 2 Therefore, the final answer is entailment. 0.5313 0.9570 Solution-level Score 0.2813 0.9570 37 Solutions selected by the baseline contain incorrect intermediate steps, although producing the correct final answers. The following example shows a response generated from Llama 3.1 8B on the word sorting task in BBH. In the first solution, the final answer is correct; however, the first step includes ["farther" "insolent"] in the sorted list, which is incorrect. When intermediate steps contain such errors, PRMs are expected to assign lower scores, potentially causing them to fail in selecting solutions with correct final answers. This behavior is expected and does not indicate that FOVERdegrades verification performance. Problem: Sort the following words alphabetically: List: stephen amperage protocol crimea farther raillery tech insolent ping Ground truth: amperage crimea farther insolent ping protocol raillery stephen tech Step ExplanationBaseline PRMFOVER PRM Solution selected by the baseline PRM (correct final answer) Step 0First, we look at the first letters of each word. "stephen": "s" (19th letter). "amperage": "a" (1st letter). "protocol": "p" (16th letter). "crimea": "c" (3rd letter). "farther": "f" (6th letter). "raillery": "r" (18th letter). "tech": "t" (20th letter). "insolent": "i" (9th letter). "ping": "p" (16th letter). We now have: ["amperage"] < "crimea" < ["farther" "insolent"] < ["ping" "protocol"] < "raillery" < "stephen" < "tech".0.5938 0.1641 Step 1Now let’s sort this subpart ["farther" "insolent"] by looking at their second letters. The second letter: "farther": "a" (1st letter). "insolent": "n" (14th letter). We now have: "farther" < "insolent" < ["ping" "protocol"] < "raillery" < "stephen" < "tech".0.9883 0.6523 Step 2Now let’s sort this subpart ["ping" "protocol"] by looking at their second letters. The second letter: "ping": "i" (9th letter). "protocol": "r" (18th letter). We now have: "ping" < "protocol" < "raillery" < "stephen" < "tech".0.9844 0.9258 Step 3So the answer is amperage < crimea | https://arxiv.org/abs/2505.15960v1 |
< farther < insolent < ping < protocol < raillery < stephen < tech.0.9961 0.7305 Solution-level Score 0.5938 0.1641 Solution selected by F OVERPRMs (wrong final answer) Step 0First, we look at the first letters of each word. "stephen": "s" (19th letter). "amperage": "a" (1st letter). "protocol": "p" (16th letter). "crimea": "c" (3rd letter). "farther": "f" (6th letter). "raillery": "r" (18th letter). "tech": "t" (20th letter). "insolent": "i" (9th letter). "ping": "p" (16th letter). We now have: ["amperage"] < "crimea" < "farther" < ["protocol" "ping"] < "insolent" < "raillery" < "stephen" < "tech".0.5625 0.1826 Step 1Now let’s sort this subpart ["protocol" "ping"] by looking at their second letters. The second letter: "protocol": "r" (18th letter). "ping": "i" (9th letter). We now have: "amperage" < "crimea" < "farther" < "ping" < "protocol" < "insolent" < "raillery" < "stephen" < "tech".0.9961 0.7305 Step 2Now let’s sort this subpart ["ping" "protocol"] by looking at their third letters. The third letter: "ping": "n" (14th letter). "protocol": "o" (15th letter). We now have: "amperage" < "crimea" < "farther" < "ping" < "protocol" < "insolent" < "raillery" < "stephen" < "tech".0.9727 0.9844 Step 3So the answer is amperage < crimea < farther < ping < protocol < insolent < raillery < stephen < tech.0.9922 0.8672 Solution-level Score 0.5625 0.1826 38 Both PRMs assign incorrect step-level scores, with the baseline selecting the correct answer by chance. The following example shows a response generated from Llama 3.1 8B on GSM8K. In this case, both PRMs assign high scores to both solutions, indicating that neither effectively distinguishes correctness. The baseline PRM happens to select the better solution, but this is due to chance rather than meaningful score differences. Since both PRMs are unsuccessful, this case does not show the degradation by F OVER. Problem: Mark is a copy-editor. He edits an equal number of sentences each week for two different publishers, who each pay him a different rate per sentence. Publisher B pays Mark twice what Publisher A pays. Mark edits a total number of 1000 sentences each week, and Publisher A pays him 5 cents per sentence. How much does Mark make in a week, in cents? Ground truth: 7500 Step ExplanationBaseline PRMFOVER PRM Solution selected by the baseline PRM (correct final answer) Step 0 Mark edits a total of 1000 sentences each week. 0.9922 1.0000 Step 1He edits an equal number of sentences for both Publisher A and Publisher B, so he edits 1000 / 2 = 500 sentences for Publisher A and 500 sentences for Publisher B.1.0000 0.9258 Step 2 Publisher A pays 5 cents per sentence, so Mark earns 500 * 5 = 2500 cents from Publisher A. 1.0000 0.9961 Step 3Publisher B pays twice what Publisher A pays, so Publisher B pays 2 * 5 = 10 cents per sentence. Mark earns 500 * 10 = 5000 cents from Publisher B.1.0000 0.9922 Step 4 In total, Mark earns 2500 + 5000 = 7500 cents per week. 1.0000 0.9922 Step 5 Therefore, the answer (arabic numerals) is 7500. 1.0000 0.9727 Solution-level Score 0.9922 0.9258 Solution selected by | https://arxiv.org/abs/2505.15960v1 |
F OVERPRMs (wrong final answer) Step 0 Mark edits 1000 sentences each week. 0.9922 1.0000 Step 1Publisher A pays him 5 cents per sentence, so Publisher A pays him 1000 x 5 = 5000 cents per week.1.0000 0.9883 Step 2Since Publisher B pays twice what Publisher A pays, Publisher B pays 2 x 5000 = 10,000 cents per week.0.9961 0.9766 Step 3 The total amount Mark makes in a week is 5000 + 10,000 = 15,000 cents. 0.9961 0.9453 Step 4 Therefore, the answer (arabic numerals) is 15000. 1.0000 0.9453 Solution-level Score 0.9922 0.9453 39 FOVERPRMs assign wrong step-level scores, although the baseline PRMs assign reasonable step-level scores. The following example shows a response generated by Llama3.18B on GSM8K. In this case, the F OVERPRM assigns a relatively low score to Step 3 of the first solution, even though the step is correct. It also fails to assign a low score to Step 0 of the second solution, which contains an error. This represents a relatively rare instance in which the FOVERPRM genuinely degrades step-level verification performance. Problem: Each person in a certain household consumes 0.2 kg of rice every meal. Supposing 5 members of the household eat rice every lunch and dinner, how many weeks will a 42 kg bag of rice last? Ground truth: 3 Step ExplanationBaseline PRMFOVER PRM Solution selected by the baseline PRM (correct final answer) Step 0Each person consumes 0.2 kg of rice per meal. There are 2 meals per day (lunch and dinner), so each person consumes 0.2 * 2 = 0.4 kg of rice per day.0.9688 0.9961 Step 1There are 5 members in the household, so the total rice consumption per day is 0.4 * 5 = 2 kg.1.0000 0.9961 Step 2 There are 7 days in a week, so the total rice consumption per week is 2 * 7 = 14 kg. 0.9961 1.0000 Step 3 A 42 kg bag of rice will last for 42 / 14 = 3 weeks. 0.9922 0.8516 Step 4 Therefore, the answer (arabic numerals) is 3. 1.0000 0.9961 Solution-level Score 0.9688 0.8516 Solution selected by F OVERPRMs (wrong final answer) Step 0Each person consumes 0.2 kg of rice per meal, and there are 3 meals a day (breakfast, lunch, and dinner).0.7656 0.8945 Step 1 So, each person consumes 0.2 * 3 = 0.6 kg of rice per day. 1.0000 1.0000 Step 2 Since there are 5 members, the total rice consumption per day is 0.6 * 5 = 3 kg. 1.0000 0.9961 Step 3 A 42 kg bag of rice will last for 42 / 3 = 14 days. 0.9609 0.9531 Step 4 Since there are 7 days in a week, the rice will last 14 / 7 = 2 weeks. 0.9961 0.9805 Step 5 Therefore, the answer (arabic numerals) is 2. 1.0000 0.9883 Solution-level Score 0.7656 0.8945 40 K License We release our dataset under Creative Commons Attribution 4.0 International and our code under Apache License 2.0. Our dataset and code are based on the following resources. We consider our license to be (one-way) compatible with all | https://arxiv.org/abs/2505.15960v1 |
licenses listed below.

Datasets. Our training data is based on the following datasets.
• FLDx2 [27]: Creative Commons Attribution 4.0 International (https://github.com/hitachi-nlp/FLD-corpus/blob/neurips_2025/LICENSE)
• GSM8K [6]: MIT (https://github.com/openai/grade-school-math/blob/master/LICENSE)
• MetaMathQA [62]: MIT (https://huggingface.co/datasets/meta-math/MetaMathQA/blob/main/README.md)
• Big-Math [1]: Apache License 2.0 (https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified/blob/main/README.md)

Code and packages. Our code is partially based on the following resources.
• FLD [27]: Apache License 2.0 (https://github.com/hitachi-nlp/FLD/blob/neurips_2025/LICENSE)
• Isabelle: BSD-style regulations (https://isabelle.in.tum.de/)
• Neural theorem proving tutorial [54]: MIT (https://github.com/wellecks/ntptutorial/blob/main/LICENSE)
• DTV [68]: MIT (https://github.com/jinpz/dtv/blob/main/LICENSE)

L Computational Resources and Execution Time

LLM training and inference. We use four NVIDIA A100 SXM4 80GB GPUs for training and inference. Training each 8B-class model on our dataset takes approximately 30 minutes using our training data, which consists of 10,000 instances. Evaluation requires considerably more time; reproducing all the training and evaluation in this paper will take approximately three days, excluding the dataset creation.

Dataset Creation. Dataset creation in the formal logic tasks is efficient, and the verification for our dataset takes a few hours. However, verification of formal proofs using Isabelle is CPU-intensive and takes a long time. We run 40 parallel Isabelle processes across four servers, verifying approximately 200 proofs per hour. The servers have different CPUs, with the most powerful one equipped with two AMD EPYC 7763 64-Core Processors. The verification of our formal proof dataset takes about two weeks in total. We can accelerate the verification by running more Isabelle processes in parallel.
arXiv:2505.15962v1 [cs.CL] 21 May 2025

Pre-training Large Memory Language Models with Internal and External Knowledge

Linxi Zhao∗, Sofian Zalouk, Christian K. Belardi, Justin Lovelace, Jin Peng Zhou, Kilian Q. Weinberger, Yoav Artzi, Jennifer J. Sun
Department of Computer Science, Cornell University
{lz586, saz43, ckb73, jl3353, jz563, kilian, jjs533}@cornell.edu, yoav@cs.cornell.edu

Abstract

Neural language models are black-boxes – both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to reliably inspect, verify, or update specific facts. We propose a new class of language models, Large Memory Language Models (LMLM)†, with a pre-training recipe that stores factual knowledge in both internal weights and an external database. Our approach strategically masks externally retrieved factual values from the training loss, thereby teaching the model to perform targeted lookups rather than relying on memorization in model weights. Our experiments demonstrate that LMLMs achieve competitive performance compared to significantly larger, knowledge-dense LLMs on standard benchmarks, while offering the advantages of explicit, editable, and verifiable knowledge bases. This work represents a fundamental shift in how language models interact with and manage factual knowledge.

1 Introduction

Large language models (LLMs) encode very different capabilities in their neural weights: vast factual knowledge and linguistic competency. These capabilities are intertwined inside the model, with significant implications for both training and inference. For instance, facts need to be observed many times during training to be reliably remembered, and it is difficult to forget specific facts after training. This tight coupling of knowledge and linguistic ability makes updating one without affecting the other extremely challenging. As an alternative, related work has studied offloading knowledge, such as nearest-neighbor language models that maintain access to the entire training corpus [27, 30]. However, these works do not distinguish between different types of knowledge and require constant, indiscriminate look-ups.

We take a middle ground between storing knowledge internally or externally: instead of retrieving for every token, we explicitly retrieve facts, and allow the language model to learn when to best leverage its internal linguistic knowledge versus when to access an external knowledge base. Consider the sentence "Napoleon was born on August 15th, 1769 in Ajaccio, Corsica to Letizia Bonaparte" (Figure 1). The common knowledge here includes general facts: that Napoleon had a mother who gave birth to him at a certain place and time. In contrast, the specific knowledge includes the exact date, location, and names. This distinction is not merely about frequency.

∗Correspondence to: zhaolx19@cs.cornell.edu
†LMLM stands for Large Memory Language Model and is pronounced "LamLam".
‡We will open-source our code and models at https://github.com/kilian-group/LMLM.

Preprint. Under review.

[Figure 1 graphic omitted.] Figure 1: Spectrum of knowledge representation in pre-training, from fully parametric (e.g., LLMs) to fully externalized (e.g., retrieval from a datastore). Our LARGE MEMORY LANGUAGE MODELS (LMLMs) are designed to both internalize knowledge and retrieve it from external sources.
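As a toy illustration of the externalized end of this spectrum, the database can be thought of as a lookup table keyed by (entity, relation) pairs. This is our own minimal example, not the paper's implementation, which uses fuzzy rather than exact matching (Section 2.3).

```python
# Specific knowledge lives in an external store; the model itself only
# needs the general pattern "X was born on <date>".
database: dict[tuple[str, str], str] = {
    ("Napoleon", "Birth Date"): "August 15, 1769",
    ("Napoleon", "Birth Place"): "Ajaccio, Corsica",
}

def lookup(entity: str, relation: str) -> str | None:
    """Exact-match lookup; the paper's retrieval is fuzzy (Section 2.3)."""
    return database.get((entity, relation))

print(lookup("Napoleon", "Birth Date"))  # -> August 15, 1769
```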
Common knowledge generalizes across contexts, making it well-suited for storage in the model's parameters. Specific knowledge is non-generalizable: it appears in narrow contexts, does not transfer, and contributes disproportionately to the memorization burden. Recognizing this asymmetry allows us to design models that are more parameter-efficient, verifiable, and editable.

We introduce LARGE MEMORY LANGUAGE MODELS (LMLM), a new class of language models with an externalized database aiming to decouple the memorization of specific knowledge from the model weights. We provide an integrated solution for LMLMs spanning data preparation, pre-training, and inference. To prepare the training data, we annotate the pre-training corpus with database lookups to offload factual content, using a small, cost-effective LM fine-tuned for this task. During pre-training, the returned facts are masked from the loss, systematically separating factual knowledge from the neural weights. During inference, instead of recalling memorized facts, the model queries the database.

We compare our LMLMs against their LLM counterparts of the same size, pre-trained on the same amount of data without external memory. We find that smaller LMLMs achieve comparable performance to LLMs much larger in size on both standard language modeling tasks as well as factual precision on FactScore [36] and T-REx [45]. For instance, LMLM with 382M parameters achieves comparable performance to LLAMA2-7B on these factual precision benchmarks. Additionally, a natural benefit of decoupling knowledge from model parameters is that editing or unlearning becomes a simple operation on the external memory. To verify this, we demonstrate that LMLM can achieve instant forgetting without compromising the general capabilities of the model on the TOFU machine unlearning benchmark [33].

While our current experiments are constrained to relatively modest datasets and model scales due to computational limitations, our results reveal compelling trends that suggest LMLMs offer a pathway toward language models with substantially reduced dependency on parameter count for factual accuracy. LMLMs have the potential to be integrated with techniques from broad fields, including knowledge representation [39], knowledge editing [59], symbolic reasoning [8], as well as mechanistic interpretability [1]. This approach presents promising opportunities for future research to capitalize on the real-time, verifiable knowledge update capabilities inherent to external knowledge databases, potentially transforming how language models store, access, and maintain knowledge.

2 LARGE MEMORY LANGUAGE MODELS

LMLM is a new class of language models designed to offload factual knowledge to an external database rather than store it implicitly in model parameters (Figure 2). We first describe the automated annotation during pre-training data preparation (Section 2.1), then discuss pre-training with lookup masking (Section 2.2), and finally how we perform inference with LMLMs (Section 2.3).

2.1 Data Preparation: Extracting Knowledge

We begin by extracting atomic entity-level factual knowledge from text and constructing a compact external database. These extracted facts are then interleaved with the original training corpus through explicit lookup calls.

[Figure 2 graphic omitted.] Figure 2: Overview of the LMLM framework. Our framework consists of (Left) Data Preparation, where entity-level facts are automatically annotated and stored in an external database; (Middle) Pretraining, where the model is trained on the annotated text while excluding return values from the loss to discourage memorization; and (Right) Inference, where the model interleaves text generation with database lookups to ground its outputs on retrieved facts.

Knowledge Specification. We primarily focus on entity-level atomic factual knowledge, a natural starting point within the broader LMLM framework. We define facts as triplets of the form: (entity, relation) → value. This level of granularity aligns with previous definitions [34] and represents the most compact and tractable form of factual knowledge to disentangle from the intertwined linguistic patterns in raw text. It also naturally maps to a knowledge graph structure, where triplets define nodes and edges [9]. These atomic facts are ideal for externalization: they are straightforward to extract and verify, yet hard to encode in the model parameters, making them well-suited for storage in an external database.

Automating Knowledge Extraction. Manually extracting factual triples and constructing knowledge graphs at the scale of pre-training data is a major challenge [21]. Additionally, the choice of which facts should be externalized and when the model should issue a database lookup is underexplored. To address this, we distill high-quality annotations from GPT-4o into a lightweight ANNOTATOR model that can scalably identify and externalize factual knowledge from raw text, through the following steps (Figure 3):

1. Seed annotation: We use GPT-4o to annotate a small seed dataset of M knowledge-intensive documents with lookup calls and return values. We use M = 1000 in our setting.
2. Filtering: We fine-tune a CORRECTOR model (LLAMA-3.1-8B-INSTRUCT) on the seed annotations to adapt it to the database lookup format. We intentionally underfit the model to identify and filter out lookup calls that are contextually unsupported, overly specific, or improperly formatted. This results in a clean dataset containing non-trivial facts suitable for externalization.
3. Annotation: We apply instruction-tuning to an ANNOTATOR model (also LLAMA-3.1-8B-INSTRUCT) on the cleaned data and use it to annotate the full pre-training corpus at scale.

See Appendix A.1 for additional details. This annotation process serves two purposes: (1) Database Construction: The extracted triplets form a token-efficient external database that scales with the size of the pre-training corpus. (2) Pre-training Corpus Generation: Lookup calls are interleaved with the original text, enabling the model to learn when to rely on internal knowledge and when to issue a lookup. A schematic of this annotated format is shown after Figure 3.

[Figure 3 graphic omitted.] Figure 3: Training the ANNOTATOR model. We distill high-quality annotations from GPT-4o into a lightweight model that learns to identify and externalize factual knowledge from raw pre-training text, enabling scalable annotation of the full corpus.
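The schematic below shows what an annotated training sequence and its database entry might look like. The concrete surface forms of the special tokens are placeholders of ours, since this excerpt only specifies that special tokens delimit the lookup call and return value.

```python
# Original pre-training text:
original = "Napoleon was born on August 15, 1769."

# Annotated text with an interleaved lookup call; <lookup>/</lookup> are
# placeholder special tokens, and the return value ("August 15, 1769")
# is what gets masked from the training loss (Section 2.2).
annotated = (
    "Napoleon was born on "
    "<lookup>(Napoleon, Birth Date) -> August 15, 1769</lookup> "
    "August 15, 1769."
)

# The same triplet becomes one entry of the external database:
db_entry = {("Napoleon", "Birth Date"): "August 15, 1769"}
```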
externalize factual knowledge from raw pre-training text, enabling scalable annotation of the full corpus. 3 2.5 5.0 7.5 10.012.515.0 Pretrain Tokens (B)6789101112Normalized Eval PPL ↓ 382M176M382M176M LMLM Standard35 40 45 NLU Average Performance1015202530FactScore (%)↑ GPT2-124M∗GPT2-355M∗GPT2-774M∗ LLaMA2-176MLLaMA2-176M LLaMA2-382MLLaMA2-382M Pythia-1B∗ LMLM Standard Off-the-shelf0.0 0.2 0.4 0.6 0.8 1.0 Model Utility10−2010−1710−1410−1110−810−510−2Forget Quality (p-value) ↑p= 0.05 GA GDIdkDPONPO SimNPOLMLM Ideal Unlearning Utility Trade-off Standard - Base Standard - UnlearnLMLM - Base LMLM - UnlearnFigure 4: Results overview. (Left) Normalized validation perplexity throughout training: LMLM achieves lower perplexity, indicating that offloading factual knowledge improves training efficiency. (Middle) FactScore vs. average natural language understanding (NLU) performance: LMLMsig- nificantly improves factual precision over its STANDARD counterparts without sacrificing NLU performance. (Right) Machine unlearning: LMLMforgets targeted facts while preserving general model performance. 2.2 Pre-training We adopt a standard next-token prediction setup with one critical modification: During pre-training, tokens corresponding to the retrieved factual values are excluded from the loss computation (see Appendix A.3). This design discourages LMLMs from memorizing facts that are offloaded to the external database. Instead, the model learns to ground its generation on retrieved lookup values, reducing the entropy of upcoming tokens, and making training more efficient. This effect is supported by our empirical observation of lower validation perplexity throughout training (see Section 3.2). Intuitively, when the model can rely on accurate externally provided facts, it no longer needs to expand capacity learning complex long-tail distributions of factual content. This insight is consistent with Rho-1 [31], which showed that not all tokens are beneficial to pre-training, and that focusing on informative tokens during training leads to better models. 2.3 Inference During inference, LMLMgenerates text autoregressively until a special token is produced, which triggers a database lookup. The model then generates a query of the form “lookup (entity ,relation )→”2and retrieves the corresponding value from the database. The retrieved value is then appended to the context, and the model resumes generation. To perform retrieval, we use fuzzy matching of the query in the ALL-MINILM-L6- V2sentence embedding space. Additional details are provided in Appendix A.4. 3 Experiments We evaluate LMLMs via validation perplexity (Section 3.2), factual precision (Section 3.3), as well as machine unlearning (Section 3.4), with summary of these results in Figure 4. We further discuss additional implications on L MLMin Section 4. 3.1 Experimental Setup Pretraining and Model Setup. We pretrain on a high-quality Wikipedia corpus ( ∼3Btokens) from the OLMo2 project3[18], and evaluate perplexity on a held-out set of 1,000 samples ( ∼245ktokens). We pre-train LMLMfrom scratch using GPT-2 and LLaMA2-style architectures with their standard tokenizers and vocabularies, extended by four special tokens for lookup calls. All models are trained 2Special tokens are used to format the query and retrieved value. 3https://huggingface.co/datasets/allenai/dolmino-mix-1124 4 for 8 epochs with a context length of 1,024-tokens, using mixed precision. For LLAMA2-176M andLLAMA2-382M , we use a batch size of 256 over 105k steps. 
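As a brief aside, the following is a minimal sketch of the loss masking described in Section 2.2 and formalized in Appendix A.3: return-value tokens and <|db_end|> receive the ignore label, so they contribute nothing to the cross-entropy. The token ids and helper name are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch: build next-token-prediction labels that exclude retrieved
# values and <|db_end|> from the loss (cf. Equation (1) in Appendix A.3).
import torch

IGNORE_INDEX = -100  # positions with this label are skipped by CrossEntropyLoss

def mask_return_values(input_ids: torch.Tensor,
                       db_retrieve_id: int,
                       db_end_id: int) -> torch.Tensor:
    """Copy input_ids into labels, masking value tokens and the closing marker."""
    labels = input_ids.clone()
    inside_value = False
    for i, tok in enumerate(input_ids.tolist()):
        if tok == db_retrieve_id:       # value span starts after this marker
            inside_value = True
        elif tok == db_end_id:          # <|db_end|> itself is also excluded
            labels[i] = IGNORE_INDEX
            inside_value = False
        elif inside_value:              # tokens of the retrieved value
            labels[i] = IGNORE_INDEX
    return labels
```

Everything else, including the query tokens for entity and relation, stays supervised, which is what lets the model learn when and how to issue lookups.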
Training completes within 8 H100-days. Full configuration details are in Appendix A.2.

Database and Retrieval Setting. We construct the database by annotating the entire pretraining corpus, resulting in 54.6M knowledge triplets. Retrieval uses fuzzy matching with cosine | https://arxiv.org/abs/2505.15962v1 |
similarity over ALL-MINILM-L6-V2 embeddings (with a rejection threshold of 0.6).

Baseline Comparisons. We consider the following pre-training settings:
• LMLM (Ours): Pre-trained on our annotated data with lookup calls, using the loss in Equation (1).
• STANDARD: Pre-trained on our data without lookup calls. All other settings are identical.
• Off-the-shelf Models: Models with publicly available pre-training weights, including OPENAI/GPT2-124M, OPENAI/GPT2-355M, OPENAI/GPT2-774M [46], PYTHIA-1B [3], LLAMA2-7B [14], and LLAMA3.1-8B [13]. OPENAI/GPT2-124M and OPENAI/GPT2-355M are comparable in size to our LMLM models. These models are marked with an asterisk (*) in the results tables.

3.2 Learning to Lookup Facts is Easier than Memorization

Evaluation Setup. We first evaluate our models using language modeling perplexity on a held-out Wikipedia validation set. As LMLM introduces lookup calls, its perplexity is not directly comparable to that of STANDARD. To ensure a fair comparison, we report three variants of perplexity:
• Static (Oracle): Assumes perfect lookup behavior, where LMLM always generates correct lookup calls and retrieves the correct values. This provides an optimistic lower bound. Perplexity is computed over all tokens excluding the lookup calls.
• Dynamic: Reflects actual model behavior during inference. Lookup calls are generated and executed in real time, capturing failures due to incorrect queries or failed retrievals. Perplexity is again calculated over all tokens except the lookup calls.
• Normalized: Measures the combined likelihood of generating the correct queries and subsequent text. Perplexity is computed over all tokens except for the retrieved values, and normalized by the number of tokens in the original unannotated text. See Appendix A.3 for formal definitions.

Figure 5: Validation perplexity comparison between STANDARD and LMLM on three variants of perplexity. Lower perplexity indicates better performance.

Perplexity Results. Figure 5 reports validation perplexities for both LMLM and STANDARD models. If LMLM were benefiting from retrievals without learning to query correctly, we would expect Static perplexity to drop while Dynamic and Normalized worsen. Improvements across all variants indicate that LMLM is learning both to query and to generate more effectively. We observe that LMLM consistently achieves lower perplexity than STANDARD across all model sizes and perplexity variants. In particular, LMLM achieves an average perplexity reduction of 1.98 points under the Dynamic setting, demonstrating its effectiveness even with imperfect lookup calls. The Normalized variant highlights that LMLM assigns higher likelihoods to both lookup queries and grounded text, indicating improved training efficiency. These results support a key insight: learning to look up specific facts is easier than memorizing them. Lastly, the observed gap between Dynamic and Static perplexities highlights potential for improvements to annotation quality and retrieval method.

Downstream NLU performance. We compare LMLM and STANDARD models on five standard natural language understanding (NLU) tasks (Table 9). This evaluation serves as a sanity check to ensure that separating factual knowledge during pretraining does not come at the expense of general language understanding. | https://arxiv.org/abs/2505.15962v1 |
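As a concrete illustration of the lookup step from Section 2.3 and Appendix A.4, here is a minimal sketch using the sentence-transformers package; the in-memory database layout and function names are assumptions for the sketch, not the released implementation.

```python
# Minimal sketch of fuzzy-matching retrieval over (entity, relation) keys,
# with the paper's rejection threshold of 0.6.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# (entity, relation) -> value triplets; keys are embedded once, offline.
triplets = {("Napoleon", "Birth Date"): "August 15, 1769"}
keys = list(triplets)
key_emb = encoder.encode([f"{e}, {r}" for e, r in keys], convert_to_tensor=True)

def lookup(entity: str, relation: str, threshold: float = 0.6) -> str:
    """Return the best-matching value, or 'unknown' below the rejection threshold."""
    query_emb = encoder.encode(f"{entity}, {relation}", convert_to_tensor=True)
    scores = util.cos_sim(query_emb, key_emb)[0]
    best = int(scores.argmax())
    return triplets[keys[best]] if float(scores[best]) >= threshold else "unknown"

print(lookup("Napoleon", "Birthday"))  # fuzzy match -> "August 15, 1769"
```

At inference time, generation pauses when the model emits the lookup special tokens, the decoded (entity, relation) pair is passed to a function like this, and the returned string is appended to the context before decoding resumes. With the lookup mechanics in place, we return to the downstream comparisons.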
We find that LMLM performs on par with STANDARD across all tasks, confirming that factual offloading preserves the model’s general-purpose capabilities.

Table 1: Evaluations on factual precision. FactScore evaluates open-ended biography generation with | https://arxiv.org/abs/2505.15962v1 |
factual precision, while T-REx tests factual statement completion. The FActScore columns correspond to open-ended generation; Exact Match and Precision@1 correspond to T-REx knowledge completion. Differences in parentheses indicate the absolute difference compared to the respective STANDARD baselines.

| Model | Model Type | FActScore ↑ | FActScore w/o len. penalty ↑ | # Facts / Response ↑ | Exact Match ↑ | Precision@1 ↑ |
|---|---|---|---|---|---|---|
| OPENAI/GPT2-124M* | – | 14.6 | 14.7 | 24.2 | 20.1 | 20.3 |
| GPT2-124M | STANDARD | 10.7 | 11.3 | 36.7 | 41.2 | 35.8 |
| GPT2-124M | LMLM | 20.6 (+9.9) | 20.9 (+9.6) | 43.4 | 54.6 (+13.4) | 40.3 (+4.5) |
| LLAMA2-176M | STANDARD | 10.1 | 11.8 | 23.7 | 46.3 | 30.6 |
| LLAMA2-176M | LMLM | 30.6 (+20.5) | 32.4 (+20.6) | 31.4 | 54.1 (+7.8) | 34.5 (+3.9) |
| OPENAI/GPT2-355M* | – | 15.2 | 15.2 | 24.6 | 28.4 | 29.6 |
| GPT2-355M | STANDARD | 14.4 | 14.4 | 38.9 | 44.9 | 38.9 |
| GPT2-355M | LMLM | 23.9 (+9.5) | 25.0 (+10.6) | 40.4 | 58.7 (+13.8) | 44.0 (+5.1) |
| LLAMA2-382M | STANDARD | 14.0 | 16.6 | 32.4 | 52.0 | 34.5 |
| LLAMA2-382M | LMLM | 31.9 (+17.9) | 32.7 (+16.1) | 33.7 | 58.1 (+6.1) | 34.6 (+0.1) |
| OPENAI/GPT2-774M* | – | 17.4 | 17.4 | 49.4 | 35.6 | 31.9 |
| PYTHIA-1B* | – | 21.1 | 21.1 | 50.6 | 47.8 | 37.9 |
| LLAMA2-7B* | – | 34.0 | 35.1 | 33.9 | 60.5 | 44.4 |
| LLAMA3.1-8B* | – | 40.3 | 40.5 | 38.5 | 67.3 | 60.6 |

* Models marked with an asterisk (*) are off-the-shelf models with no additional training.

3.3 Externalizing Knowledge Improves Factual Precision

FactScore Results. We evaluate factual precision on an open-ended biography generation task using prompts such as “Tell me a bio of Kang Ji-hwan. Kang Ji-hwan is”, and assess model outputs using the FactScore metric [36]. FactScore extracts atomic facts from generated text and measures the proportion that are supported by a trusted knowledge source. Table 1 shows that LMLM substantially improves factual precision over STANDARD across all models. In particular, it outperforms STANDARD by +20.5% on LLAMA2-176M and +17.9% on LLAMA2-382M, demonstrating that even small models can produce factually accurate outputs when equipped with lookup capabilities. Remarkably, its performance approaches that of much larger models like PYTHIA-1B and LLAMA2-7B, despite having far fewer parameters.

T-REx Results. Additionally, we evaluate using the T-REx subset of the LAMA benchmark [45], which contains 11,615 samples. The task is to complete a short statement with a missing fact (e.g., a date or a place). For example, the statement “Jaideep Sahni (born 1968) is an Indian [MASK]” should be completed with the word “actor”. Following Schick et al. [50], we report Exact Match, which checks if the correct word appears within the first five predicted words, and Precision@1, which measures whether the model’s first predicted token matches the first token of the correct answer. In Table 1, we observe that LMLM achieves large improvements on T-REx across all models. Specifically, it achieves an average gain of 10.3% on Exact Match, demonstrating LMLM’s advantage on knowledge-intensive tasks. As our models are trained solely on Wikipedia, direct comparison to off-the-shelf models trained on broader web corpora is not entirely equitable. Nonetheless, the comparison between LMLM and STANDARD remains valid, as both share the same training data.

Table 2: Comparison of RAG vs. LMLM on factual precision. Results are shown for GPT2-355M; differences in parentheses are relative to LMLM.

| Model | FactScore (%) ↑ | T-REx EM (%) ↑ |
|---|---|---|
| STANDARD | 15.2 | 28.4 |
| STANDARD + RAG | 20.1 (−3.8) | 75.8 (+17.1) |
| LMLM | 23.9 | 58.7 |

How Does LMLM Compare to RAG? While both LMLM and retrieval-augmented generation (RAG) incorporate external knowledge, their motivations differ fundamentally. RAG retrieves documents at inference time and prepends them to the model input [29, 23], requiring | https://arxiv.org/abs/2505.15962v1 |
the LM to extract relevant information implicitly. In contrast, LMLM is trained end-to-end to issue structured lookup calls only when factual retrieval is needed, interleaving retrieval and generation at the entity level. This makes LMLM more precise and efficient: it avoids unnecessary retrievals, reduces token overhead, and discourages memorization of externalized facts. By contrast, RAG often retrieves irrelevant content and may inject information even when none is needed. That said, RAG is better suited to settings where relevant knowledge is long-form or unstructured, such as open-domain QA or multi-hop reasoning. These settings fall outside LMLM’s current scope. Finally, we note that RAG and LMLM are complementary: RAG can be applied on top of LMLM for further gains.

Figure 6: Evaluation of Machine Unlearning. We compare LMLM with NPO on the TOFU benchmark. Unlike prior methods, LMLM performs unlearning without any additional training. (a–b) Forget quality vs. utility trade-off. LMLM achieves ideal forgetting (p-value > 0.05) without sacrificing general utility. (c–d) LMLM retains knowledge outside the forget set, unlike other methods that degrade retain-set performance due to parameter entanglement. Y-axis labels use – or ↑ to indicate retention or improvement. For metrics like Forget ROUGE, where lower isn’t always better, we mark the retain model’s performance as reference.

To understand how LMLM compares empirically, we implement a standard RAG baseline that retrieves the top-4 most relevant Wikipedia articles and prepends them to a standard LLM during generation. While RAG is not the focus of our study, we include it as a point of reference. As shown in Table 2, LMLM achieves higher scores on FactScore at the current model scale, indicating the effectiveness of offloading factual knowledge at the entity level. However, RAG performs better on T-REx, possibly because retrieving full Wikipedia passages, rather than entity-level facts, provides broader coverage for knowledge completion tasks. We observe that small RAG models often struggle to make effective use of retrieved context and can be distracted by irrelevant information. Nonetheless, RAG exhibits encouraging levels of factuality, especially with instruction-tuned models.

3.4 Machine Unlearning: LMLM Supports Instant Forgetting

One natural benefit of decoupling knowledge from model parameters is that editing and unlearning are achievable through simple operations on the external memory, without compromising the model’s general capabilities (a minimal sketch of this appears below). To verify this, we extend our evaluation to a standard machine unlearning benchmark, TOFU [33].

Evaluation Setup. TOFU evaluates unlearning efficacy in a privacy-sensitive setting, where the goal is to selectively forget a targeted subset of information (the Forget Set) while preserving performance on the Retain Set and maintaining general model capabilities. The objective is to produce an unlearned model that is statistically indistinguishable from a model trained solely on the Retain Set (referred to as the retain model). The benchmark consists of 200 synthetic author profiles, each containing 20 QA pairs. | https://arxiv.org/abs/2505.15962v1 |
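Since the facts live in the external database rather than in the weights, the unlearning operation evaluated in this section reduces to deleting rows from the triplet store. A minimal sketch, with the database schema and helper name as illustrative assumptions:

```python
# Sketch of unlearning as database deletion (Section 3.4). The dict-based
# schema is an assumption for the sketch, not the paper's implementation.
def unlearn(database: dict, forget_entities: set) -> dict:
    """Drop every (entity, relation) -> value entry whose entity is in the Forget Set.
    No gradient updates and no access to the Retain Set: the model is untouched."""
    return {
        (entity, relation): value
        for (entity, relation), value in database.items()
        if entity not in forget_entities
    }

db = {("Author A", "Birthplace"): "Lisbon", ("Author B", "Genre"): "Mystery"}
db = unlearn(db, forget_entities={"Author A"})
# Subsequent lookups about "Author A" now fall back to "unknown", so the
# model's generations are no longer grounded in the deleted facts.
```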
The benchmark evaluates two key aspects:
• Model utility: the average of three metrics, ROUGE (answer quality), Answer Probability, and Truth Ratio (likelihood assigned to the correct answer over distractors), measured on the Retain Set, Real Author Set, and World Facts Set.
• Forget quality: the p-value of a statistical test comparing the unlearned model with the corresponding retain model to assess whether the targeted knowledge has been effectively removed.

We use LLAMA-3.2-1B-INSTRUCT as the base model and compare against NPO [60], a state-of-the-art unlearning method. For LMLM, we perform unlearning by simply removing entries in the database corresponding to the Forget Set. See Appendix B.2 for implementation details.

Unlearning Results. Figure 6 presents the results on the TOFU benchmark, where the Forget Set is 5% of the data. In Figure 6 (a–b), we show the forget quality and model utility throughout the unlearning process, where ideal performance is defined by effective forgetting without degrading model utility. LMLM achieves precisely this, effective forgetting with no loss in model utility, a direct benefit of decoupling factual knowledge from model parameters. Figure 6 (c–d) shows that LMLM preserves knowledge outside the Forget Set, whereas previous training-based methods tend to forget related information due to parameter entanglement. Importantly, forgetting is performed through simple database operations, without model updates or access to the Retain Set. In contrast, the strongest comparable baseline, NPO, incurs a utility degradation, while other unlearning methods either fail to forget or exhibit catastrophic drops in model utility (see Figure 4, right). While LMLM incurs upfront costs for data annotation and model training, it provides substantial payoff in use cases where knowledge editing, removal, or compliance with data deletion requests are necessary.

4 Discussion

Toward Efficient Scaling via Knowledge Offloading. We summarize our main findings in Figure 4: (1) during pretraining, LMLM consistently obtains lower perplexity compared to its STANDARD counterparts; (2) LMLM achieves higher factual precision without compromising NLU performance; (3) LMLM supports instant forgetting while maintaining general model capabilities. In particular, even smaller instances of LMLM match or exceed the performance of much larger off-the-shelf models. These results highlight the potential of LMLM to scale efficiently by offloading knowledge to an external database, thereby maintaining strong factual accuracy with fewer parameters. While our experiments are limited to modest scales due to computational constraints, the observed trends suggest that LMLMs offer a promising direction toward parameter-efficient language models that externalize factual storage. Such models may enable real-time, verifiable knowledge updates and open up new possibilities for deploying LMLM in resource-constrained or fact-sensitive environments.

Figure 7: Training loss on return value tokens.

Does LMLM Still Memorize Facts in Its Parameters? We provide preliminary evidence that LMLM reduces factual memorization through its training design. Using the TOFU synthetic training set, we compare training objectives by tracking the loss on return value tokens, the factual answers intended to be retrieved rather than memorized. As shown in Figure 7, models trained with a standard SFT objective quickly reduce the loss on these tokens, suggesting memorization in the model’s parameters. | https://arxiv.org/abs/2505.15962v1 |
In contrast, LMLM, trained with the masked loss, maintains a high loss throughout training, indicating that these facts are not stored | https://arxiv.org/abs/2505.15962v1 |
internally. LMLM’s successful application to the machine unlearning benchmark (Sec. 3.4) further supports this finding. Additionally, we observe a notable performance gap in Table 3, where factual precision drops substantially when the external database is disabled, forcing the model to rely on its internal parameters. These findings suggest that editing the database offers a direct way to control what the model knows and forgets.

Table 3: Disabling retrieval significantly reduces performance on both FACTSCORE and T-REx; see Table 8 for the full comparison. Results are for LLAMA2-382M; differences in parentheses are relative to LMLM with the database enabled.

| Model Type | FactScore (%) ↑ | T-REx EM (%) ↑ |
|---|---|---|
| STANDARD | 14.0 | 52.0 |
| LMLM (w/o database) | 12.8 (−19.1) | 38.5 (−19.6) |
| LMLM | 31.9 | 58.1 |

Beyond Entity-Level Knowledge Offloading. By leveraging knowledge triplets, LMLM focuses on separating entity-level factual knowledge from linguistic competency. Current research on probing [44, 21], evaluating [36, 55], and editing [35] internal knowledge in language models similarly concentrates on entity-level atomic facts. However, extending our method beyond entity-level knowledge remains a challenge. Under the LMLM framework, this difficulty arises in determining effective formats for separating knowledge beyond entity-focused triplets. For instance, using a simple QA function-call format raises concerns about the potential for hallucinated facts during annotation. Additionally, while LMLM attempts to minimize the memorization of factual knowledge, some knowledge still remains unremoved. More comprehensive benchmarks for probing internal knowledge in language models, as well as distinct benchmarks for disentangling knowledge and reasoning, remain underexplored.

Limitations. LMLM is a promising step toward separating factual knowledge from language models, with many exciting future directions. Our current limitations include: (1) It does not guarantee perfect factuality during generation. Noise in the database and errors from fuzzy matching can introduce inaccuracies. However, such issues are easily traceable and verifiable for LMLM. (2) LMLM introduces additional tokens for lookup queries, which increases training and inference costs. (3) The current implementation focuses on entity-level factual knowledge, which captures only a subset of the broader factual spectrum. A more systematic framework is needed to determine which types of knowledge should be delegated to external databases. (4) Our experiments are limited to small models and datasets due to compute constraints. While sufficient to show core benefits, scaling up may improve performance and support more complex reasoning [17], which remains an open challenge for LMLM.

5 Related Work

Parametric vs Non-Parametric Knowledge. Language models encode knowledge in two main forms: parametric, learned in model weights during training, and non-parametric, retrieved from external sources [34]. Early work showed that pre-trained models [11, 44] implicitly store factual knowledge. Scaling model size further improves factual recall [6, 24], but recent studies highlight fundamental limitations of parametric storage, including hallucination, staleness, poor attribution, and limited adaptability [42, 22, 26, 2], motivating the shift toward non-parametric approaches.

Retrieval-Augmented LLMs. Non-parametric methods enhance language models with external sources to improve factuality and controllability. | https://arxiv.org/abs/2505.15962v1 |
Retrieval-augmented generation (RAG) [29, 23, 52] retrieves relevant passages at inference time, while tool-augmented methods [50, 56] enable symbolic reasoning via external API calls. Retrieval-based pretraining approaches such as REALM [19] and RETRO [5] show that incorporating retrieval during training improves generalization and reduces memorization. More recently, SILO [37] demonstrates that | https://arxiv.org/abs/2505.15962v1 |
even when excluding certain sources from pretraining, competitive perplexity can be maintained through proper retrieval mechanisms. Semi-parametric language models aim to decouple knowledge storage from generation by attaching non-parametric memories of key-value pairs. A prominent example is kNN-LM [27], which retrieves nearest-neighbor training examples at test time to adjust token probabilities without retraining. Follow-up works extend this idea to various settings [43, 28, 51, 57, 40]. Our approach builds on this line of work by focusing on scalable factual memory and dynamic usage during pretraining. Rather than storing full documents or relying on retrieval pipelines at inference time, our method enables the model to learn when to issue structured lookup calls and how to integrate the results into generation.

Machine Unlearning. Machine unlearning aims to remove specific knowledge from a trained language model without full retraining [25, 7]. The goal is to forget a targeted subset of information while preserving the model’s general capabilities. Existing approaches fall into several main families. Gradient Ascent (GA) increases the loss on the forget set, while Gradient Difference (GD) jointly preserves performance on a retain set [32]. Preference-based methods, such as IdkDPO [33], NPO [60], and SimNPO [15], encourage the model to respond with neutral or deflective answers [47]. Among these, NPO achieves the strongest results on the TOFU benchmark [33], which evaluates entity-level forgetting and utility retention on a synthetic dataset of author profiles. Despite strong results on TOFU, existing methods struggle to scale to larger forget sets (e.g., 5% of training data) and often degrade model utility or erase unintended related knowledge. These limitations arise from the entanglement of factual content and general linguistic ability in model parameters. Our approach takes a different route by structurally decoupling factual knowledge from the model. By offloading facts to an external database, forgetting reduces to removing entries, offering a scalable, verifiable, and precise solution without compromising utility. Moreover, LMLM can also be applied to knowledge editing benchmarks [38, 53, 16], where modifying or deleting facts becomes as simple as updating the database.

6 Conclusion

We introduce LMLMs, a new class of language models for externalizing knowledge, alongside an integrated solution to achieve this. Our results demonstrate promising trends toward efficient use of model capacity and offloading facts onto an external database. LMLM represents an alternative way to store facts during pre-training, and it has the potential to be integrated with other common approaches developed for LLMs, including retrieval-based methods, symbolic reasoning, and knowledge representation. Consequently, LMLM opens up new ways for future language models to leverage the benefits of external knowledge databases, such as verifiable updates; fundamentally, it is much easier and more memory-efficient to learn how to look up facts than to remember them.

Acknowledgments

SZ is supported by the Defense Advanced Research Projects Agency (DARPA) under Grant No. D24AP00259-00. CKB is supported by the National Science Foundation (NSF) through the NSF Research Traineeship (NRT) program under Grant No. 2345579. JPZ is supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) (567916). This work is | https://arxiv.org/abs/2505.15962v1 |
partially supported by the LinkedIn-Cornell Partnership, Open Philanthropy, the National Science Foundation (NSF) (IIS-2107161, IIS-1724282, HDR-2118310), the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), DARPA, arXiv, and the New York Presbyterian Hospital.

References

[1] Leonard Bereska and Efstratios Gavves. Mechanistic interpretability for AI safety: A review. arXiv preprint arXiv:2404.14082, 2024.
[2] Baolong Bi, Shenghua Liu, Yiwei Wang, Yilong Xu, Junfeng Fang, Lingrui Mei, and Xueqi Cheng. Parameters vs. context: Fine-grained control of knowledge reliance in language models, 2025. URL https://arxiv.org/abs/2503.15888.
[3] Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373.
[4] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language, 2019. URL https://arxiv.org/abs/1911.11641.
[5] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens, 2022. URL https://arxiv.org/abs/2112.04426.
[6] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165.
[7] Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pp. 463–480, 2015. doi: 10.1109/SP.2015.35.
[8] Swarat Chaudhuri, Kevin Ellis, Oleksandr Polozov, Rishabh Singh, Armando Solar-Lezama, Yisong Yue, et al. Neurosymbolic programming. Foundations and Trends® in Programming Languages, 7(3):158–243, 2021.
[9] Nurendra Choudhary and Chandan K Reddy. Complex logical reasoning over knowledge graphs using large language models. arXiv preprint arXiv:2305.01157, 2023.
[10] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019. URL https://arxiv.org/abs/1810.04805.
[12] Zhengxiao Du, Aohan Zeng, Yuxiao Dong, and Jie Tang. Understanding emergent abilities of language models from the loss perspective, 2025. URL https://arxiv.org/abs/2403.15796. | https://arxiv.org/abs/2505.15962v1 |
[13] Aaron Grattafiori et al. The Llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
[14] Hugo Touvron et al. Llama 2: Open foundation and fine-tuned chat models, 2023. URL https://arxiv.org/abs/2307.09288.
[15] Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, and Sijia Liu. | https://arxiv.org/abs/2505.15962v1 |
Simplicity prevails: Rethinking negative preference optimization for LLM unlearning, 2025. URL https://arxiv.org/abs/2410.07163.
[16] Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Shi Jie, Xiang Wang, Xiangnan He, and Tat-Seng Chua. AlphaEdit: Null-space constrained knowledge editing for language models, 2025. URL https://arxiv.org/abs/2410.02355.
[17] Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P. Gomes, and Kilian Q. Weinberger. PhantomWiki: On-demand datasets for reasoning and retrieval evaluation, 2025. URL https://arxiv.org/abs/2502.20377.
[18] Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, and Hannaneh Hajishirzi. OLMo: Accelerating the science of language models, 2024. URL https://arxiv.org/abs/2402.00838.
[19] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training, 2020. URL https://arxiv.org/abs/2002.08909.
[20] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models, 2021. URL https://arxiv.org/abs/2106.09685.
[21] Yujia Hu, Tuan-Phong Nguyen, Shrestha Ghosh, and Simon Razniewski. GPTKB: Comprehensively materializing factual LLM knowledge, 2024. URL https://arxiv.org/abs/2411.04920.
[22] Lei Huang, Xiaocheng Feng, Weitao Ma, Liang Zhao, Yuchun Fan, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, and Bing Qin. Advancing large language model attribution through self-improving, 2024. URL https://arxiv.org/abs/2410.13298.
[23] Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering, 2021. URL https://arxiv.org/abs/2007.01282.
[24] Gautier Izacard and Edouard Grave. Distilling knowledge from reader to retriever for question answering, 2022. URL https://arxiv.org/abs/2012.04584.
[25] Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models, 2022. URL https://arxiv.org/abs/2210.01504.
[26] Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. Large language models struggle to learn long-tail knowledge, 2023. URL https://arxiv.org/abs/2211.08411.
[27] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models, 2020. URL https://arxiv.org/abs/1911.00172.
[28] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Nearest neighbor machine translation, 2021. URL https://arxiv.org/abs/2010.00710.
[29] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. | https://arxiv.org/abs/2505.15962v1 |
Retrieval-augmented generation for knowledge-intensive NLP tasks, 2021. URL https://arxiv.org/abs/2005.11401.
[30] Minghan Li, Xilun Chen, Ari Holtzman, Beidi Chen, Jimmy Lin, Scott Yih, and Victoria Lin. Nearest neighbor speculative decoding for LLM generation and attribution. Advances in Neural Information Processing Systems, 37:80987–81015, 2024.
[31] Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, | https://arxiv.org/abs/2505.15962v1 |
Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, and Weizhu Chen. Rho-1: Not all tokens are what you need, 2025. URL https://arxiv.org/abs/2404.07965.
[32] Bo Liu, Qiang Liu, and Peter Stone. Continual learning and private unlearning, 2022. URL https://arxiv.org/abs/2203.12817.
[33] Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary C. Lipton, and J. Zico Kolter. TOFU: A task of fictitious unlearning for LLMs, 2024. URL https://arxiv.org/abs/2401.06121.
[34] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories, 2023. URL https://arxiv.org/abs/2212.10511.
[35] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT, 2023. URL https://arxiv.org/abs/2202.05262.
[36] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation, 2023. URL https://arxiv.org/abs/2305.14251.
[37] Sewon Min, Suchin Gururangan, Eric Wallace, Weijia Shi, Hannaneh Hajishirzi, Noah A. Smith, and Luke Zettlemoyer. SILO language models: Isolating legal risk in a nonparametric datastore, 2024. URL https://arxiv.org/abs/2308.04430.
[38] Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. Fast model editing at scale, 2022. URL https://arxiv.org/abs/2110.11309.
[39] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, 36(7):3580–3599, 2024.
[40] Xiaoman Pan, Wenlin Yao, Hongming Zhang, Dian Yu, Dong Yu, and Jianshu Chen. Knowledge-in-context: Towards knowledgeable semi-parametric language models, 2023. URL https://arxiv.org/abs/2210.16433.
[41] Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro von Werra, and Thomas Wolf. The FineWeb datasets: Decanting the web for the finest text data at scale, 2024. URL https://arxiv.org/abs/2406.17557.
[42] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. Check your facts and try again: Improving large language models with external knowledge and automated feedback, 2023. URL https://arxiv.org/abs/2302.12813.
[43] Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, and Houfeng Wang. Semiparametric language models are scalable continual learners, 2023. URL https://arxiv.org/abs/2303.01421.
[44] Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. Language models as knowledge bases?, 2019. URL https://arxiv.org/abs/1909.01066.
[45] Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. KILT: A benchmark for knowledge intensive language tasks, 2021. URL https://arxiv.org/abs/2009.02252.
[46] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI, 2019. URL https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. | https://arxiv.org/abs/2505.15962v1 |
Accessed: 2024-11-15.
[47] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model, 2024. URL https://arxiv.org/abs/2305.18290.
[48] Pranav Rajpurkar, Robin Jia, and Percy | https://arxiv.org/abs/2505.15962v1 |
Liang. Know what you don’t know: Unanswerable questions for SQuAD, 2018. URL https://arxiv.org/abs/1806.03822.
[49] Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. SocialIQA: Commonsense reasoning about social interactions, 2019. URL https://arxiv.org/abs/1904.09728.
[50] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools, 2023. URL https://arxiv.org/abs/2302.04761.
[51] Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. kNN-Prompt: Nearest neighbor zero-shot inference, 2022. URL https://arxiv.org/abs/2205.13792.
[52] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. REPLUG: Retrieval-augmented black-box language models, 2023. URL https://arxiv.org/abs/2301.12652.
[53] Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, and Artem Babenko. Editable neural networks, 2020. URL https://arxiv.org/abs/2004.00345.
[54] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421/.
[55] Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. Measuring short-form factuality in large language models, 2024. URL https://arxiv.org/abs/2411.04368.
[56] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models, 2023. URL https://arxiv.org/abs/2210.03629.
[57] Dani Yogatama, Cyprien de Masson d’Autume, and Lingpeng Kong. Adaptive semiparametric language models, 2021. URL https://arxiv.org/abs/2102.02557.
[58] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence?, 2019. URL https://arxiv.org/abs/1905.07830.
[59] Ningyu Zhang, Yunzhi Yao, Bozhong Tian, Peng Wang, Shumin Deng, Mengru Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, et al. A comprehensive study of knowledge editing for large language models. arXiv preprint arXiv:2401.01286, 2024.
[60] Ruiqi Zhang, Licong Lin, Yu Bai, and Song Mei. Negative preference optimization: From catastrophic collapse to effective unlearning, 2024. URL https://arxiv.org/abs/2404.05868.

A Implementation Details

A.1 Knowledge Extraction Details

Model and Data. We construct a high-quality seed dataset by sampling 1,000 passages from SQuAD-v2 [48] and 1,000 passages from Wikipedia. These passages are annotated by GPT-4o with structured factual triples and used to train the ANNOTATOR model. The remaining Wikipedia passages are used as the pre-training corpus for LMLM, with no overlap with the validation set. Both the CORRECTOR and ANNOTATOR are based on LLAMA-3.1-8B-INSTRUCT, selected for its strong instruction-following capabilities. The CORRECTOR and ANNOTATOR use a maximum context length of 2048 tokens. | https://arxiv.org/abs/2505.15962v1 |
All input sequences are truncated to 1024 tokens during LMLM pretraining for consistency. GPT-4o and ANNOTATOR annotations follow the format [dblookup('Entity', 'Relation') -> Value]. These are converted to a token-based format for pretraining: <|db_start|> Entity <|sep|> Relation <|db_value|> Value <|db_end|>.
• The CORRECTOR is fine-tuned using LoRA | https://arxiv.org/abs/2505.15962v1 |
(r=32, alpha=16) [20] for 2 epochs on 19k GPT-4o-annotated SQuAD-v2 passages, with a learning rate of 2×10^-4 and an effective batch size of 32. Sequence packing is enabled to improve training efficiency.
• The ANNOTATOR is instruction-tuned on the 2k annotated passages using LoRA (r=32, alpha=16), with a learning rate of 2×10^-4 and an effective batch size of 32. Training runs for 10 epochs with a maximum sequence length of 2048.

Annotation Pipeline. We adopt a three-stage pipeline to distill GPT-4o’s structured annotations into a lightweight and scalable ANNOTATOR model:
• Stage 1: Seed Annotation. GPT-4o is prompted to annotate input passages with structured factual triples (entity, relation, value). These are embedded directly into the text using the lookup call format. The prompt template is provided in Table 4.

Table 4: Prompt templates used for GPT-4o and ANNOTATOR annotation.

GPT-4o prompt:
You are a knowledge base construction expert. Extract entity-based factual knowledge from a passage and annotate it using the format: [dblookup('Entity', 'Relationship') -> Value]. These annotations simulate a knowledge base query for factual generation. Place dblookup right after the entity and relationship appear, keeping the text flow natural.
Entity-Based Factual Knowledge Principles:
- Entities: Use full names for people, organizations, places, or works.
- Relationships: Use specific, reusable labels that define the connection clearly.
- Values: Keep them concise and factual.
Annotation Principles:
1. Extract ALL Atomic Facts: Each annotation should capture a single verifiable fact.
2. Precise Annotations: Use correct and specific entity-relationship-value triples.
3. Ensure Reusability: Use standardized and reusable entity and relation names.
4. Contextual Positioning Rule: Place annotations only after both entity and relation appear.
5. Preserve Text and Maintain Flow: Do not alter or disrupt the original text.
Example Annotation:
Input: Beyoncé Giselle Knowles-Carter (born September 4, 1981) is an American singer, songwriter, record producer, and actress.
Output: Beyoncé Giselle Knowles-Carter (born [dblookup('Beyoncé Giselle Knowles-Carter', 'Birth Date') -> September 4, 1981] September 4, 1981) is an [dblookup('Beyoncé Giselle Knowles-Carter', 'Nationality') -> American] American [dblookup('Beyoncé Giselle Knowles-Carter', 'Occupation') -> singer, songwriter, record producer, actress] singer, songwriter, record producer, and actress.

ANNOTATOR prompt:
Your task is to extract and annotate entity-based factual knowledge from the provided text. Identify and annotate specific entities, relationships, and values using the dblookup format: [dblookup('Entity', 'Relationship') -> Value]
Annotation Guidelines:
- Inline Insertion: Insert dblookup before factual statements without altering the text.
- Atomic Facts: Each dblookup should capture one verifiable fact.
- Entities: Use full names for people, organizations, places, or works.
- Relationships: Use specific, reusable labels (avoid vague terms).
- Values: Keep them concise and factual.

• Stage 2: Cleaning. A warm-start CORRECTOR model is trained on the annotated data for 2 epochs without instruction prompts. Although underfit, it is effective at identifying noisy or ill-formed annotations. Specifically, we discard lookup calls where the token-level loss on the entity or relation is in the top 10% of the distribution.
• Stage 3: Annotation. We instruction-tune a new ANNOTATOR model on the cleaned dataset for 10 epochs. | https://arxiv.org/abs/2505.15962v1 |
This model learns to detect when | https://arxiv.org/abs/2505.15962v1 |
factual knowledge should be externalized and how to issue structured lookup queries. The trained model is then applied to the full pre-training corpus to generate large-scale factual supervision. The prompt template is provided in Table 4.

Additional Notes. We observe a bimodal loss distribution on entity and relation tokens in the GPT-4o-generated annotations. This is likely due to GPT-4o accessing future context during generation, which breaks the left-to-right constraint of autoregressive models. As a result, some annotations are not recoverable from preceding context alone. The CORRECTOR helps filter out such cases, removing lookup calls that are (1) not inferable from prior context, (2) overly specific or inconsistent, or (3) syntactically malformed. This filtering improves the quality of supervision provided to the final ANNOTATOR model. Ultimately, the ANNOTATOR learns to insert lookup calls only when they are contextually grounded and likely to enhance factual accuracy. This encourages retrieval-based reasoning and helps LMLM offload factual knowledge from its parameters into a structured database.

A.2 Model Architecture and Training Details

We pretrain LMLM from scratch using GPT-2 and LLaMA2-style decoder-only architectures. Each model uses its original tokenizer and vocabulary, extended with four special tokens reserved for lookup calls. This results in a vocabulary size of 50,261 for GPT-2 models and 32,004 for LLaMA2 variants. Full architecture specifications, including hidden size, depth, and parameter counts, are shown in Table 5. All models are trained for 8 epochs with a context length of 1,024 tokens using mixed-precision training. For LLAMA2-176M and LLAMA2-382M, we use a batch size of 256 and train for 105k steps, totaling approximately 8 H100-days. Training is performed using Hugging Face Accelerate in bf16 precision. Hyperparameters such as learning rate, scheduler, and warmup steps are detailed in Table 6.

Table 5: Model architecture, vocabulary size (including 4 special tokens), and parameter counts. We report both total and non-embedding parameter counts.

| Model | Hidden Size | #Layers | #Heads | Vocab Size | Params (Total / Non-Embed) |
|---|---|---|---|---|---|
| GPT2-124M | 768 | 12 | 12 | 50,261 | 124.4M / 85.5M |
| LLAMA2-176M | 512 | 8 | 8 | 32,004 | 176.4M / 160.0M |
| GPT2-355M | 1024 | 24 | 16 | 50,261 | 354.8M / 303.4M |
| LLAMA2-382M | 768 | 12 | 12 | 32,004 | 381.8M / 357.3M |

Table 6: Training hyperparameters. LLAMA2-176M and LLAMA2-382M are initialized in float32 and trained with mixed precision (bf16) using Hugging Face Accelerate.

| Model | Batch Size | Total Steps | LR | Scheduler | Warmup | Precision |
|---|---|---|---|---|---|---|
| GPT2-124M, GPT2-355M | 320 | 66k | 5.0e-4 | – | – | bf16 (mixed) |
| LLAMA2-176M, LLAMA2-382M | 256 | 105k | 5.0e-4 | cosine | 2000 | bf16 (mixed) |

A.3 Formalization of Training and Evaluation Objectives

We denote an autoregressive language model by $p_\theta$, which defines a probability distribution over a sequence of tokens $x = (x_1, \ldots, x_T)$ as:
$$p_\theta(x) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}).$$

Each token $x_t$ belongs to one of the following categories:
• $\mathcal{T}_{\text{org}}$: original text tokens from the raw corpus;
• $I_{\text{train}}$: the positions included in the training loss; return values and <|db_end|> are excluded from it;
• $\mathcal{T}_e$, $\mathcal{T}_r$: tokens representing entity and relation arguments within database lookup calls;
• $\mathcal{T}_v$: tokens corresponding to retrieved factual values (i.e., return values);
• $\mathcal{T}_{\text{db}}$: special tokens used to mark database lookup segments, including:
  – <|db_start|>: begins a lookup call;
  – <|sep|>: separates entity and relation in the query;
  – <|db_retrieve|>: signals the insertion | https://arxiv.org/abs/2505.15962v1 |
point for the returned value;
  – <|db_end|>: marks the end of the lookup block.

Here is an example (in the paper, background color distinguishes the categories; in plain text we describe them instead). The annotated sequence is:

Napoleon was born on <|db_start|> Napoleon <|sep|> Birth_Date <|db_retrieve|> August 15, 1769 <|db_end|> August 15, 1769.

The $\mathcal{T}_{\text{org}}$ tokens are “Napoleon was born on … August 15, 1769.”, while $I_{\text{train}}$ covers every position except the retrieved value “August 15, 1769” inside the lookup block and the closing <|db_end|>.

Training Loss. The training objective excludes supervision over return values and the closing marker <|db_end|> to prevent memorization of factual knowledge:
$$\mathcal{L}(\theta) = -\sum_{t \in I_{\text{train}}} \log p_\theta(x_t \mid x_{<t}), \quad \text{where } I_{\text{train}} = \{\, t \mid x_t \notin \mathcal{T}_v \cup \{\texttt{<|db\_end|>}\} \,\}. \tag{1}$$

Evaluation Metrics. We report both perplexity and negative log-likelihood (NLL), computed over different token subsets depending on the evaluation setting:
• Static & Dynamic Perplexity: tokens corresponding to lookup calls and return values are excluded:
$$\mathrm{PPL}_{\text{static/dynamic}} = \exp\Big(-\frac{1}{|I_{\text{org}}|} \sum_{t \in I_{\text{org}}} \log p_\theta(x_t \mid x_{<t})\Big).$$
• Normalized Perplexity: this metric fairly compares generation likelihood by excluding retrieved factual values from the loss, but normalizing by the length of the original (fully reconstructed) text:
$$\mathrm{PPL}_{\text{norm}} = \exp\Big(-\frac{1}{|\mathcal{T}_{\text{org}}|} \sum_{t \in I_{\text{train}}} \log p_\theta(x_t \mid x_{<t})\Big).$$
• Negative Log-Likelihood (NLL): matches the training loss computation:
$$\mathrm{NLL}(x) = -\sum_{t \in I_{\text{train}}} \log p_\theta(x_t \mid x_{<t}).$$

A.4 Database and Retrieval Setting

We build our database by annotating the pre-training data, obtaining 54.6M knowledge triplets consisting of 9.5M unique entities, 8.5M relationships, and 16.2M unique values. For retrieval, we employ a fuzzy matching mechanism based on the cosine similarity of sentence embeddings from ALL-MINILM-L6-V2.^4 Specifically, given a lookup call, we compute its embedding and compare it with the embeddings of stored triplets in our database. If the highest similarity score is below a threshold of 0.6, we return unknown to indicate that no sufficiently similar match was found. Alternatively, we implement prefix-tree constrained generation (Figure 8), ensuring that lookup calls remain covered by the structured knowledge representations. See the detailed discussion in Appendix D.3.

^4 https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2

Figure 8: Unconstrained vs. Constrained Query Generation.

B Experimental Setup

B.1 Evaluation Benchmarks

Perplexity. We evaluate language modeling perplexity on a held-out Wikipedia test set consisting of 1,000 passages (~245k tokens). We use the same tokenizer as the original model (either GPT2 or LLaMA2) and apply Hugging Face’s Trainer with sequence packing enabled. For LMLM, reference completions are annotated using ANNOTATOR. In the dynamic setting, the model generates its own lookup arguments, but we force lookup calls to occur at the same positions as in the reference. The exact formulation of perplexity is provided in Appendix A.3.

FactScore. We evaluate factual precision using FACTSCORE [36], a benchmark for open-ended biography generation. Given a generated text, FactScore extracts a set of atomic facts and computes the proportion that is supported by a trusted knowledge source. We use the first 100 biography queries provided in the benchmark. All models generate outputs using greedy decoding (maximum length = 256 tokens; repetition penalty = 1.2). | https://arxiv.org/abs/2505.15962v1 |
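The evaluation protocols in this section (FactScore, T-REx, and the ROUGE scoring in Appendix B.2) compute metrics only after stripping the structured lookup spans from generated text. A minimal sketch of that post-processing; the regex is an assumption for the sketch, and it relies on the annotation format restating the value after <|db_end|>:

```python
# Illustrative post-processing before scoring: remove lookup spans, keeping
# only the surface text. Because the value is re-stated after <|db_end|>,
# deleting the whole span is lossless for the readable output.
import re

LOOKUP_SPAN = re.compile(r"<\|db_start\|>.*?<\|db_end\|>\s*", flags=re.DOTALL)

def strip_lookup_calls(text: str) -> str:
    return LOOKUP_SPAN.sub("", text)

print(strip_lookup_calls(
    "Napoleon was born on <|db_start|> Napoleon <|sep|> Birth_Date "
    "<|db_retrieve|> August 15, 1769 <|db_end|> August 15, 1769."
))  # -> "Napoleon was born on August 15, 1769."
```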
Factuality is validated using retrieval-augmented prompting with ChatGPT, following the official evaluation protocol.^5 For LMLM, which is not instruction-tuned, | https://arxiv.org/abs/2505.15962v1 |
we use a fixed prompt template to elicit biography completions: “Tell me a bio of <name>. <name> is”. This prompt is applied consistently across all samples. To encourage structured queries during generation, we apply a logit bias to four special tokens in the vocabulary: <|db_start|>, <|sep|>, <|db_retrieve|>, and <|db_end|>, with respective bias values of 5.0, 2.0, 2.0, and 2.0. Retrieval is performed using fuzzy matching with a cosine similarity threshold of 0.6. If no relevant triplet is found, the model continues generation using the plain text string unknown as a fallback. This behavior is untrained and left for future work to improve robustness to retrieval failures.

T-REx. We adapt the T-REx subset from LAMA [44] for autoregressive models by filtering examples in which the masked entity does not appear in the final position. This yields 11,615 left-to-right compatible examples, following Schick et al. [50]. Each input consists of a factual statement, such as: “Jaideep Sahni (born 1968) is an Indian [MASK]”. The model is expected to complete the statement with a single token (e.g., actor). We evaluate using two metrics: Exact Match, which checks whether the reference answer appears among the first five generated content words, and Precision@1, which checks whether the first generated content token matches the reference. All models generate outputs using greedy decoding, and both metrics are computed after post-processing to remove lookup calls. For LMLM, we enforce a database lookup call at the masked position using fuzzy matching with a similarity threshold of 0.6. If no match is found, the model continues with standard decoding without triggering a structured lookup call, as the target fact may belong to common knowledge not covered by the database.

^5 https://github.com/shmsw25/FActScore

B.2 Machine Unlearning Setting

TOFU. The TOFU benchmark [33] evaluates unlearning efficacy in privacy-sensitive scenarios, aiming to selectively remove a specific subset of information (the Forget Set) from a model (the Full Model) while maintaining performance on retained information (the Retain Set) and general model capabilities. The benchmark’s primary goal is to ensure the resulting Unlearned Model is statistically indistinguishable from a model trained exclusively on the Retain Set (the Retain Model). The TOFU benchmark comprises 200 synthetic author profiles, each associated with 20 QA pairs. Performance is assessed using two primary metrics:
• Model utility: an average of three metrics, ROUGE (answer quality), Answer Probability (the likelihood of correct answers), and Truth Ratio (the likelihood assigned to correct answers over distractors), evaluated on the Retain Set, Real Author Set, and World Facts Set.
• Forget quality: the p-value from a statistical test comparing the unlearned model to the corresponding retain model, quantifying the effectiveness of knowledge removal.

Our implementation directly builds upon the official TOFU repository.^6 We use LLAMA-3.2-1B-INSTRUCT as the base model and compare against NPO [60], a state-of-the-art method for unlearning. We test on the forget-5% setting. For LMLM, unlearning is implemented simply by removing relevant entries corresponding to the Forget Set from the external database.

NPO baseline. Negative Preference Optimization (NPO) [ | https://arxiv.org/abs/2505.15962v1 |
60] is a state-of-the-art method for selective knowledge removal, especially effective for unlearning large portions (50%–90%) of training data. Unlike traditional gradient-based unlearning methods, which often degrade a model’s general performance and struggle to forget even 10% of training data, NPO explicitly guides the model away from undesired (negative) samples, maintaining stable training dynamics and preventing catastrophic performance degradation. Further details can be found in the original paper [60]. We follow the official TOFU implementation, running unlearning fine-tuning three times with different random seeds and reporting the mean and variance in Figure 6.

LMLM Implementation of TOFU Evaluation. To evaluate unlearning effectiveness, we assume the synthetic knowledge used in TOFU is fully represented in our external database. We annotate the complete TOFU dataset (4k synthetic QA pairs) using GPT-4o, subsequently building our database from these annotations. Since LMLM is applied to pre-existing models, we introduce an additional step to ensure the model can utilize the lookup mechanism effectively. Specifically, we first perform a warm-up training stage on annotated Wikipedia data, followed by fine-tuning on the annotated TOFU training set, using the same hyperparameters as the baseline models (see Table 7). For ROUGE evaluations, generated answers are post-processed to remove structured lookup tokens before computing the scores. For likelihood-based metrics (Answer Probability and Truth Ratio), we evaluate the model’s probabilities only on the annotated training input segments (I_train, as defined in Appendix A.3). Thus, ROUGE scores remain directly comparable across methods, but likelihood-based metrics are not directly comparable due to differences between raw and annotated reference answers.

Interpreting TOFU Results. When interpreting TOFU results, the key indicator of successful unlearning is how closely the unlearned model’s performance matches that of the retain model. Ideally, the two models should be indistinguishable, indicated by a forget quality p-value above 0.05. In practice, this means the unlearned model should achieve forget quality above 0.05 while maintaining consistent model utility, and its ROUGE scores on both the Forget Set and Retain Set should closely resemble those of the retain model. Importantly, lower ROUGE scores on the Forget Set do not necessarily imply better unlearning, as these reductions could also result from general performance degradation.

^6 https://github.com/locuslab/open-unlearning

B.3 Experimental Settings by Figure

We detail the experimental configurations corresponding to each figure in the main paper:
• Figure 4: (Left) We evaluate on a held-out Wikipedia validation set of 100 passages (~21k tokens) every 1000 steps during pretraining. (Middle) The detailed results for FactScore and NLU are in Tables 1 and 9. (Right) The backbone model is LLAMA-3.2-1B-INSTRUCT. We implement the LMLM evaluation based on the official TOFU repo, in the forget-5% setting. The detailed results for baseline methods are copied from the TOFU repo.
• Figure 6: For NPO baselines, we follow the official TOFU implementation, running unlearning fine-tuning five times with different random seeds (0, 42, 420, 69, 4497) and reporting the mean and variance throughout training. The ideal performance of the retain model is indicated with a marker. For LMLM, which does not require training to unlearn, we show only the pre- and | https://arxiv.org/abs/2505.15962v1 |
B.3 Experimental Settings by Figure

We detail the experimental configurations corresponding to each figure in the main paper:

•Figure 4: (Left) We evaluate on a held-out Wikipedia validation set of 100 passages (~21k tokens) every 1000 steps during pretraining. (Middle) The detailed results for FactScore and NLU are in Tables 1 and 9. (Right) The backbone model is LLAMA-3.2-1B-INSTRUCT. We implement the LMLM evaluation based on the official TOFU repository, using the forget 5% setting. The detailed results for baseline methods are copied from the TOFU repository.

•Figure 6: For NPO baselines, we follow the official TOFU implementation, running unlearning fine-tuning five times with different random seeds (0, 42, 420, 69, 4497) and reporting the mean and variance throughout training. The ideal performance of the retain model is indicated with a marker. For LMLM, which does not require training to unlearn, we show only the pre- and post-unlearning results. Details on how to interpret the results are in Appendix B.2.

•Figure 7: We use the TOFU synthetic training set to compare training objectives by tracking the loss on return value tokens—the factual spans intended to be retrieved via lookup. Models are evaluated every 125 steps during training.

Table 7: Training hyperparameters used in different experimental settings. (Full set: 4k QA pairs, Retain set: 3.8k QA pairs)

| Setting | learning_rate | warmup_steps | num_train_epochs | batch_size | dataset |
|---|---|---|---|---|---|
| Finetune (Standard) | 1e-5 | 0.2 | 5 | 32 | TOFU trainset |
| Warmup (LMLM) | 5e-5 | 0.25 | 1 | 64 | Annotated Wikipedia (9.8k chunks) |
| Finetune (LMLM) | 5e-5 | 0.2 | 5 | 32 | Annotated TOFU trainset |

C Detailed Results

RAG. We implement a standard retrieval-augmented generation (RAG) pipeline following Lewis et al. [29]. The retriever uses BM25 (via the BM25Retriever from FlashRAG7) to fetch the top-4 relevant chunks from English Wikipedia segmented into 100-word passages. Retrieved passages are appended to the input using the prompt template “Answer the question or complete the prompt based on the given document. The following are given documents: [RETRIEVED_DOCUMENTS] \n [USER_QUERY]”, as shown in Table 14. No fine-tuning is performed; models are evaluated with retrieval at inference time only. In practice, we observe that the BM25 retriever consistently retrieves highly relevant supporting documents, ensuring a fair and competitive RAG baseline. We evaluate RAG with OPENAI/GPT2-355M, enabling a direct comparison against our LMLM model of similar scale.
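A minimal sketch of this retrieval step, using the rank_bm25 package as a stand-in for FlashRAG's BM25Retriever; the corpus and query below are toy examples, not the actual Wikipedia index:

```python
from rank_bm25 import BM25Okapi

passages = [
    "Kang Ji-hwan is a South Korean actor born Jo Tae-gyu on March 20, 1977.",
    "90 Days, Time to Love is a 2006 South Korean TV series.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
]
bm25 = BM25Okapi([p.lower().split() for p in passages])  # 100-word chunks in practice

query = "Tell me a bio of Kang Ji-hwan."
top_docs = bm25.get_top_n(query.lower().split(), passages, n=4)  # top-4 chunks

prompt = (
    "Answer the question or complete the prompt based on the given document. "
    "The following are given documents: " + " ".join(top_docs) + "\n" + query
)
```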
Factual Degradation When Forcing Internal Recall. Table 8 shows the full results when database access is disabled, forcing models to rely solely on internal parameters. Across all variants, we observe a consistent drop in factual precision, often below the STANDARD baseline. This degradation supports our main claim: LMLM does not memorize factual answers but retrieves them externally. These results highlight that what the model knows and forgets is determined by the database, enabling precise and direct control through simple edits.

Table 8: Impact of database access on factual precision. Disabling access leads to performance drops in both FACTSCORE and T-REx, confirming that LMLM relies on retrieval from the external database rather than memorization. (Changes in parentheses are relative to the database-enabled setting.)

| Model | Model Type | Database | FActScore (%) ↑ | T-REx Exact Match (%) ↑ |
|---|---|---|---|---|
| GPT2-124M | STANDARD | - | 10.7 | 41.2 |
| GPT2-124M | LMLM | × | 14.9 (−5.7) | 32.0 (−22.6) |
| GPT2-124M | LMLM | ✓ | 20.6 | 54.6 |
| LLAMA2-176M | STANDARD | - | 10.1 | 46.3 |
| LLAMA2-176M | LMLM | × | 11.3 (−19.3) | 34.9 (−19.2) |
| LLAMA2-176M | LMLM | ✓ | 30.6 | 54.1 |
| GPT2-355M | STANDARD | - | 14.4 | 44.9 |
| GPT2-355M | LMLM | × | 10.4 (−13.5) | 36.4 (−22.3) |
| GPT2-355M | LMLM | ✓ | 23.9 | 58.7 |
| LLAMA2-382M | STANDARD | - | 14.0 | 52.0 |
| LLAMA2-382M | LMLM | × | 12.8 (−19.1) | 38.5 (−19.6) |
| LLAMA2-382M | LMLM | ✓ | 31.9 | 58.1 |

Additional Loss Curves. Figure 9 shows the full training loss curves on different token types in the TOFU synthetic trainset. While the main paper focuses on return value tokens, we include loss on T_org, entity, and relationship tokens here for completeness. We observe no significant difference between training objectives on these tokens.

Figure 9: Training loss on T_org tokens, entity tokens, relationship tokens, and return value tokens (LMLM vs. regular SFT).

7https://github.com/RUC-NLPIR/FlashRAG/blob/main/docs/original_docs/baseline_details.md#global-setting

Figure 10: Frequency distribution of database entities matched to Wikipedia entries. The database spans a wide range of entity frequencies, including many long-tail cases. | https://arxiv.org/abs/2505.15962v1 |
D Further Analysis

D.1 Does LMLM Affect Language Understanding?

Beyond the promising results, it is important to verify that our approach does not compromise the general capabilities of pretrained language models. To assess this, we follow Penedo et al. [41] and evaluate on a set of "high-signal" Natural Language Understanding (NLU) benchmarks (as shown in Table 9). Given our focus on smaller models, we exclude benchmarks where both GPT-2 and similarly sized models fail to rise above the noise floor [12]. This leaves us with the following benchmarks: CommonsenseQA [54], HellaSwag [58], PIQA [4], SIQA [49], and ARC Easy [10]. Implementation details can be found in lighteval8. It is important to mention that STANDARD and LMLM are pretrained solely on the Wikipedia dataset, which makes certain benchmarks not applicable for comparison with off-the-shelf models. However, the chosen NLU benchmarks still effectively address concerns about any negative impact on the models' general language understanding introduced by removing factual knowledge.

Table 9: Evaluation on NLU benchmarks using normalized accuracy metrics, demonstrating that separating factual knowledge during pretraining does not compromise overall model performance.

| Model | Model Type | CSQA | HellaSwag | PIQA | SIQA | ARC Easy | All |
|---|---|---|---|---|---|---|---|
| Random Chance | - | 20.0 | 25.0 | 50.0 | 33.3 | 25.0 | 30.7 |
| OPENAI/GPT2-124M* | - | 30.3 | 29.8 | 62.5 | 40.7 | 39.5 | 40.6 |
| GPT2-124M | STANDARD | 26.5 | 26.4 | 55.3 | 39.2 | 34.2 | 36.3 |
| GPT2-124M | LMLM | 27.9 | 26.8 | 55.1 | 39.9 | 35.0 | 37.0 |
| LLAMA2-176M | STANDARD | 26.6 | 27.0 | 55.4 | 40.4 | 33.9 | 36.7 |
| LLAMA2-176M | LMLM | 26.8 | 28.2 | 55.2 | 40.2 | 35.8 | 37.2 |
| OPENAI/GPT2-355M* | - | 32.6 | 37.1 | 66.4 | 41.2 | 43.6 | 44.2 |
| GPT2-355M | STANDARD | 28.1 | 27.0 | 55.7 | 40.0 | 37.8 | 37.7 |
| GPT2-355M | LMLM | 27.1 | 27.7 | 56.8 | 40.1 | 36.9 | 37.7 |
| LLAMA2-382M | STANDARD | 27.8 | 28.8 | 55.2 | 41.0 | 35.8 | 37.7 |
| LLAMA2-382M | LMLM | 26.9 | 29.1 | 56.1 | 40.8 | 35.9 | 37.8 |

* Models marked with an asterisk (*) are off-the-shelf models with no additional training.

8https://github.com/huggingface/lighteval

D.2 Entity Frequency Analysis in the Knowledge Database

We analyze the prevalence of knowledge triplets in our database by estimating how often each entity appears in Wikipedia. Using entity frequency statistics from Kandpal et al. [26], we apply fuzzy string matching (threshold = 70) to align database entities with Wikipedia entries. Figure 10 shows the distribution of matched entity frequencies. The results indicate that our database spans both common and long-tail knowledge, with a substantial portion of entities being less frequent in the overall training corpus. | https://arxiv.org/abs/2505.15962v1 |
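A minimal sketch of this alignment step, assuming the rapidfuzz library and a toy frequency table standing in for the statistics of Kandpal et al. [26]:

```python
from rapidfuzz import fuzz, process, utils

wiki_freq = {"Ko Itakura": 412, "Kawasaki Frontale": 3105, "Groningen": 8760}  # toy counts

def match_entity(entity, threshold=70):
    """Align a database entity with a Wikipedia entry via fuzzy string matching."""
    result = process.extractOne(
        entity, list(wiki_freq), scorer=fuzz.token_sort_ratio,
        processor=utils.default_process,
    )
    if result is not None and result[1] >= threshold:
        name = result[0]
        return name, wiki_freq[name]
    return None, 0  # unmatched entities are excluded from the histogram

print(match_entity("ko itakura (footballer)"))
```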
Table 10: Comparison of Fuzzy Match and Prefix-Tree Decoding. Fuzzy matching, used by default in LMLM, offers higher flexibility; prefix-tree decoding is included for ablation. (Changes in parentheses are relative to Fuzzy Match.)

| Model | Model Type | Decoding | FActScore (%) ↑ | FActScore w/o len. penalty ↑ | Facts / Response ↑ |
|---|---|---|---|---|---|
| LLAMA2-176M | STANDARD | - | 10.1 | 11.8 | 23.7 |
| LLAMA2-176M | LMLM | Prefix-tree | 23.0 (−7.6) | 23.9 (−8.5) | 34.6 (+3.2) |
| LLAMA2-176M | LMLM | Fuzzy Match | 30.6 | 32.4 | 31.4 |
| LLAMA2-382M | STANDARD | - | 14.0 | 16.6 | 32.4 |
| LLAMA2-382M | LMLM | Prefix-tree | 23.5 (−8.4) | 31.3 (−1.4) | 28.1 (−5.6) |
| LLAMA2-382M | LMLM | Fuzzy Match | 31.9 | 32.7 | 33.7 |

D.3 Ablation: Unconstrained vs. Constrained Query Generation

We compare fuzzy matching with prefix-tree constrained decoding for generating (entity, relation) queries. In the unconstrained setting, the model freely generates queries, which are then matched against the database using cosine similarity over sentence embeddings. This approach provides flexibility but may result in syntactically invalid or ambiguous queries. In contrast, prefix-tree decoding restricts generation to valid entries encoded in a trie structure, ensuring syntactic correctness and reducing hallucinations. It is also compatible with beam search and nucleus sampling, allowing the model to explore multiple valid paths in the induced knowledge graph. See Appendix A.4 for more details.

As shown in Table 10, fuzzy matching consistently outperforms prefix-tree decoding in our setting, and is therefore used in all reported experiments. While prefix-tree decoding offers stronger structural guarantees, we find its diversity can be overly constrained by a relatively small database. As both the database and model scale, we expect the benefits of structured decoding to become more pronounced.
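A minimal sketch of trie-constrained query decoding using Hugging Face's prefix_allowed_tokens_fn hook; GPT-2 and the two-entry database stand in for LMLM and its real trie, so treat this as an illustration of the mechanism rather than the paper's implementation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Build a trie over the tokenized (entity, relation) entries of the database.
entries = ["Ko Itakura<|db_relationship|> Position",
           "Ko Itakura<|db_relationship|> Birth Date"]
trie = {}
for entry in entries:
    node = trie
    for t in tok.encode(entry):
        node = node.setdefault(t, {})

prompt = "Tell me a bio of Ko Itakura. <|db_entity|>"
prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]

def allowed(batch_id, input_ids):
    node = trie
    for t in input_ids[prompt_len:].tolist():  # walk the trie along the generated suffix
        node = node.get(t)
        if node is None:
            return [tok.eos_token_id]          # dead end: force termination
    return list(node.keys()) or [tok.eos_token_id]

out = model.generate(
    tok(prompt, return_tensors="pt").input_ids,
    prefix_allowed_tokens_fn=allowed,
    max_new_tokens=12,
)
print(tok.decode(out[0, prompt_len:]))  # a valid database entry, by construction
```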
D.4 Qualitative Results

To complement our quantitative results, we present qualitative examples comparing outputs from LMLM, STANDARD, and off-the-shelf models. As shown in Tables 11, 12, and 13, LMLM produces concise, factually grounded responses by leveraging external knowledge, whereas standard and off-the-shelf models often include hallucinated content. These examples illustrate how knowledge offloading enables LMLM to maintain factual precision, further supporting our central finding (Figure 4) that LMLMs achieve more with less by scaling efficiently through externalized knowledge.

Table 11: Qualitative examples of generated outputs from models with STANDARD and LMLM settings.

LLAMA2-176M - LMLM:
Ko Itakura is the <|db_entity|> Ko Itakura<|db_relationship|> Position<|db_return|> center-back, defensive midfielder<|db_end|> center-back and defensive midfielder for <|db_entity|> Ko Itakura<|db_relationship|> Current Team<|db_return|> Japan national team<|db_end|> Japan's senior squad in the <|db_entity|> Ko Itakura<|db_relationship|> League Participation<|db_return|> 2022 FIFA World Cup<|db_end|> 2022 FIFA World Cup. He was born on <|db_entity|> Ko Itakura<|db_relationship|> Birth Date<|db_return|> January 27, 1997<|db_end|> January 27, 1997. Career. On <|db_entity|> Ko Itakura<|db_relationship|> Joined Club<|db_return|> Kawasaki Frontale<|db_end|> July 30, 2015, it was announced that Itakura joined JFL club Kawasaki Frontale from their youth system. In <|db_entity|> Ko Itakura<|db_relationship|> Moved To<|db_return|> Groningen<|db_end|> August 2016 he moved to Groningen. On <|db_entity|> Ko Itakura<|db_relationship|> Debut Date<|db_return|> June 17, 2019<|db_end|> June 17, 2019, Itakura debuted against <|db_entity|> Ko Itakura<|db_relationship|> Debut Opponent<|db_return|> Uruguay<|db_end|> Uruguay in an international friendly match. After playing only one game for Groningen during his debut season, Itakura left the club at the end of the year.

LLAMA2-176M - STANDARD:
Ko Itakura is the name given to an ancient Japanese settlement in what is now part of Shimane Prefecture, Japan. The site was discovered by archaeologists from Kyoto University and has been excavated since 1978. The first written reference to this place appears in "Nihon Kōki", which dates back to the year 604 AD (Kumano no Kan'i), but it seems that there were already settlements at this location before then. In the early Heian period, the area around modern-day Shimonoseki City became known as , meaning "the land between rivers". During the Kamakura period, the region came under control of the Ashikaga clan who built their capital on Mount Hiei near present-day Naka Ward. After the fall of the Ashikaga shogunate, they lost control over much of eastern Japan, including the area surrounding modern-day Shimane prefecture. However, after the Battle of Sekigahara, Tokugawa Ieyasu established himself as lord of most of northern Japan with his domain centered on Matsue Castle. | https://arxiv.org/abs/2505.15962v1 |
This castle later fell into disrepair during the Edo Period, and

Table 12: Qualitative examples of generated outputs from models with STANDARD and LMLM settings.

LLAMA2-382M - LMLM:
Ko Itakura is the <|db_entity|> Ko Itakura<|db_relationship|> Position<|db_return|> center-back, defensive midfielder<|db_end|> center-back and defensive midfielder for <|db_entity|> Ko Itakura<|db_relationship|> Team<|db_return|> Japan national team<|db_end|> Japan's national team. He was born in <|db_entity|> Ko Itakura<|db_relationship|> Birthplace<|db_return|> Yokohama<|db_end|> Yokohama on <|db_entity|> Ko Itakura<|db_relationship|> Birth Date<|db_return|> January 27, 1997<|db_end|> 27th January 1997. His father is from <|db_entity|> Ko Itakura<|db_relationship|> Father's Origin<|db_return|> Japanese<|db_end|> Japanese and his mother is from <|db_entity|> Ko Itakura<|db_relationship|> Mother's Origin<|db_return|> Japanese<|db_end|> Japanese. When he was young, he played baseball but after watching an exhibition match against a professional baseball club, he decided to become a footballer. In <|db_entity|> Ko Itakura<|db_relationship|> Joined Club Year<|db_return|> Kawasaki Frontale<|db_end|> 2013, he joined J1 League side Kawasaki Frontale. However, he could not play many matches behind Shusaku Nishikawa until <|db_entity|> Ko Itakura<|db_relationship|> First Match as Starter<|db_return|> Uruguay<|db_end|> September when he debuted at right back against Uruguay. After that, he became a regular player under manager <|db_entity|> Ko Itakura<|db_relationship|> Manager Under Whom Became Regular Player<|db_return|> Japan national team<|db_end|> Shinji Ono. On <|db_entity|> Ko Itakura<|db_relationship|> Debut Date<|db_return|> June 17, 2019<|db_end|> 17 June 2019, he debuted with Japan national team against Chile during the <|db_entity|> Ko Itakura<|db_relationship|> Competition Debut<|db_return|> Uruguay<|db_end|> 2019 Copa América. Career statistics. "Updated to end of 2018 season". National team career. In August 2016, Itakura was elected to the <|db_entity|> Ko Itakura<|db_relationship|> U-23 Selection<|db_return|> Japan U-20 national team<|db_end|> Japan U-20 national team for the <|db_entity|> Japan U-20 national team<|db_relationship|> Tournament Participation<|db_return|> 2017 U-20 World Cup<|db_end|> 2017 U-20 World Cup. At this tournament, he played all 4 matches as left back of three back defense. In May 2019, he was selected for the <|db_entity|> Ko Itakura<|db_relationship|> Senior Squad Selection<|db_return|>

LLAMA2-382M - STANDARD:
Ko Itakura is the first album by Japanese singer-songwriter Koda Kumi, released on November 19, 2003 under Sony Music Japan label. The album was reissued in March 2004 with two bonus tracks: "Kimi no Uta" and "Tell Me A Bio".

Table 13: Qualitative examples of generated outputs from different off-the-shelf models.

OPENAI/GPT2-124M:
Ko Itakura is the author and editor of The Art Of Anime, which has been published by Kadokawa since 2013 (and was also translated into English in 2015). I'm not sure if I can say much more about this book than what you've said before but it's pretty good! This manga really does have an interesting story arc that makes for some great reading experience as well as being very entertaining to read on its own terms!!
OPENAI/GPT2-355M:
Ko Itakura is an American author, illustrator and animator who has worked on many anime series including: Naruto, One Piece (one of the most popular manga in Japan), Bleach, Dragon Ball Z, Ghost in The Shell, Gintama. He also created characters for various games such as Final Fantasy VII, Pokémon X/Y, or Pokemon Omega Ruby/Alpha Sapphire. The story takes place during World War II when Japanese soldiers are sent | https://arxiv.org/abs/2505.15962v1 |
to fight against Nazi Germany's forces under General Yamamoto at Kure Castle near Tokyo. After defeating them all except one—Genosha-Kaiji from Earth Kingdom fame—they return home with their families but find that there was no sign left behind by those killed fighting alongside them; instead it seems like everything had been destroyed! Now this isn't exactly what you'd expect if your favorite character were actually dead... but then again maybe he wasn't? Well I'm sure we'll get into more details about him later so stay tuned...

PYTHIA-1B:
Ko Itakura is the name of my father." "I don't know him," said the man, "but I can tell you that he was an excellent swordsman and a good fighter. He had many friends in the army who were killed by bandits or robbers when they tried to capture them. They would have been glad if someone like him could help them escape from their captors. But it's not possible for anyone with his skill to be captured alive. So we must find out where he lives so that we may rescue him." The man went on: "He has lived here all these years because there are no other people around. The only thing that keeps him going is the hope that someday some day he will return home. And now he wants to see you again. Please come this way." They walked along the road until they came to a small house which looked as though it might belong to one of the villagers. A woman opened the door and invited them inside. She led them into a room full of bookshelves. There were several large tables covered with maps and charts. On each table sat a young girl about ten years old. Her hair was long and black and her eyes were bright blue. She wore a white dress and carried

LLAMA2-7B:
Ko Itakura is the author of *The Artist's Way* (4.06 avg rating, 12 ratings, 3 reviews), *The Artist's Way Workbook: A Course in Discovering and Recovering Your Creative Self* (5/5(7)). *The Artist's Way* by Julia Cameron – Book Review | The Guardian Julia Cameron has been an active artist for more than thirty years. She was born in London but now lives in New York City with her husband, Martin. They have three children together. In addition to writing books on creativity, she also writes plays and musicals. Her latest book, *Finding Water*, will be published this fall. She is currently working on another novel called *The Last Days Of Summer*. This article contains affiliate links which means that if you click through from here we may receive compensation at no extra cost to yourself! We only recommend products or services that we believe are useful for our readers so please feel free to check them out before making any purchases :) Thank you for reading our blog post today!! If there's one thing I know about myself it's how much I love reading blog posts like these ones because they always make my day better when someone | https://arxiv.org/abs/2505.15962v1 |
Table 14: Example of input and output from the RAG model. The model is prompted using retrieved documents and a user query.

Prompt Template: Answer the question or complete the prompt based on the given document. The following are given documents: [RETRIEVED_DOCUMENTS] \n [USER_QUERY]

Retrieved Documents: "Kang Ji-hwan" is a South Korean actor born Jo Tae-gyu on March 20, 1977. He began in musical theatre before transitioning to television and film, with notable roles in dramas like "Be Strong, Geum-soon!", "Capital Scandal", and "Hong Gil-dong". "90 Days, Time to Love" is a 2006 South Korean TV series starring Kang Ji-hwan, centered on a professor and scriptwriter who rekindle a romance after discovering they are cousins. Kang also produced and starred in the musical "Cafe In", becoming the first Korean actor to perform at Tokyo Globe Theatre.

User Query: Tell me a bio of Kang Ji-hwan.

Model Input: Answer the question or complete the prompt based on the given document. The following are given documents: "Kang Ji-hwan" is a South Korean actor born Jo Tae-gyu on March 20, 1977. He began in musical theatre before transitioning to television and film, with notable roles in dramas like "Be Strong, Geum-soon!", "Capital Scandal", and "Hong Gil-dong". "90 Days, Time to Love" is a 2006 South Korean TV series starring Kang Ji-hwan, centered on a professor and scriptwriter who rekindle a romance after discovering they are cousins. Kang also produced and starred in the musical "Cafe In", becoming the first Korean actor to perform at Tokyo Globe Theatre. Tell me a bio of Kang Ji-hwan.

Model Output (OPENAI/GPT2-355M): Kang Ji-hwan is known primarily for portraying characters with complex emotions like "the heartbreak," which can be seen throughout most drama shows nowadays. His character arc often involves both romantic relationships and difficult situations where people have conflicting feelings about their own lives. For example during this season we see how complicated it is for two men trying desperately not get divorced because there isn't enough time left before graduation! What do you think? Do your favorite actors play these kinds/characters differently than others? Let us know what kind(ies?) I missed out below...

Model Output (LLAMA-3.2-1B-INSTRUCT): Kang Ji-hwan is a renowned South Korean actor known for his versatility and range in various genres. Early Life: Kang Ji-hwan was born on March 20, 1977, in Busan, South Korea. His full name is Jo Tae-gyu. Career Beginnings: He started his career in musical theater before transitioning into film and television. Kang debuted in musical performances in 2001 with stints in productions like "The Rocky Horror Show" and "Grease". Breakthrough Roles: His breakthrough came in 2005 with the drama "Be Strong, Geum-soon!". He later starred in "Capital Scandal", "Hong Gil-dong", and "90 Days, Time to Love", showcasing his ability to play complex characters. Film Career: Kang ventured into cinema with movies like "Rough Cut" (2008) and "My Girlfriend Is an Agent" (2009). Recognition: Throughout his career, Kang has received numerous awards and nominations, solidifying his status in Korean entertainment. | https://arxiv.org/abs/2505.15962v1 |
arXiv:2505.15963v1 [cs.CV] 21 May 2025

OViP: Online Vision-Language Preference Learning

Shujun Liu∗ (Fudan University), Siyuan Wang (University of Southern California), Zejun Li (Fudan University), Jianxiang Wang (ByteDance), Cheng Zeng (ByteDance), Zhongyu Wei (Fudan University)

Abstract

Large vision-language models (LVLMs) remain vulnerable to hallucination, often generating content misaligned with visual inputs. While recent approaches advance multi-modal Direct Preference Optimization (DPO) to mitigate hallucination, they typically rely on predefined or randomly edited negative samples that fail to reflect actual model errors, limiting training efficacy. In this work, we propose an Online Vision-language Preference Learning (OViP) framework that dynamically constructs contrastive training data based on the model's own hallucinated outputs. By identifying semantic differences between sampled response pairs and synthesizing negative images using a diffusion model, OViP generates more relevant supervision signals in real time. This failure-driven training enables adaptive alignment of both textual and visual preferences. Moreover, we refine existing evaluation protocols to better capture the trade-off between hallucination suppression and expressiveness. Experiments on hallucination and general benchmarks demonstrate that OViP effectively reduces hallucinations while preserving core multi-modal capabilities.

1 Introduction

Large vision-language models (LVLMs) [1,2,3,4,5] have demonstrated remarkable performance across a wide range of multi-modal tasks [6,7,8,9] by integrating pre-trained visual encoders with large language models (LLMs) to process and generate language grounded in visual inputs. However, LVLMs continue to struggle with persistent hallucination issues [10,11], often exhibiting incorrect references to visual content [12,13,11]. These errors manifest as misattributing object properties, describing nonexistent entities, or fabricating spatial relationships that do not align with the image. Such inconsistencies undermine the model's faithfulness to the input and hinder further reasoning capabilities, significantly limiting the reliability of LVLMs in real-world applications.

Recent success of Direct Preference Optimization (DPO) [14] in LLM alignment has inspired the exploration of multi-modal DPO to mitigate hallucination in LVLMs [15,16,17,18]. However, early efforts directly extend the original DPO designs from LLMs to LVLMs by constructing preference pairs solely on textual responses given the same image input, primarily focusing on response-side preference optimization and showing limited effectiveness. Recent advancements incorporate additional preference pairs conditioned on varying image inputs while keeping the same response, enabling both visual and textual preference optimization [19,20,21]. This paradigm provides a complementary training signal that encourages the model to attend more closely to visual content. However, prior work mainly relies on existing paired datasets [20] or expert-defined patterns to construct negative image inputs, using techniques such as random cropping [19], noise disruption [22], object removal [23], or human/LLM-generated element-replaced responses for image editing [17].

∗shujuanliu24@m.fudan.edu.cn

Preprint. Under review.
| https://arxiv.org/abs/2505.15963v1 |
Figure 1: Offline training (a) relies on static, predefined datasets and fails to adapt to the model's evolving failure patterns, limiting its ability to address hallucinations effectively. Moreover, neglecting the role of visual input will result in overfitting to language priors. In contrast, OViP (b) combines online preference learning with image-aware training in a unified framework, enabling real-time data construction grounded in model behavior.

These strategies are typically not explicitly tied to model failures, resulting in distribution misalignment between the generated negatives and the model's actual hallucination behavior, thereby offering limited improvement and failing to support adaptive and continual online learning [24].

To address these limitations, we propose a failure-guided negative generation strategy that directly targets self-generated hallucinated responses, enabling the real-time creation of more in-distribution counterexamples through text-to-image generation. Specifically, we sample and filter positive and negative response pairs from the model's textual outputs. Then we employ LLMs to generate an image prompt based on the negative response, particularly emphasizing its differences from the positive response, and subsequently synthesize the corresponding negative image using a diffusion model [25].

Building upon this negative generation strategy, we further introduce an online vision-language preference learning framework (OViP) on both response and image sides, to dynamically construct and learn from preference data during training. Similar to reinforcement learning paradigms, OViP samples LVLMs' outputs throughout the training process and creates both response-centric and image-centric preference pairs in real time. As illustrated in Figure 1, these dual signals supervise the model to generate outputs more faithfully grounded in visual content. By continuously sampling and integrating new preference pairs based on emerging failure patterns, OViP enables adaptive learning that aligns with the evolving output distributions of the LVLMs. This ongoing adaptation mitigates the limitations of static datasets and reduces the reliance on extensive manual curation, resulting in more robust and generalizable vision-language alignment.

We evaluate our framework across diverse multi-modal benchmarks, including hallucination-focused and general benchmarks. Through extensive experiments, we observe a notable trade-off between hallucination suppression and general language-vision capabilities, and some existing hallucination-elimination methods prove to overfit to hallucination-related metrics, achieving high scores without genuinely improving visual grounding. For example, some methods reduce hallucination by making descriptions or responses less informative. To address this, we refine existing evaluation protocols and introduce a more robust evaluation strategy that jointly assesses hallucination and visual-language competence. Experimental results demonstrate that our method consistently achieves improved performance on both the original and our refined metrics when applied to various LVLMs, demonstrating reduced hallucination and preserved general capability.

2 Methodology
In this section, we first provide an overview of the Online Vision-Language Preference Learning (OViP) framework (Section 2.1). We then elaborate the process of constructing the online preference pairs during training (Section 2.2), followed by how to learn from these preference data (Section 2.3).

2.1 Overview

As illustrated in Figure 2, our OViP framework is designed to dynamically construct real-time preference pairs during training, by collecting in-distribution success and failure responses along with their corresponding original and synthesized images. These preference pairs are then integrated into the next training iteration for direct preference optimization on both image and response sides, providing a continuous feedback loop that refines the model's visual grounding and improves its ability to distinguish high-quality outputs from suboptimal ones. | https://arxiv.org/abs/2505.15963v1 |
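A compact sketch of this loop, with every helper stubbed out; only the control flow and the experience-buffer mechanics are meant to mirror the framework, and all names are illustrative placeholders for the machinery of Sections 2.2 and 2.3:

```python
import random

def sample_responses(image, query, k=16):
    return [f"candidate {i}" for i in range(k)]    # stub: stochastic decoding from pi_t

def llm_reward(response, reference):
    return random.uniform(0, 10)                   # stub: external LLM judge G_r

def select_pairs(candidates, rewards):
    return [(candidates[0], candidates[-1])]       # stub: reward-gap filtering (Section 2.2)

def make_negative_image(query, pos, neg):
    return f"<diffusion image describing: {neg}>"  # stub: G_diff prompt + diffusion model

buffer, N = [], 16                                 # experience buffer B, batch size N
stream = [("img.png", "Describe the image.", "a duck floating on the water")]
for image, query, reference in stream:
    candidates = sample_responses(image, query)
    rewards = [llm_reward(c, reference) for c in candidates]
    for pos, neg in select_pairs(candidates, rewards):
        buffer.append((query, image, make_negative_image(query, pos, neg), pos, neg))
    if len(buffer) >= N:                           # update pi_t only once B fills a batch
        batch, buffer = buffer[:N], buffer[N:]
        # compute L_Text + L_Image on `batch` and take a gradient step (Section 2.3)
```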
Figure 2: Overview of OViP. Given an image and a query, we employ the current model πt to generate multiple responses, which are then evaluated by an external LLM with reference to the ground truth. We filter and select response pairs and use a diffusion model to generate corresponding negative images. The collected data are used to update πt. The filtering strategy is detailed in Section 2.2.

Specifically, given an input image I+, an instruction Q, and a reference response A∗, OViP first samples multiple candidate responses using the target model π. These responses are then filtered and selected to form positive and negative pairs (A+, A−). Based on the semantic discrepancies between the response pairs, contrastive images I− are further synthesized to describe the negative responses. Finally, both image-level and response-level contrastive losses are applied to update the target model π. A detailed workflow of the OViP algorithm is provided in Appendix C.

2.2 In-Distribution Preference Data Construction

We adopt training-time inference to dynamically construct richer preference pairs reflecting in-distribution failures concurrently with the training process, expanding limited and static offline datasets. Specifically, our method involves three integral stages: (1) real-time generation of candidate outputs given visual inputs and instructions, (2) quality-aware sampling of informative preference pairs, and (3) inverse construction of input data conditioned directly on these sampled outputs. To ensure training stability, we implement dynamic sampling techniques and an experience buffer.

Real-time Generation of Output Data. At each training step s, given a visual input I+ and its corresponding textual instruction Q, our model πs generates k = 16 candidate responses Ai (i = 1, 2, . . . , k) through stochastic sampling. Each generated response is then individually evaluated by an LLM-based reward function (denoted as Gr), which assigns a numerical reward score to each response, reflecting its alignment with the ground-truth answer A∗:

$$A_i \sim \pi_s(\cdot \mid I^+, Q); \qquad r_i = G_r(A_i, A^*) \qquad (1)$$ | https://arxiv.org/abs/2505.15963v1 |
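A sketch of this generation step with the paper's LLaVA-1.5-7B backbone; the judge wiring, prompt format, and scoring scale are assumptions, while temperature 1.2 with 16 return sequences follows Table 3(b):

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

def sample_candidates(image, query, k=16):
    """Draw k stochastic responses A_i ~ pi_s(. | I+, Q)."""
    inputs = processor(images=image, text=f"USER: <image>\n{query} ASSISTANT:",
                       return_tensors="pt")
    out = model.generate(**inputs, do_sample=True, temperature=1.2,
                         num_return_sequences=k, max_new_tokens=256)
    return processor.batch_decode(out, skip_special_tokens=True)

def reward(candidate, reference, judge):
    """r_i = G_r(A_i, A*): ask an external LLM to grade the candidate, e.g. 0-10."""
    prompt = f"Reference: {reference}\nCandidate: {candidate}\nScore 0-10:"
    return float(judge(prompt))  # `judge` is any LLM-call wrapper (assumption)
```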
Contrasting Response Pair Sampling. To effectively learn from preference data, it is crucial to construct pairs with sufficient contrast in quality [26], yet this is often difficult when the generated responses exhibit similar quality. To address this, we dynamically construct preference pairs by selecting response pairs within each batch that display significant score disparities. Specifically, for each set of candidate responses $\{A_i\}_{i=1}^{k}$ with corresponding rewards $\{r_i\}_{i=1}^{k}$, we compute the standard deviation σr of the reward scores and select pairs (A+, A−) that satisfy

$$|r^+ - r^-| > \max(\delta, 2\sigma_r)$$

where δ is a fixed lower-bound margin. This criterion ensures that only response pairs exhibiting substantial contrast in reward scores are selected, effectively emphasizing informative differences between success and failure responses. Additionally, we enforce quality constraints by requiring that the accepted positive responses meet a predefined quality criterion (i.e., r+ > τpos), while rejected negative responses fall below a specified threshold (i.e., r− < τneg). In cases where all candidate responses collectively perform poorly, we leverage offline ground-truth answers A∗ as positive responses to guide the model learning effectively, a practice reminiscent of the mixed-policy approach in [27].

$$\mathcal{D}_{\text{pair}} = \Big\{ (Q, I^+, A^+, A^-) \;\Big|\; A^+, A^- \in \{A_i\}_{i=1}^{k},\; |r^+ - r^-| > \max(\delta, 2\sigma_r),\; r^+ > \tau_{\text{pos}},\; r^- < \tau_{\text{neg}} \Big\} \qquad (2)$$
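The selection rule of Eq. (2) transcribes almost directly into code; the threshold values below are placeholders, since the paper fixes δ, τpos and τneg elsewhere:

```python
import statistics

def select_pairs(responses, rewards, delta=2.0, tau_pos=7.0, tau_neg=5.0):
    """Keep (A+, A-) pairs satisfying Eq. (2)."""
    tau = max(delta, 2 * statistics.pstdev(rewards))  # population std (assumption)
    return [
        (a_pos, a_neg)
        for a_pos, r_pos in zip(responses, rewards)
        for a_neg, r_neg in zip(responses, rewards)
        if r_pos - r_neg > tau and r_pos > tau_pos and r_neg < tau_neg
    ]

def positive_fallback(rewards, ground_truth, tau_neg=5.0):
    """If all candidates score poorly, fall back to the ground truth A* as positive."""
    return ground_truth if max(rewards) < tau_neg else None
```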
Inverse Negative Image Synthesis. After obtaining response pairs (Q, I+, A+, A−) ∈ Dpair, we synthesize negative images corresponding to negative responses while taking input images as positive. Specifically, we utilize an external LLM (denoted as Gdiff) to identify a set of semantic differences between the positive and negative responses, including entities, attributes, and spatial relations, and then generate a textual description T− = Gdiff(Q, A+, A−) that encapsulates the semantic content of the negative response A−. Subsequently, a diffusion-based image generation model (denoted as Diff) synthesizes a hard negative image as follows:

$$I^- = \text{Diff}(T^-) \qquad (3)$$

This inverse generation process, in which the image is conditioned on the textual output, ensures that the synthesized image captures hallucinated or incorrect content, providing more targeted supervision for hallucination mitigation. Moreover, as the generation is explicitly driven by response-level discrepancies, the resulting negative images exhibit higher semantic relevance and visual specificity.

Dynamic Inference and Experience Buffer. To stabilize batch-wise training while retaining the flexibility of online sampling, we maintain an experience buffer B that stores dynamically constructed contrastive training samples. At each training step, the current model πs performs inference and response sampling, producing contrastive samples that are continuously added to B. This sampling process persists until the accumulated samples reach the predefined batch size N. Once |B| ≥ N, a batch of N samples is retrieved from B for loss computation and gradient updates. The remaining samples in the buffer are preserved for subsequent iterations, ensuring that training proceeds smoothly even under variable sampling yields.

2.3 Image- and Response-Side Preference Optimization

To effectively align both textual and visual modalities during training, we formulate a unified optimization framework that simultaneously considers response-level and image-level preference signals. The overall optimization objective consists of two complementary components. The first is the text DPO loss [14], which guides the model to learn response-level preferences conditioned on the input image and instruction:

$$\mathcal{L}_{\text{Text}}(A^+, A^-; I^+, Q) = -\log\sigma\Big(\beta \cdot \Big(\log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)} - \log\frac{\pi_\theta(A^- \mid I^+, Q)}{\pi_{\text{ref}}(A^- \mid I^+, Q)}\Big)\Big) \qquad (4)$$

In addition to response-level alignment, we incorporate a contrastive objective focused on the visual input. By keeping the query and response fixed, the model is required to learn preferences solely from differences in the visual input. On top of this, to further ensure that the model's output maintains a reasonable and smooth probability distribution, we introduce the image-free term πθ(A|Q) and implement the image-side loss as in [20]:

$$\mathcal{L}_{\text{Image}}(I^+, I^-; Q, A^+) = -\log\sigma\Big(\beta_1 \cdot \Big(\log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)} - \log\frac{\pi_\theta(A^+ \mid Q)}{\pi_{\text{ref}}(A^+ \mid Q)}\Big) + \beta_2 \cdot \Big(\log\frac{\pi_\theta(A^+ \mid Q)}{\pi_{\text{ref}}(A^+ \mid Q)} - \log\frac{\pi_\theta(A^+ \mid I^-, Q)}{\pi_{\text{ref}}(A^+ \mid I^-, Q)}\Big)\Big) \qquad (5)$$

The overall loss function is then defined as:

$$\mathcal{L}_{\text{OViP}}(Q, I^+, I^-, A^+, A^-) = \mathcal{L}_{\text{Text}}(A^+, A^-; I^+, Q) + \mathcal{L}_{\text{Image}}(I^+, I^-; Q, A^+) \qquad (6)$$ | https://arxiv.org/abs/2505.15963v1 |
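A sketch of Eqs. (4)-(6) given summed per-sequence log-probabilities from the policy and the frozen reference model (extracting these log-probs from the LVLM is standard and omitted here); β = β1 = β2 = 0.1 as set in Section 3.1:

```python
import torch.nn.functional as F

def text_dpo_loss(pi_pos, ref_pos, pi_neg, ref_neg, beta=0.1):
    """Eq. (4): response-side DPO conditioned on the positive image I+."""
    return -F.logsigmoid(beta * ((pi_pos - ref_pos) - (pi_neg - ref_neg)))

def image_loss(pi_img_pos, ref_img_pos, pi_noimg, ref_noimg,
               pi_img_neg, ref_img_neg, beta1=0.1, beta2=0.1):
    """Eq. (5): image-side preference with the image-free anchor pi(A+ | Q)."""
    term1 = beta1 * ((pi_img_pos - ref_img_pos) - (pi_noimg - ref_noimg))
    term2 = beta2 * ((pi_noimg - ref_noimg) - (pi_img_neg - ref_img_neg))
    return -F.logsigmoid(term1 + term2)

def ovip_loss(lp):
    """Eq. (6): total objective; `lp` maps names to summed token log-probs (tensors)."""
    return (
        text_dpo_loss(lp["pi_pos"], lp["ref_pos"], lp["pi_neg"], lp["ref_neg"])
        + image_loss(lp["pi_img_pos"], lp["ref_img_pos"],
                     lp["pi_noimg"], lp["ref_noimg"],
                     lp["pi_img_neg"], lp["ref_img_neg"])
    )
```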
3 Experiment

3.1 Experimental Setup

Implementation Details. We conduct our experiments on LLaVA-1.5-7B-hf and LLaVA-1.5-13B-hf [5], with CLIP ViT-L-336px as the visual encoder and Vicuna-7b/13b as the backbone, respectively. The training dataset, sourced from [28], consists of 8,730 samples and 4,013 distinct image–query combinations, including image description, question answering, and some yes/no questions. Both the 7B and 13B models are trained for a single epoch using a cosine learning rate schedule with a global batch size of 16. We use LoRA [29] with a rank of 256 and alpha of 512. We set β = β1 = β2 = 0.1 in Eq. 4 and Eq. 5. Learning rates are 1e-6 for the 7B model and 5e-7 for the 13B model.

Baselines. We compare OViP with SFT, DPO [14], mDPO [19] and GRPO [30]. As the original versions of SFT, DPO and mDPO are offline methods, we additionally implement iterative DPO and GRPO to facilitate a more comprehensive comparison. Furthermore, we evaluate several prior works with publicly available model weights, including HA-DPO [31], HALVA [18], RLAIF-V [16] and OPA-DPO [28]. Among them, our OViP and OPA-DPO use the same original training data, which is a subset of the dataset used by RLAIF-V.

3.2 Evaluation Metrics

The evaluation benchmarks are categorized into two types: hallucination-related evaluation and out-of-domain general capability evaluation.

Hallucination-Related Evaluation. These benchmarks assess LVLMs' hallucination as well as visual competence. We provide illustrative examples in Appendix A.2 to highlight the necessity of improving the metrics for MMHal, AMBER_gen and ObjectHal.

•MMHal-Bench (MMHal) [32] is a model-evaluated question-answering benchmark covering 8 categories and 12 topics. While the original evaluation strategy uses GPT-4 to judge model responses, the use of a text-only model will introduce considerable judging-time hallucinations and errors (as pointed out by [33]). We follow [33] and use gpt-4o-2024-05-13 for evaluation.

•AMBER generative (AMB gen) [34] is a judging-model-free benchmark for the image description task, comprising 1,004 samples. The metric Chair measures the object-level hallucination rate, complementing the precision of objects mentioned in the model's descriptions, while Cover indicates the recall of objects. We observe a noticeable trade-off between these two metrics across various methods, where improvements in one often come at the expense of the other. To provide a more balanced and overall assessment, we introduce a new F1 score calculated as the harmonic mean of Chair and Cover (see the code sketch below).

•Object HalBench (ObjectHal) [35] evaluates object-level completeness and hallucination rates using off-the-shelf LLMs. The generation prompts are augmented from [15]. Chair_r denotes the response-level hallucination rate. Like AMBER_gen, we also introduce an object-level F1 metric that comprehensively measures the balance between hallucination and object coverage. Object extraction is performed using gpt-4o-2024-05-13.

•LLaVA-Bench-in-the-Wild (LV) [4] evaluates models' abilities in visual understanding, reasoning, and instruction following, using 60 open-ended questions grounded in 24 diverse images from real-world and abstract scenarios. The evaluation is conducted using gpt-4o-2024-05-13. | https://arxiv.org/abs/2505.15963v1 |
•AMBER discriminative (AMBER dis) [34] includes 14,216 "Yes/No" questions regarding objects in images. We use the F1-score as its metric.

To aggregate performance across five hallucination-related benchmarks, we introduce the Hallucination Reduction Index (HRI) as a unified measure of overall improvement. Specifically, for each benchmark, we compute the performance change relative to the baseline and apply proportional rescaling to map the values approximately within the range [0, 2]. The final HRI score is obtained by summing the rescaled improvements from all benchmarks. As a dynamic metric that depends on actual performance, detailed calculations for each specific HRI are provided in Appendix A.1.

Out-of-domain General Capability Evaluation. In addition to the trade-off between hallucination reduction and completeness, we also observe a pronounced trade-off between hallucination reduction and general visual capability, i.e., hallucination-targeted models often exhibit performance declines on non-hallucination benchmarks. To assess this impact, we further evaluate LVLMs' capabilities on general benchmarks, including RealworldQA [36], TextVQA [37], CVBench [38], and MMStar [39]. We aggregate the results across these benchmarks and compute the Accuracy Difference, serving as a unified metric to quantify overall performance variation after training. | https://arxiv.org/abs/2505.15963v1 |
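The generative F1 above is straightforward to compute; since Chair is a hallucination rate, we take 100 − Chair as the precision term, an assumption that reproduces the reported values (e.g., Chair 7.1 and Cover 50.0 give F1 = 65.01 for the LLaVA-1.5-7B baseline in Table 1):

```python
def generative_f1(chair: float, cover: float) -> float:
    """Harmonic mean of object precision (100 - Chair) and object recall (Cover)."""
    precision = 100.0 - chair  # Chair is the object-level hallucination rate
    return 2 * precision * cover / (precision + cover)

assert round(generative_f1(7.1, 50.0), 2) == 65.01  # LLaVA-1.5-7B baseline, AMB_gen
```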
Table 1: Main results for OViP and other methods across different benchmarks. The five primary metrics (AMB gen F1, MMHal Score, ObjectHal F1, LV Score, AMB dis F1) give the balanced, overall result for each benchmark. HRI (Hallucination Reduction Index) is the average improvement across five benchmarks. Acc Dif is the total accuracy change across TextVQA [37], RealworldQA [36], MMStar [39] and CVBench [38]. GPT4-V (†) results are cited from [40][34][41] for reference. ‡ indicates the use of the original evaluation strategy. ∗ denotes methods with publicly released model weights trained on their own datasets, which we directly evaluate without re-training. ♯ signifies methods trained on datasets that are the same as or larger than ours, allowing for direct comparison. "2-ep" specifies results obtained after two epochs of training. We separate offline methods from non-offline methods for clearer comparison.

| Base model | Method | AMB gen Chair↓ | AMB gen Cover↑ | AMB gen F1↑ | MMHal Score↑ | ObjectHal Chair_r↓ | ObjectHal F1↑ | LV Score↑ | AMB dis F1↑ | HRI | Acc Dif |
|---|---|---|---|---|---|---|---|---|---|---|---|
| - | GPT4-V† | 4.6 | 67.1 | 78.8 | 3.49‡ | 13.6 | - | 95.3 | 87.4 | - | - |
| LLaVA-1.5-7B | Baseline | 7.1 | 50.0 | 65.01 | 1.90 | 51.38 | 72.40 | 57.20 | 85.5 | - | - |
| LLaVA-1.5-7B | HA-DPO∗ [31] | 5.6 | 49.4 | 64.86 | 1.95 | 37.15 | 73.81 | 57.30 | 85.4 | 1.52 | -11.59 |
| LLaVA-1.5-7B | HALVA∗ [18] | 5.7 | 52.9 | 67.78 | 2.12 | 43.40 | 76.01 | 58.60 | 86.5 | 9.08 | -7.36 |
| LLaVA-1.5-7B | RLAIF-V∗♯ [16] | 3.1 | 49.8 | 65.79 | 2.54 | 9.35 | 69.78 | 58.90 | 86.4 | 1.37 | -6.74 |
| LLaVA-1.5-7B | OPA-DPO∗♯ [28] | 2.4 | 45.2 | 61.79 | 2.78 | 6.37 | 63.26 | 64.80 | 86.7 | -5.60 | -11.82 |
| LLaVA-1.5-7B | SFT | 3.5 | 50.6 | 66.39 | 2.52 | 20.60 | 70.30 | 52.20 | 86.1 | -1.47 | -8.07 |
| LLaVA-1.5-7B | DPO | 3.7 | 48.9 | 64.86 | 2.35 | 26.60 | 71.95 | 56.70 | 86.8 | 1.65 | -3.86 |
| LLaVA-1.5-7B | mDPO | 3.4 | 48.6 | 64.67 | 2.55 | 25.45 | 73.92 | 55.80 | 86.1 | 2.99 | -3.05 |
| LLaVA-1.5-7B | DPO iterative | 3.9 | 48.7 | 64.64 | 2.32 | 27.11 | 72.33 | 56.40 | 86.5 | 1.31 | -2.98 |
| LLaVA-1.5-7B | GRPO 2ep | 4.8 | 51.2 | 66.59 | 2.45 | 34.98 | 73.83 | 58.70 | 86.8 | 6.75 | -3.83 |
| LLaVA-1.5-7B | OViP | 4.0 | 51.1 | 66.70 | 2.52 | 33.22 | 73.50 | 63.10 | 87.3 | 9.58 | +0.88 |
| LLaVA-1.5-7B | OViP 2ep | 4.0 | 51.6 | 67.12 | 2.65 | 29.54 | 74.18 | 60.90 | 87.4 | 10.00 | -1.01 |
| LLaVA-1.5-13B | Baseline | 6.5 | 51.0 | 65.99 | 2.24 | 46.18 | 76.73 | 62.60 | 89.1 | - | - |
| LLaVA-1.5-13B | HALVA∗ [18] | 6.0 | 52.2 | 67.12 | 2.45 | 35.07 | 77.75 | 61.70 | 90.0 | 4.22 | -5.45 |
| LLaVA-1.5-13B | OPA-DPO∗♯ [28] | 2.8 | 47.8 | 64.08 | 2.88 | 5.88 | 64.46 | 64.70 | 89.3 | -7.05 | -15.25 |
| LLaVA-1.5-13B | SFT | 4.5 | 50.0 | 65.64 | 2.38 | 31.21 | 75.81 | 64.00 | 89.9 | 1.79 | -1.24 |
| LLaVA-1.5-13B | DPO | 3.6 | 50.6 | 66.37 | 2.53 | 25.00 | 75.00 | 65.30 | 89.6 | 2.42 | +0.12 |
| LLaVA-1.5-13B | mDPO | 3.9 | 50.1 | 65.86 | 2.51 | 21.79 | 75.35 | 64.50 | 89.5 | 1.78 | -1.12 |
| LLaVA-1.5-13B | GRPO 2ep | 3.8 | 52.4 | 67.84 | 2.38 | 23.76 | 75.55 | 66.70 | 90.4 | 4.96 | -1.48 |
| LLaVA-1.5-13B | OViP | 4.4 | 53.1 | 68.28 | 2.58 | 36.30 | 76.52 | 64.60 | 89.7 | 5.25 | +0.85 |
| LLaVA-1.5-13B | OViP 2ep | 3.6 | 53.7 | 68.98 | 2.57 | 28.62 | 76.75 | 67.90 | 90.2 | 8.02 | +2.02 |

3.3 Main Results

Table 1 presents results of OViP and other methods across multiple benchmarks on various LVLM backbones. OViP consistently achieves significant improvements across most primary metrics while effectively preserving the model's general visual capabilities, in contrast to most other methods, which exhibit varying degrees of degradation on general benchmarks. Moreover, OViP's performance gains become increasingly pronounced with larger LVLM backbones, highlighting its scalability.

We further observe that OViP can continuously learn from itself to enhance performance, with capabilities improving progressively over additional training epochs, demonstrating the self-improving nature of its online learning approach. Notably, even with just one training epoch, OViP surpasses HALVA and the two-epoch GRPO, both of which utilize twice the training data, underscoring the exceptional training efficiency of our approach.

A critical phenomenon often overlooked in previous work [17,28,21,40,19,15,16] is that offline hallucination methods tend to make model outputs less informative, as evidenced by the declined Cover metric in AMB gen. However, information omission is also a form of hallucination, especially in image captioning tasks, where omitting entities in the image is effectively equivalent to the model assuming that those entities do not exist. While many prior methods reduce conventional hallucinations, they inadvertently exacerbate omission-based hallucinations. In contrast, OViP achieves a more balanced trade-off between informativeness and hallucination reduction.

3.4 Ablation Study

The Impact of Loss Functions. We evaluated various combinations of loss functions for online preference learning in hallucination mitigation to derive the final formulation in Equation 6. Our ablation study examines the effectiveness of different training objectives, including text-side (LText), image-side and auxiliary losses. Specifically for image-side losses, we examine our image loss LImage alongside two variants, L^base_Image and LImage−Sym. For the auxiliary loss, we compare the anchor loss proposed by [19] and the bidirectional anchor loss, which enforces the probability of the positive response to increase and that of the negative one to decrease. Detailed formulations are provided in Appendix B.1.

Table 2: Results of different loss functions on hallucination-related benchmarks, with key metrics shown. The OViP loss is LText + LImage. "−" means the removal of the corresponding loss from OViP. The first four result columns are for training from scratch; the last four are for iterative training.

| Loss Functions | AMB gen | MMHal | AMB dis | HRI | ObjectHal | MMHal | AMB dis | HRI |
|---|---|---|---|---|---|---|---|---|
| OViP | 65.38 | 2.35 | 86.6 | 4.32 | 74.14 | 2.70 | 85.7 | 7.94 |
| − LText | 66.61 | 2.12 | 85.4 | 4.23 | 74.01 | 2.60 | 86.1 | 7.71 |
| − LImage | 63.58 | 2.06 | 85.9 | -2.29 | 72.63 | 2.40 | 86.6 | 4.56 |
| LText + L^base_Image | 65.10 | 2.17 | 86.2 | 4.08 | 73.45 | 2.58 | 86.3 | 7.50 |
| LImage−Sym | 65.54 | 2.09 | 85.6 | -0.32 | 73.86 | 2.41 | 86.3 | 6.57 |

We first investigate the effectiveness of text-side and image-side losses, as well as different variants of image-side losses.
| https://arxiv.org/abs/2505.15963v1 |
Specifically, we conduct experiments under two training regimes: (1) training from scratch, and (2) iterative training initialized with a DPO-pretrained model using the existing dataset, to ablate these losses on top of differently initialized models with varying capabilities. As shown in Table 2, the full OViP loss consistently outperforms all variants under both training regimes, with LImage and LText + L^base_Image performing comparably. However, when training from scratch, LImage exhibits inconsistent behavior: it significantly improves object-level metrics such as AMB gen, but underperforms on sequence-level metrics such as MMHal and Yes/No questions. In contrast, the text-only loss LText consistently performs the worst, highlighting the necessity of incorporating image-side supervision for mitigating hallucination effectively.

Figure 3: Effect of auxiliary loss on text-side and image-side losses. HRI measures hallucination reduction, while General Score reflects general performance change. Marker color indicates the primary loss function used, and marker shape denotes the usage of auxiliary loss.

Figure 4: Results of offline and online training strategies with DPO and OViP. Cover measures the informativeness of model responses on the AMBER generative benchmark.

We further analyze the impact of auxiliary loss in Figure 3. Contrary to the findings in [19], we find that incorporating anchor loss consistently reduces general capability and increases hallucinations across all other loss combinations. Moreover, while applying the bi-directional anchor loss slightly improves general capabilities, it does not necessarily enhance hallucination mitigation. Among them, the OViP loss without any auxiliary loss achieves the best performance on both hallucination reduction and general capabilities. We therefore adopt the OViP loss as our default training loss function.

Online vs. Offline. Figure 4 demonstrates that online training significantly outperforms its offline counterpart. For the offline setting, we construct training data by sampling and filtering model outputs prior to training. Whether using DPO or our proposed OViP loss, online training consistently outperforms offline training by a margin of no less than 4 points on the HRI metric within just one epoch. This improvement is not limited to the aggregated metric: across every individual benchmark and each corresponding metric, online training yields more stable and superior performance. Detailed results are provided in Appendix B.3. Another notable observation is that online training also enhances the informativeness of model outputs. Even when trained solely with DPO, the Cover score remains above 50. In contrast, previous studies [16, 28, 21] using similar datasets typically exhibit a drop in this aspect.

Table 3: Experiments on the training process.

(a) Token-wise average log-probability:

| | Base | DPO_Off | DPO_On |
|---|---|---|---|
| IND | -0.90 | -0.97 | -1.08 |
| OOD | -1.23 | -1.17 | -1.17 |

(b) Sampling with temp. = 1.2, num_return_sequences = 16:

| Steps | 0 | 10 | 100 | 500 |
|---|---|---|---|---|
| Num. | 14.4 | 15.2 | 15.3 | 15.5 |

(c) Negative image construction:

| | R.Crop | Offline | Online |
|---|---|---|---|
| HRI | 9.76 | 9.23 | 10.88 |
| General | 0.34 | -0.88 | 1.44 |
4 Further Study

Figure 5: Results using different training strategies. Offline represents training with an existing dataset. Off-Policy means training with sampled data, and Iterative means the dataset of epoch 2 comes from model sampling. | https://arxiv.org/abs/2505.15963v1 |
Figure 6: Comparison of response quality distributions after training with different methods. The x-axis represents the semantic quality of model responses, with the leftmost region indicating severe hallucinations. The y-axis denotes the probability density over sampled responses. A rightward shift in the distribution corresponds to higher overall response quality and reduced hallucination.

Online Learners Get More. Figure 5 illustrates how hallucination-related metrics evolve during iterative training under different optimization strategies. A key finding is that all non-online methods exhibit varying degrees of overfitting in multi-round training, as evidenced by a decline in the HRI. In contrast, only online strategies consistently yield stable improvements. Notably, training with external datasets leads to severe overfitting in the second round. Some prior works (such as [28,21]) adopt at least two rounds of training on small, static datasets. While they report significant gains on certain metrics—such as AMB gen Chair and ObjectHal Chair_r—a more holistic view reveals substantial overfitting. In Table 1, HA-DPO, RLAIF-V, and OPA-DPO fall into this category.

Both Online DPO and Off-Policy DPO exhibit an initial drop in performance; meanwhile, GRPO and OViP show relatively slow improvement during the early stages of training. This phenomenon is attributed to the model's initially skewed output distribution, which has low perplexity over the few responses it generates (Table 3(a)(b)). At this stage, the model tends to forget more than it learns, leading to a temporary decline in performance. As training progresses and the distribution gradually smooths out, the model begins to receive more diverse and informative supervision, allowing it to shift toward higher-score regions. In contrast, Off-Policy DPO continues to rely on fixed data sampled from the early-stage distribution. As the model evolves, this static data becomes increasingly misaligned with the model's current distribution, leading to ineffective learning due to the mismatch between the training samples and the model's evolving behavior.

Learning Dynamics. To investigate output distribution shifts, we focus on hard queries in the test set and compare the distributions of sampled responses across models. Our key findings are: (1) Online learning enables both hallucination suppression and promotion of high-quality responses, effectively optimizing both ends of the distribution; (2) Image contrastive learning leads to overall quality improvement. Moreover, as shown in Table 3(c), the quality of negative image samples also matters: hard negatives constructed via online strategies yield the best training results.

5 Related Work

5.1 LVLM Hallucination

A wide range of prior work has explored synthetic data construction for mitigating hallucination in VLMs, which can be broadly categorized into image-related synthesis and text-only synthesis. On the image side, several approaches leverage entity extraction and masking to perform targeted image editing, generating visually similar but semantically distinct counterfactuals [17,23].
In contrast, HallusionBench [42] adopts a manual approach, carefully | https://arxiv.org/abs/2505.15963v1 |
crafting counterfactual images to probe specific failure modes. Other works take a generative perspective: SynthVLM [43] and SynthEmbedding [44] utilize off-the-shelf models to synthesize new images or directly generate image embeddings for hallucination-aware training. Meanwhile, a larger body of research focuses on text-side data augmentation. VoCoT [45] introduces new prompting patterns and response types to generate hallucination-prone QA data at scale. Other works such as [22,18,33] introduce noise through perturbation, masking, or controlled corruption to simulate erroneous responses. More recent approaches [40,15] aim to detect and correct hallucinated content at varying levels of granularity, from token-level edits to full-sequence rewrites. These efforts significantly improve the diversity and coverage of supervision signals available for training hallucination-robust VLMs.

5.2 Allocating Computation to Training Sample Construction

Recent research has increasingly adopted the paradigm of allocating additional computation during training to obtain better training samples. Several studies utilize reinforcement learning with human or AI-generated feedback to guide VLM outputs. RLHF-V [15] leverages fine-grained human annotations to correct hallucinated content, while RLAIF-V [16] replaces human labels with feedback from ensembles of open-source models, significantly reducing annotation overhead. Similarly, OPA-DPO [28] employs an on-policy editing step prior to DPO, aligning training samples closely with model predictions to enhance data efficiency. Active learning methods further embody this approach, selectively querying uncertain or diverse samples to maximize training informativeness [46], while CLIP-based methods dynamically filter self-generated samples for high-quality training pairs [47]. Other methods integrate auxiliary reward models or evaluators during training, providing continuous and adaptive feedback loops [32,48]. Finally, recent approaches incorporate reasoning or editing mechanisms directly into training, using iterative self-feedback or generative data augmentation techniques to dynamically refine model outputs [31,49]. These strategies improve model alignment and factuality by enriching the quality and relevance of supervision signals during training.

6 Conclusion

In this work, we propose the Online Vision-language Preference Learning (OViP) framework to efficiently address the pervasive hallucination problem in LVLMs. By integrating online preference learning with image-aware training, OViP enables real-time construction of high-quality contrastive data during training, without relying on a teacher LVLM. Furthermore, to better assess the trade-offs between hallucination reduction and overall performance, we refine and extend existing evaluation protocols, proposing a more comprehensive metric suite. Experimental results demonstrate that OViP significantly outperforms prior offline/online training approaches, achieving substantial hallucination reduction while preserving general vision-language capabilities. Our investigation into training-time dynamics also sheds light on the underlying mechanisms behind OViP's effectiveness.

References

[1] J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds et al., "Flamingo: a visual language model for few-shot learning," Advances in Neural Information Processing Systems, vol. 35, pp. 23716–23736, 2022.

[2] J. Chen, D. Zhu, X. Shen, X. Li, Z. Liu, P.
Zhang, R. Krishnamoorthi, V. Chandra, Y. Xiong, and M. Elhoseiny, "Minigpt-v2: large language model as a unified interface for vision-language multi-task learning," arXiv preprint arXiv:2310.09478, 2023.

[3] L. Chen, J. Li, X. Dong, P. Zhang, C. He, | https://arxiv.org/abs/2505.15963v1 |