as shown in Figure 9.

System Prompt: A conversation between User and Assistant. The user asks a question, and the Assistant solves it step by step. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer, i.e., <reasoning process here> ... the answer is: \boxed{answer}.

Figure 9: The system prompt used for all experiments.

C Case Study

To further analyze the improvements of TAPO over conventional GRPO, we compare their reasoning processes on representative mathematical problems from the MATH dataset in Figure 10 and Figure 11.

In Figure 10, we observe how both methods approach an arithmetic mean problem. GRPO produces a solution with scattered notation and repetitive statements, particularly evident in its final steps, where it repeatedly states "The problem involves ... final answer is 46." In contrast, TAPO demonstrates a more structured approach by explicitly introducing a "step by step" thinking process. The thought-augmented process methodically builds upon each reasoning step, clearly identifying the relationship between the original sum (984), the highest score (98), and the remaining scores to derive the lowest score (46). This structured approach results in a more readable and logically coherent solution.

Figure 11 presents a more challenging problem involving complex numbers and Vieta's formulas. Here, the limitations of GRPO become more pronounced. While GRPO initially applies the correct formula, its reasoning process deteriorates into incoherent text fragments and coding artifacts (e.g., "Tre localVEC?" and various non-mathematical expressions). This demonstrates how GRPO struggles to maintain coherent reasoning on complex problems. In contrast, TAPO maintains its structured approach throughout, clearly stating the problem context, applying Vieta's formulas with proper explanation, and presenting a clean, coherent solution without extraneous text or errors.

D Future Directions

While TAPO demonstrates significant improvements in reasoning performance, our current work primarily focuses on mathematical reasoning and knowledge-intensive reasoning tasks. In future research, we plan to explore the application of thought-augmented reinforcement learning to more diverse tasks and domains, particularly retrieval-augmented generation and multimodal reasoning scenarios that integrate visual and textual information. In summary, TAPO establishes a strong foundation for effectively integrating external knowledge with reinforcement learning for enhanced reasoning capabilities across various AI applications.

[Figure 10: Comparison of GRPO and TAPO on a simple algebra problem from the MATH dataset. Question: "The arithmetic mean of 12 scores is 82. When the highest and lowest scores are removed, the new mean becomes 84. If the highest of the 12 scores is 98, what is the lowest score?" The GRPO panel degenerates into scattered notation and stray tokens before repeatedly restating the final answer 46; the TAPO panel walks through a numbered, step-by-step derivation, checks what the question asks, and answers 46.]
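For reference, the derivation compressed into the figure's flattened math reconstructs as:

$$\sum_{i=1}^{12} x_i = 12 \times 82 = 984, \qquad 984 - x_{\max} - x_{\min} = 10 \times 84 = 840,$$

$$x_{\max} = 98 \ \Longrightarrow\ x_{\min} = 984 - 98 - 840 = 46.$$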
[Figure 11: Comparison of GRPO and TAPO on a difficult algebra problem from the MATH dataset. Question: "Let $a, b, c$ be distinct complex numbers such that $2a^3+7a^2-8a+5=0$, $2b^3+7b^2-8b+5=0$, and $2c^3+7c^2-8c+5=0$. Compute the value of the product $abc$." Both panels note that $a$, $b$, $c$ are the roots of $2x^3+7x^2-8x+5=0$ and apply Vieta's formulas; the GRPO panel then collapses into incoherent fragments ("Tre localVEC?", stray markup), while the TAPO panel gives a clean derivation ending in $abc = -\frac{5}{2}$.]
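Both panels hinge on the same Vieta step, which reconstructs as follows (coefficients renamed to Greek letters here to avoid clashing with the roots $a, b, c$):

For a cubic $\alpha x^3 + \beta x^2 + \gamma x + \delta = 0$, the product of the three roots is $-\delta/\alpha$. With $a, b, c$ the roots of $2x^3 + 7x^2 - 8x + 5 = 0$,

$$abc = -\frac{\delta}{\alpha} = -\frac{5}{2}.$$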
Can Large Language Models be Effective Online Opinion Miners?

Ryang Heo, Yongsik Seo, Junseong Lee, Dongha Lee†
Yonsei University
{ryang1119, ysseo, brulee, donalee}@yonsei.ac.kr

Abstract

The surge of user-generated online content presents a wealth of insights into customer preferences and market trends. However, the highly diverse, complex, and context-rich nature of such content poses significant challenges to traditional opinion mining approaches. To address this, we introduce the Online Opinion Mining Benchmark (OOMB), a novel dataset and evaluation protocol designed to assess the ability of large language models (LLMs) to mine opinions effectively from diverse and intricate online environments. OOMB provides extensive (entity, feature, opinion) tuple annotations and a comprehensive opinion-centric summary that highlights key opinion topics within each content, thereby enabling the evaluation of both the extractive and abstractive capabilities of models. Through our proposed benchmark, we conduct a comprehensive analysis of which aspects remain challenging and where LLMs exhibit adaptability, to explore whether they can effectively serve as opinion miners in realistic online scenarios. This study lays the foundation for LLM-based opinion mining and discusses directions for future research in this field.¹

1 Introduction

The explosive growth of user-generated content has fundamentally transformed marketing strategies and business decision-making. Companies now analyze vast amounts of user opinions scattered across platforms such as social media, review sites, and online communities to understand how consumers truly perceive their products and services (Rahayu et al., 2021; Chen et al., 2022). As a result, opinion mining—the task of extracting and analyzing opinions from online text—has become a core capability in today's data-driven landscape.

† Corresponding author
¹ https://github.com/ryang1119/Online-Opinion-Mining

[Figure 1: Existing opinion mining scenarios assume a simple input structure (upper), e.g., a single review sentence yielding tuples such as (2024 Tesla Model 3, ride, incredibly smooth). In contrast, our study facilitates both extractive and abstractive opinion mining in complex, multi-threaded web discussions, enabling flexible and context-aware mining (lower).]

Existing opinion mining approaches have primarily focused on identifying and extracting opinion expressions or spans within text (İrsoy and Cardie, 2014; Xia et al., 2021; Li et al., 2022; Zhang et al., 2022a,b).
Over time, these methods have evolved to incorporate sentiment analysis (Zhao et al., 2020; Zhang et al., 2021; Seo et al., 2024), allowing for
a deeper understanding of user preferences.

| Benchmark | #Test Examples | Avg #Tokens | Avg #Tuples | Tuple Components | Content Types | Ext. | Abs. |
|---|---|---|---|---|---|---|---|
| ASTE (Peng et al., 2019) | 1,468 | 15.7 | 1.7 | (a, o, s) | Reviews | ✓ | ✗ |
| ACOS (Cai et al., 2021) | 1,399 | 15.2 | 1.5 | (a, c, o, s) | Reviews | ✓ | ✗ |
| ASQP (Zhang et al., 2021) | 1,081 | 14.9 | 1.5 | (a, c, o, s) | Reviews | ✓ | ✗ |
| DiaASQ-EN (Li et al., 2022) | 100 | 179.7 | 8.5 | (t, a, o, s) | Conversation | ✓ | ✗ |
| OOMB (Ours) | 600 | 648.7 | 14.4 | (e, f, o) | Reviews, Blogs, Conversation | ✓ | ✓ |

Table 1: A comparison of our benchmark to existing opinion-related benchmarks. Each tuple component represents the following: a: aspect, c: aspect category, o: opinion, s: sentiment, t: target, e: entity, and f: feature.

Despite these advances, existing approaches still face two critical limitations. (1) Underrepresentation of real-world input complexity: Previous benchmarks predominantly focus on single-sentence reviews (Peng et al., 2019; Cai et al., 2021; Zhang et al., 2021) or preprocessed dialogue scenarios (Li et al., 2022). However, in real-world online environments, user opinions appear in far more complex and structurally diverse formats. In practice, opinion streams span multi-party threaded discussions, long-form narratives with interleaved pros/cons, and domain-specific markers (e.g., emojis, slang, abbreviations) that introduce implicit sentiment signals (Figure 1, lower). The absence of a setting that comprehensively captures these realistic and diverse forms of opinion expression makes it difficult to assess under what conditions, and to what extent, large language models (LLMs) can effectively perform opinion mining. This gap poses a significant challenge to evaluating the utility of LLMs and understanding their applicability to real-world applications. (2) Confinement to extraction-centric tasks: As mentioned earlier, most prior tasks have focused on extracting opinion spans or structured tuples from input texts. However, this extraction-centric approach can excessively simplify or compress the nuanced contextual information and emotional nuances that are essential for strategic decision-making. For instance, the tuple ("Tesla Model 3", "interior", "larger than the previous model") fails to capture critical contextual background—such as whether the user inspected the vehicle in person or harbored an implicit purchase intent. In real-world industry settings, marketers and product teams are more interested in cohesive, topic-level insights than in isolated fragments of information (Yuan et al., 2015; Santos and Gonçalves, 2021; Han et al., 2023b). These observations highlight the need to explore opinion mining paradigms that move beyond raw extraction and aim to preserve the emotions, contextual subtleties, and user intent embedded in real-world discourse.

To address these challenges, we propose the Online Opinion Mining Benchmark, named OOMB, a novel benchmark specifically tailored to evaluate the opinion mining capabilities of LLMs across realistic, complex, and diverse online scenarios. Unlike previous datasets, OOMB incorporates content from structurally distinct platforms—including blogs, review sites, Reddit threads, and YouTube comments—capturing long-form content and single- and multi-user interactions representative of authentic online discussions. Each content instance is enriched with dual-layer annotations: (1) structured sets
of (entity, feature, opinion) tuples reflecting explicit user perspectives, and (2) context-rich, opinion-centric summaries organized around key thematic insights from a marketer's viewpoint.

Building upon this benchmark, we introduce two complementary tasks: (1) Feature-centric opinion extraction (FOE) evaluates whether LLMs can accurately extract structured opinions from online content, and (2) Opinion-centric insight generation (OIG) assesses whether LLMs can mine high-level topics and insights from user opinions expressed in online content. We conduct extensive experiments on ten proprietary and open-source LLMs to provide an in-depth analysis of their respective capabilities and limitations. The evaluation results demonstrate that while the models struggle with extracting structured opinions from online content, they exhibit relatively strong adaptability in synthesizing diverse user opinions into meaningful insights. Based on these findings, we discuss key takeaways and potential future directions to further advance the field of opinion mining.

Specifically, our contributions are as follows:

• We present OOMB, a realistic and richly annotated benchmark that evaluates LLMs across structurally diverse online content using both structured tuples and insight-oriented summaries.
• We define two complementary tasks—FOE and OIG—to jointly assess the extraction and abstraction capabilities of LLMs on diverse online content.
• We extensively evaluate both proprietary and open-source LLMs, highlighting their strengths, limitations, and opportunities for further work.

[Figure 2: The overview of our OOMB benchmark construction pipeline. User-generated online data is collected from blogs, review sites, Reddit, and YouTube; tuples are annotated via a human-in-the-loop process (LLM proposals, then human validation and refinement), and summaries are annotated by selecting, validating, and refining the best of several LLM-generated candidates.]

2 OOMB Benchmark

In this section, we introduce the construction of OOMB, a benchmark designed to effectively represent real-world online content. Figure 2 illustrates the overall construction pipeline.
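Concretely, the pipeline yields one dual-layer record per content instance. A minimal sketch of such a record in Python (field names are illustrative, not the released dataset schema):

```python
from dataclasses import dataclass

# Illustrative sketch of one OOMB record; names are hypothetical,
# not the released dataset schema.
@dataclass
class OpinionTuple:
    entity: str   # e.g., "Volvo XC90"
    feature: str  # e.g., "interior design"; "NULL" if implicit
    opinion: str  # e.g., "luxurious"

@dataclass
class OOMBRecord:
    content: str                # raw user-generated text
    source: str                 # "blog" | "review site" | "reddit" | "youtube"
    tuples: list[OpinionTuple]  # layer 1: structured opinions
    summary: str                # layer 2: 3-to-5-line opinion-centric summary
```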
2.1 Data Collection

To reflect realistic user-generated content and a wide range of online structures, we collect textual data from four
different sources: Blog, Review Site, Reddit, and YouTube. Blogs and review sites provide detailed long-form posts and specific car reviews, while Reddit and YouTube capture multi-threaded and single-threaded discussions, respectively. Specifically, we curate the sources of each website from Feedspot,² a platform that organizes and manages content across various topics. More details about our data collection process and sources are provided in Appendix A.1.

² https://www.feedspot.com/

2.2 Data Annotation

For each piece of collected user-generated content, we construct a dual-layer annotation consisting of both structured opinion tuples and free-form opinion-centric summaries. Following recent studies demonstrating that LLMs with advanced reasoning capabilities can serve as effective tools for data annotation (He et al., 2024; Tan et al., 2024), we adopt a human-in-the-loop process in which an LLM is first used as the initial annotator, and human annotators then verify and refine its outputs to ensure high-quality, reliable labels. The detailed annotation process is described in Appendix A.2.

Entity-feature-opinion tuple We annotate each content with structured (entity, feature, opinion) tuples that capture user perspectives. In contrast to the commonly used (aspect, category, opinion, sentiment) schema, this design more closely reflects how real users express opinions—typically without explicit category or sentiment labels, but rather through direct mentions of entity features. Specifically, entity refers to the specific subject or object under discussion in the content (e.g., "Volvo XC90"); feature indicates a characteristic, attribute, or component of the entity that a user mentions or evaluates (e.g., "interior design"); and opinion represents the subjective or objective judgment, reaction, experience, evaluation, or feedback regarding a feature (e.g., "luxurious"). If a feature is implicit and does not appear explicitly in the text, it is labeled as "NULL", following (Cai et al., 2021). In all other scenarios, each component of the tuple is assumed to be an explicitly mentioned span in the content.

Tuple annotation To maximize the coverage and diversity of (entity, feature, opinion) tuple sets from each input content, we perform five rounds of zero-shot prompting using GPT-4o-mini. We then take the union of all generated tuples and remove duplicates to form a distinct preliminary tuple set (a code sketch of this union-and-dedupe step appears below). Subsequently, five trained human annotators review every candidate tuple for correctness, eliminate hallucinated entries, and complement any missing tuples. To support consistent decision-making, we design detailed task-specific annotation guidelines and conduct a one-week training session for all annotators, including case-based instruction and edge-case discussions. This process was applied to every content instance, thereby ensuring high coverage, consistency, and reliability in the final annotations.
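A minimal sketch of the union-and-dedupe step referenced above, with the GPT-4o-mini call stubbed out (the prompt, normalization rule, and helper names are assumptions, not the authors' code):

```python
from typing import Callable

Tuple3 = tuple[str, str, str]  # (entity, feature, opinion)

def candidate_tuples(content: str,
                     propose_tuples: Callable[[str], list[Tuple3]],
                     rounds: int = 5) -> set[Tuple3]:
    """Union tuple proposals over several zero-shot rounds, then dedupe.

    `propose_tuples` stands in for one zero-shot GPT-4o-mini call; hypothetical.
    """
    candidates: set[Tuple3] = set()
    for _ in range(rounds):
        for e, f, o in propose_tuples(content):
            # Light normalization so trivially identical tuples collapse.
            candidates.add((e.strip().lower(), f.strip().lower(), o.strip().lower()))
    # Human annotators then verify, prune hallucinations, and add missing tuples.
    return candidates
```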
Opinion-centric summary We annotate each content with an opinion-centric summary, a free-form text that organizes diverse opinions into high-level topics for meaningful insights. Specifically, from a marketing manager's perspective, opinions are grouped into broad categories, highlighting frequently mentioned or standout aspects to reveal key trends. This summary follows a three-to-five-line format, providing a cohesive structure for a clear and concise representation of the core discussions.

Summary annotation We generate five independent candidate summaries using the input content and the associated final set of tuples
as input. Then, the same five human annotators review each summary from the perspective of a marketing manager and select the highest-quality one that best captures the opinions in the final tuples at the topic level. Similar to the tuple annotation process, we design detailed annotation guidelines to ensure consistent decision-making, and all five annotators undergo a one-week training session. If all candidate summaries are deemed insufficient in quality, the annotators collaboratively rewrite a new summary that more accurately reflects the key insights. For the selected summary, the annotators collaboratively refine and finalize it by checking for missing opinions, eliminating hallucinations, and ensuring conciseness in a three-to-five-line format.

2.3 Statistics and Analysis

As shown in Table 1, unlike previous benchmarks, OOMB features substantially longer average token lengths and a significantly higher number of tuples, making it considerably more challenging. Additionally, it covers a broader and more diverse range of content types while supporting two tasks: extraction and abstraction. This dual-task setup enables the evaluation of LLMs in more realistic settings by reflecting the complexity and variability of real-world opinion expressions. Detailed benchmark statistics are presented in Appendix A.3.

3 Experiments

3.1 Feature-centric opinion extraction (FOE)

Problem formulation Similar to existing opinion mining approaches (Fan et al., 2019; Xia et al., 2021), this task aims to enable LLMs to accurately identify and extract a structured set of opinion tuples from the given input content. Formally, given a content $c$, our goal is to identify and extract a set of tuples $T = \{(e_i, f_i, o_i)\}_{i=1}^{N}$, where $e_i$ represents the entity, $f_i$ the feature, and $o_i$ the opinion.

Evaluation protocol To evaluate the structured opinion extraction capabilities of LLMs, we utilize three types of tuple matching evaluation methods. (1) Exact Match (EM): Consistent with existing opinion-related extraction tasks (Zhang et al., 2022a; Xia et al., 2021), a predicted tuple is considered correct only if all its elements exactly match the corresponding elements in the gold tuple. (2) Relaxed Match (RM): To provide a more flexible evaluation beyond strict exact matching, we evaluate the similarity of each tuple component using both lexical and semantic matching. A tuple is considered a relaxed match if the similarity score of all its components exceeds a predefined threshold of 0.7, formally defined as:

$$\mathrm{RM}(t_p, t_g) = \begin{cases} 1, & \text{if } \forall x \in \{e, f, o\},\ \mathrm{Sim}(x_p, x_g) \geq 0.7 \\ 0, & \text{otherwise} \end{cases}$$

where $t_p = (e_p, f_p, o_p)$ and $t_g = (e_g, f_g, o_g)$ are the predicted and gold tuples, respectively. Drawing on recent works (Han et al., 2023a; Li et al., 2024), we utilize Python's difflib library³ to compute token-level overlap scores for lexical similarity (L-RM), while employing a Sentence Transformer⁴ for semantic similarity (S-RM).

³ https://docs.python.org/3/library/difflib.html
⁴ We use all-MiniLM-L6-v2, a lightweight model optimized for efficient sentence similarity computation.
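As an illustration, the RM check can be implemented with standard tooling. A minimal sketch, assuming difflib's SequenceMatcher as the lexical Sim and all-MiniLM-L6-v2 cosine similarity as the semantic Sim (not the authors' exact scoring code):

```python
from difflib import SequenceMatcher
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def lexical_sim(a: str, b: str) -> float:
    # difflib's ratio approximates the token-level overlap used for L-RM.
    return SequenceMatcher(None, a, b).ratio()

def semantic_sim(a: str, b: str) -> float:
    emb = model.encode([a, b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def relaxed_match(pred, gold, sim, threshold: float = 0.7) -> bool:
    # RM(t_p, t_g) = 1 iff every component (entity, feature, opinion)
    # clears the similarity threshold.
    return all(sim(p, g) >= threshold for p, g in zip(pred, gold))

# Usage: the same tuple pair under L-RM and S-RM.
pred = ("2021 honda ridgeline", "interior materials", "perfectly adequate")
gold = ("2021 honda ridgeline", "interior", "adequate for the price")
print(relaxed_match(pred, gold, lexical_sim))   # L-RM decision
print(relaxed_match(pred, gold, semantic_sim))  # S-RM decision
```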
(3) Contextual Match (CM): Inspired by (Fu et al., 2023; Fane et al., 2025), we design a method that leverages the reasoning capabilities of LLMs to match tuples in a manner similar to human judgment. Specifically, we utilize GPT-4o to evaluate both predicted and gold tuples, enabling the model to count how many tuples match. This metric allows recognition of semantically equivalent tuples even when surface forms differ significantly, using the prompt shown in Table 17. Note that for both RM and CM, we measure recall by counting each gold tuple at most once to avoid double counting, even if multiple predicted tuples match the same gold tuple. For all evaluation metrics, we primarily use the F1 score while also reporting precision and recall.

| Models | EM Pre | EM Rec | EM F1 | L-RM Pre | L-RM Rec | L-RM F1 | S-RM Pre | S-RM Rec | S-RM F1 | CM Pre | CM Rec | CM F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Proprietary LLMs | | | | | | | | | | | | |
| GPT-4o-mini | 3.91 | 1.99 | 2.62 | 11.29 | 5.70 | 7.52 | 15.50 | 7.76 | 10.27 | 65.07 | 36.61 | 43.19 |
| GPT-4o | 7.27 | 5.18 | 6.02 | 15.86 | 11.39 | 13.20 | 21.23 | 15.31 | 17.71 | 59.34 | 45.88 | 48.28 |
| Claude-3.5-Haiku | 6.13 | 3.02 | 4.01 | 15.02 | 7.51 | 9.94 | 20.60 | 10.36 | 13.68 | 63.70 | 37.25 | 43.62 |
| Claude-3.5-Sonnet | 11.12 | 6.32 | 7.97 | 22.97 | 13.02 | 16.46 | 29.30 | 16.52 | 20.93 | 62.00 | 39.83 | 44.90 |
| Open-source LLMs | | | | | | | | | | | | |
| Llama3-8B-Instruct | 8.49 | 6.28 | 7.17 | 16.75 | 12.18 | 14.02 | 21.33 | 15.43 | 17.80 | 51.92 | 42.51 | 43.18 |
| Llama3-70B-Instruct | 7.26 | 5.57 | 6.27 | 15.21 | 11.66 | 13.13 | 19.94 | 15.18 | 17.15 | 53.15 | 42.55 | 43.67 |
| Gemma2-9B-it | 6.37 | 4.51 | 5.25 | 14.17 | 10.17 | 11.78 | 17.73 | 12.59 | 14.64 | 53.71 | 41.93 | 43.61 |
| Gemma2-27B-it | 7.05 | 5.61 | 6.20 | 14.33 | 11.77 | 12.82 | 19.29 | 15.83 | 17.25 | 52.58 | 42.98 | 43.42 |
| Qwen2.5-7B-Instruct | 6.55 | 4.18 | 5.05 | 12.97 | 8.32 | 10.02 | 18.22 | 11.83 | 14.18 | 52.48 | 39.15 | 41.14 |
| DeepSeek-7B-chat | 3.00 | 1.63 | 2.07 | 5.86 | 3.13 | 4.02 | 8.20 | 4.34 | 5.61 | 49.25 | 30.33 | 33.12 |

Table 2: Performance comparison of various LLMs for the FOE task across diverse tuple matching metrics (Pre: precision, Rec: recall).

3.2 Opinion-centric insight generation (OIG)

Problem formulation This task aims to analyze whether LLMs can group scattered opinions from user-generated online content into high-level topics, providing context-aware and meaningful insights. Formally, given content $c$, our objective is to generate a free-form text summary $S$ that cohesively encapsulates user opinions into high-level topics.

Evaluation protocol To broadly assess the quality of opinion insight summaries generated by the model across various aspects, we employ both lexical and semantic automated evaluation metrics. For lexical evaluation, we adopt ROUGE-1, 2, and L (Lin, 2004), which measure word overlap between the reference and generated summaries. For semantic evaluation, we leverage BERTScore (BS) (Zhang et al., 2019) and A3CU (Liu et al., 2023b). BS computes similarity between the reference and generated texts using contextual embeddings, while A3CU compares texts without extracting atomic content units, providing a human-aligned assessment of content similarity. For both ROUGE and A3CU, we report F1 scores. Moreover, to ensure a systematic and comprehensive evaluation, we also conduct reference-free assessments using an LLM as the judge. Inspired by (Siledar et al., 2024), we design the following six well-defined criteria: Faithfulness, Coverage, Specificity, Insightfulness, Intent, and Fluency. This analysis extends beyond automated lexical and semantic metrics, providing a broader perspective on the abstractive opinion mining capabilities of LLMs. A detailed description is provided in Appendix B.3.
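For illustration, the ROUGE and BERTScore parts of this protocol can be computed with the rouge-score and bert-score packages. A minimal sketch with made-up example strings (A3CU and the LLM-judge step are omitted):

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Hypothetical reference/generated pair for demonstration only.
reference = "Users praise the XC90's safety features but question its maintenance costs."
generated = "Opinions center on the XC90's excellent safety and its high upkeep costs."

# ROUGE-1/2/L F1 between reference and generated summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, generated)
print({name: round(s.fmeasure, 3) for name, s in rouge.items()})

# BERTScore returns precision/recall/F1 tensors over the batch.
P, R, F1 = bert_score([generated], [reference], lang="en")
print(round(F1.item(), 3))
```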
3.3 Experimental setup

Models We conduct extensive experiments on two types of LLMs: (1) proprietary LLMs that are available via APIs, such as GPT-4o-mini, GPT-4o (OpenAI et al., 2024), and Claude 3.5 Haiku and Sonnet (Anthropic, 2024); and (2) open-source LLMs such as Llama-3-Instruct (8B, 70B; Grattafiori et al., 2024), Gemma 2-it (9B, 27B; Team et al., 2024b), Qwen2.5-7B-Instruct (Yang et al., 2024), and DeepSeek-7B-chat (Bi et al., 2024).

| Metric | Pearson r | Spearman ρ | Kendall τ |
|---|---|---|---|
| EM | 0.4505 | 0.4722 | 0.4215 |
| L-RM | 0.4584 | 0.4754 | 0.4244 |
| S-RM | 0.5514 | 0.5531 | 0.4937 |
| CM | 0.8337 | 0.8155 | 0.7279 |

Table 3: Correlation coefficients between each metric and human judgment (p-value < 0.05), based on pairwise comparisons by five human evaluators. Detailed experimental settings are provided in Appendix B.2.

Implementation details Following recent studies demonstrating the reasoning capabilities of LLMs in zero-shot settings (Wang et al., 2024; Qin et al.; Liu et al., 2024; Chhabra et al., 2024), we perform both tasks using zero-shot prompting. This means the models rely solely on their pretrained knowledge without any task-specific fine-tuning. To ensure consistent and reliable performance across all experiments, we set the temperature to 0 for all generations. Our detailed experimental setup is presented in Appendix B.

4 Results and Discussion

In this section, we present the main findings of our study. Each subsection addresses the research question—Can LLMs serve as effective online opinion miners?—from various perspectives, supported by detailed experimental results and analyses.

4.1 RQ1: What makes it challenging for LLMs to extract structured opinions?

Performance on tuple extraction As shown in Table 2, LLMs consistently struggle to extract structured opinions. Specifically, even the best-performing model fails to achieve an F1 of 30 on both the rigid EM metric and the more relaxed RM metric, demonstrating significantly low performance. In contrast, employing CM leads to a notable and consistent improvement in both tuple matching accuracy and overall recall. This metric effectively leverages LLMs' reasoning capabilities to mirror human judgment and has been shown to align most closely with human evaluations (see Table 3). Nevertheless, even with CM, most models fail to both accurately predict the correct tuples and comprehensively cover all tuples present in the input content, revealing inherent limitations in LLMs' extraction capabilities. This highlights that structured opinion extraction remains a highly complex and challenging task for LLMs, particularly in the context of realistic online content.

[Figure 3: Tuple extraction performance based on multiple inferences. CM precision and CM recall as the number of inferences per input grows from 1 to 25, for GPT-4o-mini, GPT-4o, Claude-3.5-Haiku, Claude-3.5-Sonnet, Llama-3-70B-Instruct, and Gemma-2-27B-it.]

Effect of multiple inference on tuple extraction performance To investigate how extensively an LLM can extract structured tuples from content, we perform multiple inference iterations per single input and measure the model's extraction performance. For evaluation, we take the union of all tuples generated across iterations, remove duplicates, and consider only the unique (entity, feature, opinion) tuple sets. To capture a broader range of tuple sets, we set the temperature to 1.0 during inference. As shown in Figure 3, most models generate a significantly larger number of predicted tuples as the number of inference iterations increases, but the number of correctly matched tuples does not keep pace. Notably, recall improves significantly across most models but eventually reaches a plateau, where the rate of increase diminishes. This implies that LLMs recognize a fixed set of opinions within the content, making it challenging to cover every opinion merely by increasing the number of inference iterations. Therefore, improving the extraction capabilities of LLMs requires exploring alternative strategies beyond merely repeating the inference process.
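The recall measurement used here (union the deduplicated tuples across iterations, then count each gold tuple at most once) can be sketched as a greedy one-to-one matching; the exact matching procedure is not specified in the text, so the following is an assumption:

```python
def recall_at_most_once(predicted, gold, is_match) -> float:
    """Each gold tuple may be claimed by at most one predicted tuple,
    so duplicate predictions cannot inflate recall."""
    claimed = set()
    for p in predicted:
        for i, g in enumerate(gold):
            if i not in claimed and is_match(p, g):
                claimed.add(i)
                break
    return len(claimed) / len(gold) if gold else 0.0

# `is_match` can be any tuple-level criterion, e.g. the relaxed_match
# sketch from Section 3.1 with a chosen similarity function and threshold.
```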
Case study: LLMs' extraction capability We conduct a case study to identify key failure patterns that limit LLMs' ability to extract structured opinions. Table 4 illustrates the comparison between the gold tuples and GPT-4o's predicted tuples for actual input content across EM, L-RM, S-RM, and CM.

Content: "Interior materials remain perfectly adequate for the price of the truck and Honda's reputation."
Gold: ('2021 honda ridgeline', 'interior', 'adequate for the price')
Predicted: ('2021 honda ridgeline', 'interior materials', 'perfectly adequate')
EM: ✗  L-RM: ✗  S-RM: ✓  CM: ✗

Content: "It's not a tower of power by any stretch but gets the job done, even with a payload of swingset."
Gold: ('2021 honda ridgeline', 'power', 'gets the job done')
Predicted: ('2021 honda ridgeline', 'engine', 'gets the job done')
EM: ✗  L-RM: ✗  S-RM: ✗  CM: ✓

Content: "The major update to the Ridgeline for the 2021 model year isn't in its powertrain (remains the same), interior (reclaims a physical volume knob)"
Gold: ('2021 honda ridgeline', 'interior', 'reclaims a physical volume knob')
Predicted: ('2021 honda ridgeline', 'volume knob', 'excellent')
EM: ✗  L-RM: ✗  S-RM: ✗  CM: ✗

Table 4: Examples of comparisons between gold and predicted tuples for structured opinion extraction.

Despite being explicitly instructed in the input prompt to extract spans as-is, the LLM often produces semantically related but non-identical spans—"interior materials" instead of "interior"—substitutes related concepts such as "engine" for "power", and even hallucinates opinions like "excellent" in place of "reclaims a physical volume knob". These patterns indicate that LLMs tend to transform or reinterpret textual information rather than extracting it verbatim as structured tuples. Such behavior underscores a fundamental limitation of LLMs in this task and suggests that structured extraction may not be an effective approach for opinion mining with LLMs.

4.2 RQ2: How insightfully can LLMs generate abstractive opinion summaries?

Automated evaluation results Table 5 reports the performance for the OIG task, using both lexical and semantic evaluation metrics.

| Models | R-1 | R-2 | R-L | BS | A3CU |
|---|---|---|---|---|---|
| Proprietary LLMs | | | | | |
| GPT-4o-mini | 39.30 | 14.05 | 34.58 | 90.35 | 38.49 |
| GPT-4o | 39.36 | 14.77 | 34.85 | 89.86 | 38.39 |
| Claude-3.5-Haiku | 33.47 | 10.06 | 29.00 | 88.50 | 31.91 |
| Claude-3.5-Sonnet | 33.60 | 9.47 | 29.53 | 88.79 | 31.67 |
| Open-source LLMs | | | | | |
| Llama3-8B-Instruct | 37.48 | 13.15 | 33.43 | 89.91 | 30.50 |
| Llama3-70B-Instruct | 37.61 | 13.04 | 33.18 | 90.15 | 31.48 |
| Gemma2-9B-it | 35.03 | 11.47 | 30.99 | 88.25 | 31.16 |
| Gemma2-27B-it | 35.40 | 11.69 | 31.02 | 90.08 | 34.09 |
| Qwen2.5-7B-Instruct | 33.84 | 10.87 | 27.94 | 89.56 | 25.34 |
| DeepSeek-7B-chat | 35.03 | 10.68 | 30.72 | 76.89 | 25.80 |

Table 5: Performance comparison of various LLMs for the OIG task across automated evaluation metrics (R: ROUGE, BS: BERTScore).

While models show strong word-level overlap (R-1 and R-L) in their summaries, they exhibit significantly lower bigram recall (R-2), highlighting difficulty in sustaining coherent phrase
structures. Additionally, although they achieve relatively high BS, their performance on A3CU remains substantially lower, suggesting that LLMs often capture surface-level semantic similarity but struggle to reflect deeper, human-aligned content understanding. Thus, to thoroughly gauge LLMs' abstractive strengths—particularly their capture of intent, subtle sentiment shifts, and deeper insights beyond surface semantics—a multifaceted evaluation framework is needed.

LLM-Judge evaluation across multiple perspectives To comprehensively analyze how well models generate abstractive opinion summaries, we conduct a reference-free evaluation using an LLM as the judge.

[Figure 4: Radar charts for LLM-Judge evaluations of the OIG task over the six criteria (Faithfulness, Coverage, Specificity, Insightfulness, Intent, Fluency). Comparison of the average model performance across different content types (left); comparison of performance across different models (right).]

From the results in Figure 4, we derive the following key conclusions: (1) LLMs consistently provide natural and readable summaries while preserving the original content without distortion or unnecessary modification. This demonstrates their strength in faithfulness and fluency, ensuring that the generated summaries remain accurate and coherent. (2) However, LLMs struggle to capture implicit user intentions, nuanced expressions, and meaningful insights that are not explicitly stated in the input content. This limitation is reflected in lower scores for insightfulness and intent, indicating that while LLMs can summarize well, they lack deeper abstraction and contextual understanding.

Impact of structured opinions on summary Figure 6 demonstrates that augmenting opinion tuples during opinion-centric summary generation not only leads to substantial improvements in automatic evaluation metrics, but also provides practical benefits from a user perspective. In particular, the notable gains in coverage and insightfulness suggest that the model becomes more effective at capturing key opinions and delivering more informative summaries. Conversely, slight decreases in intent and fluency indicate that the added structure may sometimes interfere with natural expression and tone preservation. These results suggest that integrating structured opinion tuples into the insight generation pipeline is a key strategy for effective opinion mining, while also highlighting the need for continued research into both the extractive and abstractive capabilities of LLMs.

4.3 RQ3: Do LLMs effectively adapt to diverse online text environments?

To assess the adaptability of LLMs to the highly varied nature of online content, we analyze their performance across several dimensions. Figure 5 presents a comparative analysis of how different LLMs perform when these attributes vary.

LLMs struggle with dense and lengthy content Scenarios involving long content or a large number of tuples inherently present verbose and opinion-rich online content. The more densely packed the information within these texts, the more LLMs noticeably struggle to extract opinions and derive insights. This observation highlights the considerable challenges LLMs face in opinion mining when dealing with highly condensed and information-dense content. Therefore, it is crucial to explore more adaptive and effective mining approaches tailored to such complex scenarios (e.g., long-form user-generated texts, multi-thread discussions).
LLMs are robust in complex entity and multi-user environments In contrast, in environments where multiple users participate or the entity complexity increases, both extraction and opinion insight generation show a relatively weaker downward trend. These results indicate that in settings such as forum discussions or multi-user comment threads, LLMs do not face significant challenges in extracting information and synthesizing opinions. This finding implies that forums and communities tend to consist of simple comments or relatively easily recognizable subtopics, allowing LLMs to identify and summarize key opinions with ease.

[Figure 5: Performance comparison of various LLMs based on changes in different attributes within online content (number of tokens, tuples, entities, and users): CM F1 scores for the FOE task (upper) and A3CU scores for the OIG task (lower).]

[Figure 6: Comparison of OIG task performance between content-only input and input with gold tuples: automated metric performance by model (left) and human evaluation of GPT-4o outputs (right). Detailed human evaluation is presented in Appendix B.3.]

5 Related Work

Early studies on opinion mining (Pang et al., 2008) primarily focused on identifying and classifying opinion-related expressions or spans within text (Yang and Cardie, 2013; İrsoy and Cardie, 2014; Katiyar and Cardie, 2016; Xia et al., 2021; Liu et al., 2021; Zhang et al., 2022b). In particular, extracting opinions about specific aspects of products and services received significant attention (Fan et al., 2019; Wu et al., 2020; Zhao et al., 2020; Chen et al., 2020), and subsequent work extended this to jointly predict sentiment, enabling more complex and insightful analyses (Peng et al., 2019; Cai et al., 2021; Zhang et al., 2021; Li et al., 2022; Kim et al., 2024a; Seo et al., 2024; Bai et al., 2024).

Recently, large language models (LLMs) (OpenAI et al., 2024; Grattafiori et al., 2024; Team et al., 2024a) have demonstrated remarkable zero-shot and in-context learning capabilities across a range of tasks, including information extraction (Kim et al., 2024b; Perot et al., 2024; Liu et al., 2024) and abstractive summarization (Chhabra et al., 2024; Tang et al., 2024; Siledar et al., 2024). While these advances suggest that LLMs have great potential in opinion mining, existing benchmarks fall short of capturing the complexity of real-world inputs and remain focused on simplified, structured extraction settings. As a result, they fail to fully assess the true potential of LLMs in this domain. To bridge this gap, we introduce the OOMB
benchmark, which encompasses a wide spectrum of realistic online content and enables comprehensive investigation of both the extractive and abstractive capacities of LLMs.

6 Conclusion

In this paper, we introduce OOMB, a novel benchmark designed to assess LLMs' capabilities in both structured opinion extraction and insight-oriented opinion generation across diverse and realistic online content scenarios. To the best of our knowledge, OOMB is the first comprehensive benchmark for evaluating LLMs in both structured and abstractive opinion mining tasks under real-world conditions. Our research reveals the dual challenge of precise opinion extraction and contextual insight generation, highlighting the need for future research to improve the effectiveness of both approaches. This work lays the foundation for LLM-based opinion mining and serves as a stepping stone for future research in this field.

Limitations

Despite its contributions, this study has several limitations, each of which also suggests promising directions for future research and practical extensions. First, although OOMB includes a diverse range of user-generated online content, it is currently confined to the vehicle domain, which may limit its generalizability to other areas such as electronics or healthcare. However, since the benchmark construction pipeline—including data collection, tuple annotation, and summary generation—is designed to be domain-agnostic, it can be easily extended to other fields with only minor adjustments to data sourcing and annotation guidelines.

Second, the current benchmark does not take into account user-specific information (user profiles). In real-world applications, factors such as user expertise, preferences, usage context, and prior sentiment trends play a critical role in shaping actionable insights. Integrating user metadata or interaction history would enable a natural extension of the framework toward user-aware opinion mining. While this direction is beyond the current scope, enriching OOMB with such annotations and modeling could open up new avenues for personalized opinion mining, allowing LLMs to produce more tailored and context-sensitive outputs.

Third, although we adopted a human-machine collaborative annotation pipeline (Sharif et al., 2024; Seo et al., 2025) to construct high-quality labels, opinion extraction and summarization inherently involve subjective judgment. To mitigate this, we established detailed annotation guidelines and a multi-stage validation process; nevertheless, some degree of annotation variance is unavoidable. Future work may explore more systematic approaches to subjectivity, such as crowdsourced consensus annotation, uncertainty-aware learning frameworks, or prompt ensemble methods.

Ethical Statement

This study strictly adhered to ethical guidelines throughout the process of data collection and usage. Data crawling was conducted solely for non-commercial research purposes and performed at a controlled rate to avoid overloading servers or generating DDoS-like traffic. When collecting user reviews, personal information such as reviewer IDs, names, and locations was intentionally excluded, focusing only on text and dates to ensure user privacy. However, we cannot entirely rule out the possibility that the review text may contain personal details, hate speech, or inappropriate content. All data samples were collected and annotated in compliance with the terms and conditions of their respective sources.
By making our dataset and models accessible,
we aim to foster academic progress in online opinion mining research.

References

Anthropic. 2024. Claude 3.5 haiku and sonnet.

Yinhao Bai, Zhixin Han, Yuhua Zhao, Hang Gao, Zhuowei Zhang, Xunzhi Wang, and Mengting Hu. 2024. Is compound aspect-based sentiment analysis addressed by LLMs? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7836–7861, Miami, Florida, USA. Association for Computational Linguistics.

Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, et al. 2024. DeepSeek LLM: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954.

Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspect-category-opinion-sentiment quadruple extraction with implicit aspects and opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 340–350.

Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, and Ziming Chi. 2020. Synchronous double-channel recurrent network for aspect-opinion pair extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6515–6524, Online. Association for Computational Linguistics.

Tao Chen, Premaratne Samaranayake, XiongYing Cen, Meng Qi, and Yi-Chen Lan. 2022. The impact of online reviews on consumers' purchasing decisions: Evidence from an eye-tracking study. Frontiers in Psychology, 13:865702.

Anshuman Chhabra, Hadi Askari, and Prasant Mohapatra. 2024. Revisiting zero-shot abstractive summarization in the era of large language models from the perspective of position bias. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 1–11, Mexico City, Mexico. Association for Computational Linguistics.

Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518, Minneapolis, Minnesota. Association for Computational Linguistics.

Enfa Fane, Md Nayem Uddin, Oghenevovwe Ikumariegbe, Daniyal Kashif, Eduardo Blanco, and Steven Corman. 2025. BEMEAE: Moving beyond exact span match for event argument extraction. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5734–5749, Albuquerque, New Mexico. Association for Computational Linguistics.

Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as you desire. Preprint, arXiv:2302.04166.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, et al. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.

Ridong Han, Tao Peng, Chaohao Yang, Benyou Wang, Lu Liu, and Xiang Wan. 2023a. Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors. arXiv preprint arXiv:2305.14450.

Yi Han, Gaurav Nanda, and Mohsen Moghaddam. 2023b. Attribute-sentiment-guided summarization of user opinions from online reviews. Journal of Mechanical Design, 145(4):041402.

Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, and Weizhu Chen. 2024. AnnoLLM: Making large language models to be better crowdsourced annotators. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track), pages 165–190, Mexico City, Mexico. Association for Computational Linguistics.

Ozan İrsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720–728, Doha, Qatar. Association for Computational Linguistics.

Arzoo Katiyar and Claire Cardie. 2016. Investigating LSTMs for joint extraction of opinion entities and relations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 919–929, Berlin, Germany. Association for Computational Linguistics.

Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, and Dongha Lee. 2024a. Self-consistent reasoning-based aspect-sentiment quad prediction with extract-then-assign strategy. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7295–7303, Bangkok, Thailand. Association for Computational Linguistics.

Seoyeon Kim, Kwangwook Seo, Hyungjoo Chae, Jinyoung Yeo, and Dongha Lee. 2024b. VerifiNER: Verification-augmented NER via knowledge-grounded reasoning with large language models. Preprint, arXiv:2402.18374.

Bobo Li, Hao Fei, Fei Li, Yuhan Wu, Jinsong Zhang, Shengqiong Wu, Jingye Li, Yijiang Liu, Lizi Liao, Tat-Seng Chua, et al. 2022. DiaASQ: A benchmark of conversational aspect-based sentiment quadruple analysis. arXiv preprint arXiv:2211.05705.

Yinghao Li, Rampi Ramprasad, and Chao Zhang. 2024. A simple but effective approach to improve structured language model output for information extraction. arXiv preprint arXiv:2402.13364.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Siyi Liu, Yang Li, Jiang Li, Shan Yang, and Yunshi Lan. 2024. Unleashing the power of large language models in zero-shot relation extraction via self-prompting. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13147–13161, Miami, Florida, USA. Association for Computational Linguistics.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023a. G-Eval: NLG evaluation using GPT-4 with better human alignment. Preprint, arXiv:2303.16634.

Yixin Liu, Alexander Fabbri, Yilun Zhao, Pengfei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2023b. Towards interpretable and efficient automatic reference-based summarization evaluation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16360–16368, Singapore. Association for Computational Linguistics.

Ziheng Liu, Rui Xia, and Jianfei Yu. 2021. Comparative opinion quintuple extraction from product reviews. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3955–3965.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O'Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea
Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval, 2(1–2):1–135.

Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2019. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. In AAAI Conference on Artificial Intelligence.

Vincent Perot, Kai Kang, Florian Luisier, Guolong Su, Xiaoyu Sun, Ramya Sree Boppana, Zilong Wang, Zifeng Wang, Jiaqi Mu, Hao Zhang, Chen-Yu Lee, and Nan Hua. 2024. LMDX: Language model-based document information extraction and localization. In Findings of the Association for Computational Linguistics: ACL 2024, pages 15140–15168, Bangkok, Thailand. Association for Computational Linguistics.

Yanzhao Qin, Tao Zhang, Yanjun Shen, Wenjing Luo, Yan Zhang, Yujing Qiao, Zenan Zhou, Wentao Zhang, Bin Cui, et al. SysBench: Can LLMs follow system messages? In The Thirteenth International Conference on Learning Representations.

Agus Rahayu, Dian Herdiana Utama, and Ririe Novianty. 2021. The impact of online customer reviews on purchase intention in online marketplace. In 5th Global Conference on Business, Management and Entrepreneurship (GCBME 2020), pages 471–477. Atlantis Press.

Susana Santos and Helena Martins Gonçalves. 2021. The consumer decision journey: A literature review of the foundational models and theories and a future perspective. Technological Forecasting and Social Change, 173:121117.

Kwangwook Seo, Donguk Kwon, and Dongha Lee. 2025. MT-RAIG: Novel benchmark and evaluation framework for retrieval-augmented insight generation over multiple tables. arXiv preprint arXiv:2502.11735.

Yongsik Seo, Sungwon Song, Ryang Heo, Jieyong Kim, and Dongha Lee. 2024. Make compound sentences simple to analyze: Learning to split sentences for aspect-based sentiment analysis. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 11171–11184, Miami, Florida, USA. Association for Computational Linguistics.

Omar Sharif, Joseph Gatto, Madhusudan Basak, and Sarah Masud Preum. 2024. Explicit, implicit, and scattered: Revisiting event extraction to capture complex arguments. In Proceedings of the 2024 Conference on Empirical Methods
in Natural Language Processing, pages 12061–12081, Miami, Florida, USA. Association for Computational Linguistics.

Tejpalsingh Siledar, Swaroop Nath, Sankara Muddu, Rupasai Rangaraju, Swaprava Nath, Pushpak Bhattacharyya, Suman Banerjee, Amey Patil, Sudhanshu Singh, Muthusamy Chelliah, and Nikesh Garera. 2024. One prompt to rule them all: LLMs for opinion summary evaluation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12119–12134, Bangkok, Thailand. Association for Computational Linguistics.

Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, and Huan Liu. 2024. Large language models for data annotation and synthesis: A survey. Preprint, arXiv:2402.13446.

An Tang, Xiuzhen Zhang, Minh Dinh, and Erik Cambria. 2024. Prompted aspect key point analysis for quantitative review summarization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10691–10708, Bangkok, Thailand. Association for Computational Linguistics.

Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Pier Giuseppe Sessa, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, and Kathleen Kenealy. 2024a. Gemma: Open models based on Gemini research and technology. Preprint, arXiv:2403.08295.
Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher
A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijaykumar, Dominika Rogozińska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucińska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mohamed, Kartikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, Lilly McNealus, Livio Baldini Soares, Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Martin Görner, Mat Velloso, Mateo Wirth, Matt Davidow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kuppala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R. Arnold, Sebastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jordan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D. Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. 2024b. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.

Jiayin Wang, Fengran Mo, Weizhi Ma, Peijie Sun, Min Zhang, and Jian-Yun Nie. 2024. A user-centric multi-intent benchmark for evaluating large language models. arXiv preprint arXiv:2404.13940.

Meixi Wu, Wenya Wang, and Sinno Jialin Pan. 2020. Deep weighted MaxSAT for aspect-based opinion extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5618–5628, Online. Association for Computational Linguistics.

Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, and Min Zhang. 2021. A unified span-based approach for opinion mining with syntactic constituents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1795–1804.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu,
Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671.

Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640–1649, Sofia, Bulgaria. Association for Computational Linguistics.

Xiaojun Yuan, Ning Sa, Grace Begany, and Huahai Yang. 2015. What users prefer and why: A user study on effective presentation styles of opinion summarization. In Human-Computer Interaction–INTERACT 2015: 15th IFIP TC 13 International Conference, Bamberg, Germany, September 14-18, 2015, Proceedings, Part II 15, pages 249–264. Springer.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. ArXiv, abs/1904.09675.

Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021. Aspect sentiment quad prediction as paraphrase generation. arXiv preprint arXiv:2110.00796.

Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2022a. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Transactions on Knowledge and Data Engineering, 35(11):11019–11038.

Xin Zhang, Guangwei Xu, Yueheng Sun, Meishan Zhang, Xiaobin Wang, and Min Zhang. 2022b. Identifying Chinese opinion expressions with extremely-noisy crowdsourcing annotations. Preprint, arXiv:2204.10714.

He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3239–3248, Online. Association for Computational Linguistics.

A Benchmark Construction Details

A.1 Data Source

To construct a diverse and representative dataset for online opinion mining, we collected user-generated content from four distinct web content types: Blog, Reddit, Review Site, and YouTube.

Blog We collected data from well-established automotive blogs, including The Drive (https://www.thedrive.com/category/car-reviews), Autoblog (https://www.autoblog.com/reviews/), and CarExpert (https://www.carexpert.com.au/car-reviews). Blog content is primarily written by experts and car owners, often providing detailed and comprehensive insights on a single entity. Compared to other content types, blog posts tend to be longer and more structured, covering multiple aspects of a vehicle in depth.

Reddit We collected data from the r/cars subreddit (https://www.reddit.com/r/cars/), a community of car enthusiasts. Users freely share their opinions about various vehicles through a multi-threaded structure, where multiple participants engage in open discussions. This interactive nature generates diverse automotive perspectives through community discussions, making it a valuable source for opinion mining.

Review Site We collected data from Edmunds (https://www.edmunds.com/car-reviews/), an automotive review platform where users provide star ratings along with detailed reviews for specific vehicles. Review sites explicitly encourage opinion sharing, leading to more direct and detailed user feedback. These structured reviews combine ratings
with detailed feedback, making them rich in straightforward user opinions.

YouTube We gathered comments from automotive YouTube channels (e.g., https://www.youtube.com/@AutoTraderTV and https://www.youtube.com/channel/UCsqjHFMB_JYTaEnf_vmTNqg) listed by Feedspot (https://videos.feedspot.com/car_youtube_channels/), focusing on channels with large subscriber bases providing car reviews and analysis. When YouTubers share their vehicle reviews, viewer opinions and reactions appear in the comments. YouTube uses a single-threaded structure where viewers can leave comments and engage in discussions through replies. This structure allows for community participation through viewer responses to both the video content and other comments, creating an interactive space for opinion sharing.

A.2 Human-in-the-loop Annotation Details

To ensure high-quality, consistent annotations, we adopted a human-in-the-loop process in which GPT-4o-mini (gpt-4o-mini-2024-07-18) serves as the initial annotator and human annotators (undergraduate and graduate students proficient in English and knowledgeable in the automotive domain) subsequently verify and refine its outputs. Prompts used to solicit these initial annotations are provided in Table 24 for tuples and Table 26 for summaries. All annotators underwent one full week of training on our detailed guidelines (Table 25 for tuples; Table 27 for summaries) before beginning any annotation work. This entire annotation–refinement process was applied to every sample in the dataset, ensuring higher overall quality and consistency in the resulting annotations.

Tuple verification and refinement were performed via our annotation UI (see Figures 8 and 9), which displays the original content, each tuple's components with existence flags, and highlighted evidence sentences. Summary verification and refinement were conducted using a separate annotation UI (see Figure 10), which presents the content text, the associated gold tuple, and the working draft of the summary side by side for comparison and iterative improvement.

A.3 Dataset Statistics

We provide detailed statistics on key attributes—namely, the number of samples, average token count, number of users, and number of tuples (i.e., opinions)—for each of the four content types collected: blogs, Reddit, review sites, and YouTube. As shown in Table 6, we categorize the values of each attribute into predefined ranges to illustrate the distribution of samples across different levels. Token lengths were measured using the NLTK word_tokenize function (https://www.nltk.org/api/nltk.tokenize.word_tokenize.html).
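As a concrete illustration of this measurement, the sketch below counts tokens with NLTK's word_tokenize and buckets them into the token ranges of Table 6. The helper functions and sample review are our own illustrative assumptions, not benchmark code, and the exact punkt resource name may vary across NLTK versions.

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # tokenizer models required by word_tokenize

def token_count(text: str) -> int:
    """Count tokens the same way the dataset statistics do: NLTK word_tokenize."""
    return len(word_tokenize(text))

def length_bucket(n_tokens: int) -> str:
    """Map a token count onto the ranges used in Table 6 (<=200, <=400, ...)."""
    for upper in (200, 400, 1000, 2000, 3000):
        if n_tokens <= upper:
            return f"<={upper}"
    return ">3000"

review = "The ride comfort of the XC90 is excellent, but the infotainment lags."
n = token_count(review)
print(n, length_bucket(n))
```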
Opinion keywords Opinion keywords also ex- hibit meaningful clustering patterns. Price & Value andSpecs & Performance form tight clusters, while Utilitarian and Emotional Evaluations overlap in the center, suggesting a blend of practical and emo- tional judgments. Tech & Functionality Evalua- tions appear in a distinct region, separate from general Positive & Negative
https://arxiv.org/abs/2505.15695v1
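A minimal sketch of how this style of analysis can be reproduced is shown below: keyword embeddings are grouped with K-means and projected to 2-D with t-SNE, as in Figure 7. The random embeddings, the 384-dimension size, and the cluster count of 7 are illustrative assumptions; the paper does not specify its embedding model or clustering settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical inputs: one embedding vector per extracted keyword.
# Any sentence-embedding model could produce these; shapes are what matter here.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 384))  # e.g., 384-dim sentence embeddings

# Cluster keywords into topic groups (seven feature topics appear in Figure 7).
labels = KMeans(n_clusters=7, random_state=0, n_init=10).fit_predict(embeddings)

# Project to 2-D for visualization.
coords = TSNE(n_components=2, random_state=0, perplexity=30).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=8, cmap="tab10")
plt.title("Keyword clusters (K-means labels, t-SNE projection)")
plt.savefig("keyword_clusters.png", dpi=150)
```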
B Experimental Details

B.1 Evaluation Models

Proprietary LLMs We used the most up-to-date versions of the OpenAI API (https://openai.com/index/openai-api/) and the Anthropic API (https://www.anthropic.com/). Specifically, we used the following models:

• GPT-4o-mini: gpt-4o-mini-2024-07-18
• GPT-4o: gpt-4o-2024-08-06
• Claude-3.5-Haiku: claude-3-5-haiku-20241022
• Claude-3.5-Sonnet: claude-3-5-sonnet-20241022

Open-sourced LLMs We used Hugging Face model cards and ran them on two NVIDIA A100 GPUs. Specifically, we used the following models:

• Llama3-8B-Instruct: meta-llama/meta-llama-3-8b-instruct
• Llama3-70B-Instruct: meta-llama/meta-llama-3-70b-instruct
• Gemma2-9B-it: google/gemma-2-9b-it
• Gemma2-27B-it: google/gemma-2-27b-it
• Qwen2.5-7B-Instruct: Qwen/Qwen2.5-7B-Instruct
• DeepSeek-7B-chat: deepseek-ai/deepseek-llm-7b-chat

B.2 Feature-centric opinion extraction

RM metric threshold selection We set the threshold of the Relaxed Match (RM) metric to 0.7, as it empirically provides the optimal balance—capturing meaningful semantic similarities without being overly permissive. Prior information extraction (IE) research has highlighted that exact span matching may underestimate model performance due to its overly strict nature. To alleviate this issue, previous studies have proposed overlap-based evaluations with thresholds set at 0.5 (Han et al., 2023a) or 0.75 (Sharif et al., 2024). As shown in Tables 7 and 8, lower thresholds tend to excessively acknowledge partial overlaps, inflating recall to an unrealistic degree. Conversely, higher thresholds often miss semantically valid matches due to minor textual variations, causing the RM metric performance to converge toward Exact Match (EM) scores and consequently lose its intended flexibility. Thus, our experiments confirm that a threshold of 0.7 achieves optimal RM performance, which we subsequently adopt for our main experiments.

Human Alignment in Tuple Matching This experiment aims to identify the most appropriate matching metric for reliably evaluating LLMs' tuple extraction performance. To this end, we investigate which of the three evaluation metrics used in the FOE task—Exact Match (EM), Relaxed Match (RM), and Contextual Match (CM)—best aligns with human judgment. First, we randomly select 100 tuples predicted by GPT-4o given the input content texts. Then, five human annotators evaluated the validity of each predicted-gold tuple pair using binary judgments: 1 if they considered the pair to be a match, and 0 otherwise. Based on these judgments, we computed the correlation coefficients between human agreement and each metric.

Table 3 reports the Pearson r, Spearman ρ, and Kendall τ correlation coefficients, averaged across the five annotators over the 100 samples. Across all correlation metrics, CM achieved the highest alignment with human judgment, indicating that Contextual Match best reflects how humans assess tuple matching. These findings suggest that CM serves as the most reliable and appropriate metric for evaluating the performance of LLMs in tuple extraction tasks.
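The coefficients themselves are standard; a sketch using scipy.stats on toy binary judgments (invented here purely for illustration) looks like this:

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

# Toy data: per-tuple binary human judgments (1 = match) and the
# corresponding binary decisions of one matching metric (e.g., CM).
human = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
metric = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

r, _ = pearsonr(human, metric)
rho, _ = spearmanr(human, metric)
tau, _ = kendalltau(human, metric)
print(f"Pearson r={r:.3f}, Spearman rho={rho:.3f}, Kendall tau={tau:.3f}")
```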
B.3 Opinion-centric insight generation

LLM-Judge Evaluation Criteria To evaluate summary quality across diverse criteria, we use GPT-4o and randomly sample 100 pieces of content from each of the four content types. The scoring scale for each evaluation follows the previous NLG evaluation framework, G-EVAL (Liu et al., 2023a), and is measured on a scale from 1 to 5. We adopt the following six criteria:

• Faithfulness: Evaluate whether the summary faithfully reflects the original review without distortion and check for any hallucinations.
• Coverage: Evaluate whether the summary effectively captures and represents the key opinions expressed in the review.
• Specificity: Evaluate whether the summary presents meaningful and relevant details rather than being vague or overly generic.
• Insightfulness: Evaluate whether the summary provides meaningful insights that enhance understanding or decision-making for the reader.
• Intent: Evaluate whether the summary accurately preserves the author's original tone, intent, and nuances without altering the emotional or stylistic essence of the review.
• Fluency: Evaluate whether the summary is naturally written, grammatically correct, and easy to read.

Case study: annotated opinion-centric summary Table 28 presents representative examples of gold opinion-centric summaries for each content type.

Case study: LLMs' summarization capability We illustrate our LLM-Judge evaluation protocol with two case studies: one on a review site (Tables 29 and 31), and one on a Reddit post (Tables 30 and 32). In each set, the first table shows the input content, the model-generated summary (GPT-4o), the gold opinion-centric summary, and the automatic metric A3CU score, while the second table provides a reference-free, six-dimension LLM-Judge assessment complete with per-dimension scores and detailed reasoning. Although the A3CU metric assigns the review site example a high score (60.91) and the Reddit example a low score (10.12), our reference-free LLM-Judge evaluation reveals that the model's performance on the Reddit content is in fact stronger across several human-aligned dimensions—particularly Coverage and Specificity. This divergence highlights the limitations of purely reference-based, automatic metrics in capturing the nuanced, insight-oriented qualities of opinion summaries. We therefore conclude that for OIG evaluation, combining automatic reference-based metrics with a reference-free, human-aligned judging protocol yields a more comprehensive and reliable assessment of LLMs' true insight-generation capabilities.

Human Evaluation We assess the quality of the generated summary through a human evaluation conducted on Amazon Mechanical Turk (AMT). Specifically, we randomly sample 200 examples from our benchmark and ask three human judges per example to evaluate summaries generated by GPT-4o under two settings: 1) using only the input content, and 2) using both the input content and gold tuples. Each judge rates the quality of the summaries on a 1 to 5 scale across six criteria. The AMT interface used for human evaluation is presented in Figures 11, 12, 13, 14, 15, 16 and 17.

B.4 Performance of LLMs by content type

We report the Feature-centric opinion extraction performance of various LLMs for each content type in Tables 9, 10, 11 and 12, and the Opinion-centric insight generation performance in Tables 13 and 14.

B.5 Prompts

We present the prompts used in our experiments:

• Data Annotation: The prompt designed for Feature-centric opinion extraction is shown in Table 24, and the prompt for Opinion-centric insight generation is shown in Table 26.
• Feature-centric opinion extraction: The prompt designed for Feature-centric opinion extraction is shown in Table 15.
• Opinion-centric insight generation: The prompt designed for Opinion-centric insight generation is shown in Table 16.
• Contextual Match: The prompt used for performing Contextual Match is provided in Table 17.
• LLM-Judge Evaluation: The prompts for LLM-Judge evaluation are presented in Tables 18, 19, 20, 21, 22 and 23.

Attribute | Range | Blog | Reddit | Review Site | YouTube | Total | Ratio (%)
Tuples | ≤10 | 8 | 79 | 14 | 125 | 226 | 37.7
Tuples | ≤15 | 14 | 49 | 33 | 32 | 128 | 21.3
Tuples | ≤20 | 19 | 21 | 33 | 21 | 94 | 15.7
Tuples | ≤25 | 20 | 15 | 13 | 7 | 55 | 9.2
Tuples | ≤30 | 20 | 8 | 8 | 8 | 44 | 7.3
Tuples | ≤35 | 34 | 5 | 5 | 9 | 53 | 8.8
Entities | 1 | 58 | 53 | 89 | 128 | 328 | 54.7
Entities | 2 | 44 | 34 | 12 | 39 | 129 | 21.5
Entities | 3 | 9 | 24 | 3 | 17 | 53 | 8.8
Entities | 4 | 2 | 24 | 0 | 7 | 33 | 5.5
Entities | 5 | 1 | 10 | 1 | 3 | 15 | 2.5
Entities | ≥6 | 1 | 32 | 1 | 8 | 42 | 7.0
Tokens | ≤200 | 0 | 100 | 0 | 81 | 181 | 30.2
Tokens | ≤400 | 3 | 58 | 85 | 68 | 214 | 35.7
Tokens | ≤1000 | 9 | 14 | 21 | 44 | 88 | 14.7
Tokens | ≤2000 | 39 | 3 | 0 | 8 | 50 | 8.3
Tokens | ≤3000 | 64 | 2 | 0 | 1 | 67 | 11.2
Users | 1 | 58 | 47 | 89 | 128 | 322 | 53.7
Users | 2 | 44 | 35 | 12 | 39 | 130 | 21.7
Users | 3 | 9 | 24 | 3 | 17 | 53 | 8.8
Users | 4 | 2 | 26 | 0 | 7 | 35 | 5.8
Users | ≥5 | 2 | 45 | 2 | 11 | 60 | 10.0
Total | | 115 | 177 | 106 | 202 | 600 | 100.0

Table 6: Statistics of the OOMB dataset across different attributes.

[Figure 7: Visualization of feature keywords (Left) and opinion keywords (Right) extracted via K-means clustering using t-SNE. Each point represents a keyword, and colors indicate different topic clusters. Feature-keyword clusters: Interior & Design, Performance & Powertrain, Driving Experience, Price & Value, Overall Vehicle, Infotainment & Digital Systems, Engine & Driving Performance. Opinion-keyword clusters: Price & Value, Utilitarian Evaluations, Positive & Negative, Tech & Functionality Evaluations, Specs & Performance, Emotional Evaluations.]

Model | EM (Pre/Rec/F1) | L-RM ≥0.7 | L-RM ≥0.8 | L-RM ≥0.9
Proprietary LLMs
GPT-4o-mini | 3.91 / 1.99 / 2.62 | 11.29 / 5.70 / 7.52 | 7.72 / 3.68 / 4.98 | 4.80 / 2.29 / 3.10
GPT-4o | 7.27 / 5.18 / 6.02 | 15.86 / 11.39 / 13.20 | 11.90 / 8.51 / 9.92 | 8.43 / 6.03 / 7.03
Claude-3.5-Haiku | 6.13 / 3.02 / 4.01 | 15.02 / 7.51 / 9.94 | 10.61 / 4.97 / 6.77 | 7.17 / 3.36 / 4.57
Claude-3.5-Sonnet | 11.12 / 6.32 / 7.97 | 22.97 / 13.02 / 16.46 | 17.81 / 9.61 / 12.48 | 12.57 / 6.78 / 8.81
Open-source LLMs
Llama3-8B-Instruct | 8.49 / 6.28 / 7.17 | 16.75 / 12.18 / 14.02 | 13.13 / 9.17 / 10.80 | 9.85 / 6.88 / 8.10
Llama3-70B-Instruct | 7.26 / 5.57 / 6.27 | 15.21 / 11.66 / 13.13 | 12.07 / 9.07 / 10.36 | 8.67 / 6.52 / 7.44
Gemma2-9B-it | 6.37 / 4.51 / 5.25 | 14.17 / 10.17 / 11.78 | 9.98 / 7.27 / 8.41 | 7.11 / 5.18 / 5.99
Gemma2-27B-it | 7.05 / 5.61 / 6.20 | 14.33 / 11.77 / 12.82 | 10.80 / 9.08 / 9.87 | 7.55 / 6.35 / 6.90
Qwen2.5-7B-Instruct | 6.55 / 4.18 / 5.05 | 12.97 / 8.32 / 10.02 | 9.99 / 6.14 / 7.60 | 7.35 / 4.51 / 5.59
DeepSeek-7B-chat | 3.00 / 1.63 / 2.07 | 5.86 / 3.13 / 4.02 | 4.88 / 2.31 / 3.14 | 3.78 / 1.79 / 2.43

Table 7: Ablation study on the FOE task using three L-RM thresholds (≥0.7, ≥0.8, ≥0.9).

Model | EM (Pre/Rec/F1) | S-RM ≥0.7 | S-RM ≥0.8 | S-RM ≥0.9
Proprietary LLMs
GPT-4o-mini | 3.91 / 1.99 / 2.62 | 15.50 / 7.76 / 10.27 | 9.86 / 4.70 / 6.37 | 5.48 / 2.61 / 3.54
GPT-4o | 7.27 / 5.18 / 6.02 | 21.23 / 15.31 / 17.71 | 14.76 / 10.55 / 12.31 | 9.39 / 6.72 / 7.83
Claude-3.5-Haiku | 6.13 / 3.02 / 4.01 | 20.60 / 10.36 / 13.68 | 13.41 / 6.28 / 8.56 | 8.55 / 4.00 / 5.45
Claude-3.5-Sonnet | 11.12 / 6.32 / 7.97 | 29.30 / 16.52 / 20.93 | 21.08 / 11.37 / 14.78 | 14.08 / 7.59 / 9.87
Open-source LLMs
Llama3-8B-Instruct | 8.49 / 6.28 / 7.17 | 21.33 / 15.43 / 17.80 | 15.52 / 10.84 / 12.77 | 10.59 / 7.39 / 8.71
Llama3-70B-Instruct | 7.26 / 5.57 / 6.27 | 19.94 / 15.18 / 17.15 | 13.80 / 10.37 / 11.85 | 9.44 / 7.09 / 8.10
Gemma2-9B-it | 6.37 / 4.51 / 5.25 | 17.73 / 12.59 / 14.64 | 11.73 / 8.55 / 9.89 | 7.73 / 5.64 / 6.52
Gemma2-27B-it | 7.05 / 5.61 / 6.20 | 19.29 / 15.83 / 17.25 | 13.09 / 11.01 / 11.96 | 8.18 / 6.88 / 7.48
Qwen2.5-7B-Instruct | 6.55 / 4.18 / 5.05 | 18.22 / 11.83 / 14.18 | 12.36 / 7.59 / 9.41 | 8.38 / 5.15 / 6.38
DeepSeek-7B-chat | 3.00 / 1.63 / 2.07 | 8.20 / 4.34 / 5.61 | 5.68 / 2.69 / 3.65 | 3.94 / 1.87 / 2.53

Table 8: Ablation study on the FOE task using three S-RM thresholds (≥0.7, ≥0.8, ≥0.9).

Model | EM (Pre/Rec/F1) | L-RM | S-RM | CM
Proprietary LLMs
GPT-4o-mini | 2.40 / 1.08 / 1.49 | 10.17 / 4.58 / 6.31 | 13.44 / 6.05 / 8.34 | 49.14 / 22.55 / 29.49
GPT-4o | 4.67 / 3.53 / 4.02 | 14.27 / 10.78 / 12.29 | 17.91 / 13.53 / 15.41 | 38.21 / 28.34 / 30.63
Claude-3.5-Haiku | 3.31 / 1.44 / 2.00 | 10.15 / 4.41 / 6.15 | 14.96 / 6.50 / 9.07 | 46.95 / 21.98 / 28.75
Claude-3.5-Sonnet | 5.68 / 2.91 / 3.85 | 18.75 / 9.61 / 12.71 | 25.51 / 13.07 / 17.29 | 45.30 / 23.97 / 30.08
Open-source LLMs
Llama3-8B-Instruct | 4.36 / 2.78 / 3.39 | 11.74 / 7.48 / 9.14 | 14.82 / 9.44 / 11.54 | 39.53 / 25.06 / 29.20
Llama3-70B-Instruct | 5.86 / 4.41 / 5.04 | 12.34 / 9.28 / 10.59 | 15.86 / 11.93 / 13.61 | 35.42 / 27.00 / 28.96
Gemma2-9B-it | 3.38 / 2.80 / 3.06 | 10.01 / 8.36 / 9.15 | 11.98 / 9.92 / 10.86 | 33.45 / 26.90 / 27.64
Gemma2-27B-it | 2.54 / 2.54 / 2.54 | 8.19 / 8.22 / 8.20 | 10.95 / 10.98 / 10.96 | 30.76 / 27.42 / 26.27
Qwen2.5-7B-Instruct | 3.81 / 1.96 / 2.59 | 8.83 / 4.54 / 6.00 | 12.95 / 6.67 / 8.80 | 28.15 / 18.26 / 20.24
DeepSeek-7B-chat | 1.57 / 0.49 / 0.75 | 2.72 / 0.85 / 1.29 | 3.24 / 1.01 / 1.54 | 34.89 / 12.07 / 16.25

Table 9: Performance comparison of different models for the FOE task across various evaluation metrics on the Blog type.

Model | EM (Pre/Rec/F1) | L-RM | S-RM | CM
Proprietary LLMs
GPT-4o-mini | 3.65 / 1.43 / 2.06 | 9.23 / 3.62 / 5.20 | 14.30 / 5.61 / 8.05 | 68.73 / 33.07 / 41.11
GPT-4o | 9.74 / 5.65 / 7.15 | 18.18 / 10.54 / 13.34 | 23.87 / 13.84 / 17.52 | 63.59 / 41.83 / 47.70
Claude-3.5-Haiku | 6.25 / 2.43 / 3.49 | 14.34 / 5.57 / 8.02 | 18.34 / 7.12 / 10.25 | 68.80 / 34.67 / 42.43
Claude-3.5-Sonnet | 13.14 / 5.41 / 7.66 | 22.32 / 9.18 / 13.01 | 28.60 / 11.77 / 16.68 | 68.68 / 34.84 / 42.90
Open-source LLMs
Llama3-8B-Instruct | 9.28 / 5.61 / 6.99 | 17.31 / 10.46 / 13.04 | 22.45 / 13.56 / 16.91 | 56.18 / 40.66 / 43.77
Llama3-70B-Instruct | 8.39 / 4.97 / 6.24 | 15.38 / 9.11 / 11.44 | 21.29 / 12.60 / 15.83 | 58.97 / 37.66 / 42.89
Gemma2-9B-it | 8.10 / 4.53 / 5.81 | 15.78 / 8.83 / 11.32 | 21.39 / 11.97 / 15.35 | 61.53 / 40.76 / 45.74
Gemma2-27B-it | 9.84 / 5.92 / 7.40 | 15.92 / 9.58 / 11.96 | 21.60 / 13.00 / 16.23 | 60.40 / 37.04 / 42.54
Qwen2.5-7B-Instruct | 8.67 / 4.41 / 5.85 | 15.85 / 8.07 / 10.70 | 21.00 / 10.70 / 14.17 | 38.56 / 19.64 / 26.03
DeepSeek-7B-chat | 3.71 / 1.83 / 2.45 | 7.66 / 3.78 / 5.06 | 9.67 / 4.77 / 6.39 | 53.42 / 32.32 / 36.14

Table 10: Performance comparison of different models for the FOE task across various evaluation metrics on the Reddit type.

Model | EM (Pre/Rec/F1) | L-RM | S-RM | CM
Proprietary LLMs
GPT-4o-mini | 4.87 / 2.93 / 3.66 | 13.24 / 7.97 / 9.95 | 17.72 / 10.67 / 13.32 | 77.83 / 48.15 / 57.65
GPT-4o | 3.92 / 3.17 / 3.50 | 10.81 / 8.73 / 9.66 | 19.23 / 15.53 / 17.19 | 70.44 / 57.93 / 61.80
Claude-3.5-Haiku | 5.54 / 3.40 / 4.21 | 19.10 / 11.72 / 14.53 | 27.70 / 17.00 / 21.07 | 73.41 / 46.84 / 56.01
Claude-3.5-Sonnet | 14.01 / 9.44 / 11.28 | 29.16 / 19.64 / 23.47 | 36.64 / 24.68 / 29.49 | 70.94 / 49.52 / 57.14
Open-source LLMs
Llama3-8B-Instruct | 9.82 / 7.56 / 8.54 | 21.31 / 16.41 / 18.54 | 28.46 / 21.92 / 24.77 | 64.15 / 50.72 / 54.89
Llama3-70B-Instruct | 5.90 / 4.63 / 5.19 | 17.16 / 13.48 / 15.10 | 24.33 / 19.11 / 21.41 | 66.23 / 52.56 / 57.13
Gemma2-9B-it | 7.24 / 5.22 / 6.06 | 16.99 / 12.25 / 14.24 | 21.87 / 15.77 / 18.32 | 63.11 / 48.66 / 53.49
Gemma2-27B-it | 6.39 / 5.63 / 5.99 | 16.98 / 14.95 / 15.90 | 23.97 / 21.10 / 22.44 | 62.52 / 53.93 / 56.37
Qwen2.5-7B-Instruct | 3.55 / 2.99 / 3.24 | 9.74 / 8.21 / 8.91 | 17.66 / 14.89 / 16.16 | 62.82 / 53.48 / 55.82
DeepSeek-7B-chat | 2.22 / 1.11 / 1.48 | 5.25 / 2.64 / 3.51 | 10.50 / 5.28 / 7.02 | 63.69 / 34.16 / 42.16

Table 11: Performance comparison of different models for the FOE task across various evaluation metrics on the Review Site type.

Model | EM (Pre/Rec/F1) | L-RM | S-RM | CM
Proprietary LLMs
GPT-4o-mini | 4.81 / 2.62 / 3.39 | 12.87 / 7.02 / 9.09 | 16.50 / 9.00 / 11.65 | 63.77 / 42.11 / 45.49
GPT-4o | 10.73 / 8.51 / 9.49 | 19.78 / 15.68 / 17.49 | 23.33 / 18.50 / 20.63 | 61.28 / 53.61 / 51.81
Claude-3.5-Haiku | 9.13 / 4.75 / 6.25 | 16.27 / 8.46 / 11.13 | 21.41 / 11.13 / 14.64 | 63.03 / 43.5 / 46.79
Claude-3.5-Sonnet | 11.61 / 7.62 / 9.20 | 20.95 / 13.75 / 16.60 | 25.47 / 16.72 / 20.19 | 60.1 / 48.79 / 48.93
Open-source LLMs
Llama3-8B-Instruct | 10.65 / 8.90 / 9.70 | 17.75 / 14.84 / 16.16 | 21.07 / 17.61 / 19.18 | 48.28 / 49.99 / 44.42
Llama3-70B-Instruct | 9.93 / 8.95 / 9.42 | 17.78 / 16.02 / 16.86 | 21.19 / 19.09 / 20.08 | 50.54 / 51.07 / 45.76
Gemma2-9B-it | 6.76 / 5.49 / 6.06 | 13.83 / 11.23 / 12.39 | 15.66 / 12.71 / 14.03 | 52.44 / 48.14 / 45.38
Gemma2-27B-it | 9.45 / 8.36 / 8.87 | 16.22 / 14.34 / 15.22 | 20.64 / 18.25 / 19.37 | 50.8 / 50.02 / 45.46
Qwen2.5-7B-Instruct | 10.45 / 7.62 / 8.81 | 17.38 / 12.66 / 14.65 | 21.18 / 15.43 / 17.85 | 52.97 / 45.63 / 43.92
DeepSeek-7B-chat | 4.50 / 3.12 / 3.68 | 7.64 / 5.29 / 6.25 | 9.21 / 6.38 / 7.54 | 45.64 / 36.73 / 34.94

Table 12: Performance comparison of different models for the FOE task across various evaluation metrics on the YouTube type.

Model | Blog: R-1 / R-2 / R-L / BS / A3CU | Reddit: R-1 / R-2 / R-L / BS / A3CU
Proprietary LLMs
GPT-4o-mini | 40.14 / 16.32 / 36.06 / 90.32 / 36.03 | 38.41 / 12.60 / 33.15 / 90.17 / 37.71
GPT-4o | 37.06 / 14.68 / 33.03 / 89.18 / 31.34 | 41.13 / 15.33 / 36.15 / 90.22 / 40.65
Claude-3.5-Haiku | 31.15 / 9.91 / 27.01 / 87.45 / 25.75 | 35.20 / 10.49 / 30.42 / 89.10 / 34.22
Claude-3.5-Sonnet | 31.87 / 9.04 / 28.44 / 87.96 / 25.69 | 34.77 / 9.88 / 30.49 / 89.28 / 34.62
Open-source LLMs
Llama3-8B-Instruct | 35.77 / 14.10 / 32.63 / 89.52 / 30.27 | 33.73 / 9.26 / 29.58 / 85.82 / 28.52
Llama3-70B-Instruct | 38.82 / 15.65 / 34.70 / 90.09 / 32.22 | 37.60 / 12.13 / 32.76 / 89.70 / 35.24
Gemma2-9B-it | 35.86 / 13.55 / 33.45 / 89.77 / 28.95 | 35.00 / 10.31 / 30.54 / 88.36 / 30.72
Gemma2-27B-it | 34.73 / 12.42 / 31.07 / 89.74 / 31.00 | 36.27 / 11.04 / 31.38 / 90.01 / 34.36
Qwen2.5-7B-Instruct | 35.54 / 12.45 / 32.29 / 89.88 / 26.30 | 31.27 / 8.74 / 26.92 / 89.37 / 25.24
DeepSeek-7B-chat | 34.79 / 11.75 / 30.95 / 73.04 / 21.09 | 35.21 / 10.22 / 30.89 / 75.50 / 25.98

Table 13: Performance comparison of different models for the OIG task using lexical and semantic metrics on the Blog and Reddit types.

Model | Review Site: R-1 / R-2 / R-L / BS / A3CU | YouTube: R-1 / R-2 / R-L / BS / A3CU
Proprietary LLMs
GPT-4o-mini | 39.85 / 14.35 / 35.24 / 90.42 / 41.60 | 38.90 / 12.82 / 33.90 / 90.47 / 38.96
GPT-4o | 39.44 / 14.50 / 34.98 / 89.83 / 41.07 | 40.15 / 14.67 / 35.65 / 90.28 / 41.45
Claude-3.5-Haiku | 34.57 / 10.20 / 29.76 / 88.70 / 36.87 | 33.36 / 9.76 / 29.14 / 88.76 / 31.57
Claude-3.5-Sonnet | 33.80 / 9.44 / 29.22 / 88.84 / 34.34 | 34.12 / 9.57 / 30.03 / 89.11 / 32.47
Open-source LLMs
Llama3-8B-Instruct | 40.97 / 15.85 / 36.60 / 90.05 / 38.24 | 34.77 / 9.88 / 30.49 / 89.66 / 33.89
Llama3-70B-Instruct | 36.00 / 12.00 / 31.64 / 90.34 / 37.30 | 44.88 / 17.38 / 29.49 / 90.64 / 36.70
Gemma2-9B-it | 34.10 / 11.17 / 30.06 / 88.45 / 35.43 | 34.84 / 10.54 / 30.70 / 86.36 / 29.41
Gemma2-27B-it | 35.48 / 12.43 / 30.84 / 90.46 / 38.58 | 35.12 / 10.88 / 30.87 / 90.10 / 32.90
Qwen2.5-7B-Instruct | 33.10 / 10.71 / 29.61 / 89.73 / 25.29 | 35.37 / 11.27 / 22.89 / 89.27 / 24.47
DeepSeek-7B-chat | 33.74 / 9.81 / 29.01 / 83.65 / 29.12 | 36.47 / 10.84 / 32.14 / 77.44 / 27.21

Table 14: Performance comparison of different models for the OIG task using lexical and semantic metrics on the Review Site and YouTube types.

Feature-centric opinion extraction (FOE) task prompt

[Task Description]
You are a car opinion miner for the user. Your task is to extract tuples in the form of (entity, feature, opinion) by identifying attributes, specific features, or components mentioned in the text and associating opinions with each feature.
1. An "entity" is the name of the car model with brand about which opinions are expressed (e.g., "volvo xc90", "toyota camry", "Nissan Sentra").
2. A "feature" is a specific characteristic, attribute, or component of an entity that users mention or evaluate (e.g., "interior design," "fuel efficiency," "safety features").
- The performance, design, or experience (e.g., "handling", "ride comfort").
- Distinct functions or technologies in a vehicle (e.g., "infotainment system").
- Physical parts or systems that make up the vehicle (e.g., "brake", "transmission").
3. An "opinion" is a subjective or objective judgment, reaction, experience, evaluation, or feedback about the entity's feature, including assessments of quality, performance, or value, as well as direct responses or reactions from users based on their experience.

IMPORTANT:
• Features and opinions MUST be extracted from the input text. Never generate words or terms that do not exist in the text.
• However, if the feature corresponding to an opinion does not exist in the text and is implicit, the feature is treated as "NULL".
• The opinion MUST be no more than 5 words.
• The output must be in valid JSON format, but **DO NOT** include "json" code block delimiters (e.g., ```json ... ```).
• Return **only** the JSON object, without any extra text, explanations, or comments.
• Provide only the tuples. Do not mention your process or how you arrived at it.
• Note: Return your results in JSON format only, with the following structure: {'opinion_tuple': [{'entity': <str>, 'feature': <str>, 'opinion': <str>}, ..., {'entity': <str>, 'feature': <str>, 'opinion': <str>}]}

[Content Text] ...

Table 15: The prompt for Feature-centric opinion extraction.
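Because the prompt demands a bare JSON object, a small validation step before scoring is natural. The parser below is our own illustrative sketch, not the benchmark's released code; it assumes the model reply is a single JSON string shaped as specified above.

```python
import json

def parse_foe_output(raw: str) -> list[tuple[str, str, str]]:
    """Parse a model reply into (entity, feature, opinion) tuples,
    discarding malformed entries instead of failing the whole sample."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # an unparseable reply counts as zero extracted tuples
    tuples = []
    for item in data.get("opinion_tuple", []):
        entity = str(item.get("entity", "")).strip().lower()
        feature = str(item.get("feature", "NULL")).strip().lower()
        opinion = str(item.get("opinion", "")).strip().lower()
        # Enforce the prompt's constraint: opinions are at most 5 words.
        if entity and opinion and len(opinion.split()) <= 5:
            tuples.append((entity, feature, opinion))
    return tuples

raw = '{"opinion_tuple": [{"entity": "volvo xc90", "feature": "ride comfort", "opinion": "excellent"}]}'
print(parse_foe_output(raw))
```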
Opinion-centric insight generation (OIG) task prompt

[Task Description]
You are a product and marketing manager at a global automotive company. Your task is to produce a free-form summary that categorizes and organizes a user text into higher-level insights, such that the report alone provides a clear understanding of the key opinions expressed. This summary should be written in natural, human-like language and structured around the core topics (features).

Steps to Follow:
1. Read and Understand
• Examine the online text to identify its main points.
2. Organize Top-Level Topics
• Group similar or related tuples into clear categories (e.g., "Engine Issues," "Warranty Feedback," etc.).
• Reflect on the frequency or intensity of opinions if it helps convey importance.
3. Create a Three-Line Report
• Line 1: Highlight the most frequently mentioned or emphasized features, grouping related opinions into high-level categories.
• Line 2: Focus on the features that a user strongly praised or criticized, incorporating the intensity or frequency of opinions where applicable.
• Line 3: Provide a cohesive conclusion summarizing the overarching sentiment or key takeaway from a user's text.
4. Write the Summary
• Make it short (three to five lines).
• Use clear, direct language.
• Ensure that reading only this summary sufficiently conveys a user's main viewpoints.

What to Avoid:
• Provide only the summary. Do not mention your process or how you arrived at it.
• Do not include introductory phrases such as "Here is a summary of the review" or "Based on the review text."
• Do not directly copy sentences from the online text; rephrase and synthesize information.
• Carefully analyze the given text to determine the number of users and decide whether to use "user" or "users" accordingly.

Note: Return your results in JSON format only, with the following structure: {'summary': <str>}

[Content Text] ...

Table 16: The prompt for Opinion-centric insight generation.

Contextual Match (CM) prompt

[Task Description]
You are given two lists of tuples, each in the form [(entity, feature, opinion), (entity, feature, opinion), ...]. One list represents the Gold (correct) tuples, and the other list represents the Model's Predicted tuples. Your goal is to calculate two values:
- matched_pred_tuple: The matched Predicted tuple.
- matched_gold_tuple: The matched Gold tuple.

Match Criteria:
- Examine the two lists of tuples to identify their main points.
- Convert each element (entity, feature, opinion) to lowercase before comparing (e.g., "kia soul" vs. "Kia Soul" are equivalent).
- Allow flexibility when matching tuples by considering semantic equivalence, synonyms, rephrased expressions, or other valid variations that convey the same context or meaning. For instance, the following cases should be considered as valid matches:
• Pred: ("toyota corolla", "brakes", "getting hot"), Gold: ("toyota corolla le", "brakes", "getting hot")
• Pred: ("toyota camry", "null", "looks better"), Gold: ("camry", "looks", "better")
• Pred: ("porsche 911", "performance", "can go effortlessly fast"), Gold: ("porsche 911", "drive", "effortlessly fast")
• Pred: ("aston martin vanquish", "rear badge", "would look better"), Gold: ("aston martin vanquish", "badge", "better")
- A tuple is considered a relaxed match if all three elements are semantically equivalent after applying these transformations.
- Do not count a match more than once if there are duplicates.
- Note: Return your results in JSON format only, with the following structure:
{'matched_tuple_pair': [ {'matched_pred_tuple': ('entity', 'feature', 'opinion'), 'matched_gold_tuple': ('entity', 'feature', 'opinion') ... },

[Pred Tuples] ...
[Gold Tuples] ...

Table 17: The prompt for Contextual Match (CM).
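Given matched tuple pairs, precision, recall, and F1 follow directly. The sketch below is a simplified stand-in: it approximates a lexical relaxed match as a normalized token-overlap ratio at the paper's 0.7 threshold, which may differ in detail from the authors' exact RM implementation.

```python
def overlap(a: str, b: str) -> float:
    """Token-overlap ratio between two spans, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / max(len(ta), len(tb))

def relaxed_match(pred, gold, threshold=0.7) -> bool:
    """A predicted tuple matches a gold tuple if every element overlaps enough."""
    return all(overlap(p, g) >= threshold for p, g in zip(pred, gold))

def prf1(preds, golds, threshold=0.7):
    matched_gold = set()
    tp = 0
    for p in preds:
        for i, g in enumerate(golds):
            if i not in matched_gold and relaxed_match(p, g, threshold):
                matched_gold.add(i)  # each gold tuple may be matched only once
                tp += 1
                break
    pre = tp / len(preds) if preds else 0.0
    rec = tp / len(golds) if golds else 0.0
    f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
    return pre, rec, f1

preds = [("toyota camry se", "ride comfort", "very smooth and quiet")]
golds = [("toyota camry se", "ride comfort", "smooth and quiet")]
print(prf1(preds, golds))  # (1.0, 1.0, 1.0): opinion overlap 0.75 passes 0.7
```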
LLM-Judge Evaluation prompt - (Faithfulness)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is to evaluate the summary based on its Faithfulness. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Faithfulness (1-5) – Evaluate whether the summary accurately represents the original review without distorting its meaning, omitting key details, or introducing hallucinated information that was not present in the original text.
• Score 1: The summary completely distorts the original online content and contains much incorrect information. It cannot be trusted at all.
• Score 2: The summary significantly misrepresents the original online content with several pieces of incorrect information.
• Score 3: The summary partially reflects the original online content but has some minor incorrect information.
• Score 4: The summary largely reflects the original online content with only a few minor inaccuracies.
• Score 5: The summary completely reflects the original online content without any distortions or incorrect information.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary faithfully reflects the original review without any distortion.
4. Check if the summary contains any hallucinated information not present in the review.
5. Assign a score from 1 to 5 for Faithfulness, where 1 indicates very poor faithfulness and 5 indicates excellent faithfulness.

Table 18: The prompt for LLM-Judge Evaluation (Faithfulness).

LLM-Judge Evaluation prompt - (Coverage)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is to evaluate the summary based on its Coverage. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Coverage (1-5) – Evaluate whether the summary effectively captures and represents the key opinions expressed in the review without omitting significant details or key points. Ensure that all essential opinions are included and accurately represented.
• Score 1: The summary fails to capture any key opinions from the online content. The content is either completely missing or irrelevant to the original opinions.
• Score 2: The summary captures only a small portion of key opinions. Many important opinions from the online content are missing.
• Score 3: The summary captures some key opinions but misses others. The coverage is partial and could be more comprehensive.
• Score 4: The summary effectively captures most key opinions from the online content. The coverage is good but may miss minor details.
• Score 5: The summary comprehensively captures all key opinions from the online content. Nothing important is missing, and the coverage is complete.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary effectively captures and represents the key opinions expressed in the review.
4. Consider if any significant details or opinions are missing from the summary.
5. Assign a score from 1 to 5 for Coverage, where 1 indicates very poor coverage and 5 indicates excellent coverage.

Table 19: The prompt for LLM-Judge Evaluation (Coverage).
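As a rough sketch of how one of these rubric prompts could be run programmatically (the paper does not publish its evaluation harness, so the prompt assembly and judge model below are assumptions on our part):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(review: str, summary: str, rubric: str) -> str:
    """Fill a rubric prompt with one review/summary pair and ask the judge
    model for a 1-5 score; parsing the score out of the reply is kept simple."""
    prompt = rubric.replace("[Review Text:] ...", f"[Review Text:]\n{review}") \
                   .replace("[Summary:] ...", f"[Summary:]\n{summary}")
    resp = client.chat.completions.create(
        model="gpt-4o",  # the paper uses GPT-4o as the judge
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content
```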
LLM-Judge Evaluation prompt - (Specificity)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is to evaluate the summary based on its Specificity. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Specificity (1-5) – Evaluate whether the summary presents meaningful and relevant details by including concrete information rather than being vague or overly generic. Ensure that the summary provides clear, detailed points that directly relate to the review content.
• Score 1: The summary is extremely vague and generic, lacking any meaningful details. It uses broad generalizations without specific examples or descriptions.
• Score 2: The summary includes very few specific details. Most information is presented in a general way without concrete examples.
• Score 3: The summary includes some specific details but could be more precise. There is a mix of specific and generic information.
• Score 4: The summary provides good specific details in most areas. The information is concrete and meaningful, though some minor points could be more detailed.
• Score 5: The summary is highly specific throughout, providing precise and meaningful details. All information is concrete with relevant examples and descriptions.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary provides specific, concrete details and avoids overly general or ambiguous statements.
4. Assign a score from 1 to 5 for Specificity, where 1 indicates very poor specificity and 5 indicates excellent specificity.

Table 20: The prompt for LLM-Judge Evaluation (Specificity).

LLM-Judge Evaluation prompt - (Insightfulness)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is to evaluate the summary based on its Insightfulness. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Insightfulness (1-5) – Evaluate whether the summary provides meaningful insights that enhance understanding or decision-making for the reader. Ensure that the summary goes beyond a simple restatement of the review to offer unique interpretations or conclusions that add value.
• Score 1: The summary provides no meaningful insights. It simply restates basic facts without adding any value for understanding or decision-making.
• Score 2: The summary offers very limited insights. Most information is superficial and does not help readers gain deeper understanding.
• Score 3: The summary provides some useful insights but could go deeper. It offers moderate value for understanding and decision-making.
• Score 4: The summary provides good insights in most areas. The information is valuable for understanding and decision-making, though some points could be more insightful.
• Score 5: The summary provides excellent insights throughout. All information meaningfully enhances understanding and is highly valuable for decision-making.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary offers insightful, value-added interpretations that deepen understanding or guide decisions.
4. Assign a score from 1 to 5 for Insightfulness, where 1 indicates very poor insightfulness and 5 indicates excellent insightfulness.

Table 21: The prompt for LLM-Judge Evaluation (Insightfulness).

LLM-Judge Evaluation prompt - (Intent)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is
to evaluate the summary based on its Intent. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Intent (1-5) – Evaluate whether the summary accurately preserves the author's original tone, intent, and nuances without altering the emotional or stylistic essence of the review. Consider if the summary maintains the original sentiment and communication style throughout.
• Score 1: The summary completely fails to preserve the original tone and intent. The emotional essence and nuances are lost or significantly distorted.
• Score 2: The summary largely misrepresents the original tone and intent. Many nuances are missed or altered, though some basic sentiments remain intact.
• Score 3: The summary somewhat preserves the original tone and intent. Some nuances are captured while others are missed or altered.
• Score 4: The summary generally preserves the original tone and intent well. Most nuances and emotional elements are accurately captured, with only minor alterations.
• Score 5: The summary perfectly preserves the original tone, intent, and nuances. The emotional and stylistic essence is captured with complete accuracy.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary maintains the original tone, intent, and nuanced details of the review without altering its emotional or stylistic qualities.
4. Assign a score from 1 to 5 for Intent, where 1 indicates very poor preservation and 5 indicates excellent preservation.

Table 22: The prompt for LLM-Judge Evaluation (Intent).

LLM-Judge Evaluation prompt - (Fluency)

[Task Description]
You are provided with an online user's automobile review text along with an opinion-centric insight summary that groups user opinions at the topic level to offer insights. Your task is to evaluate the summary based on its Fluency. Make sure you understand the following evaluation metric very clearly.

Evaluation Criteria:
Fluency (1-5) – Evaluate whether the summary is naturally written, grammatically correct, and easy to read. Consider whether the summary flows smoothly, uses proper grammar, and avoids awkward phrasing, ensuring it is accessible to the reader.
• Score 1: The summary is extremely difficult to read. It is filled with errors and awkward phrasing, and lacks proper grammar.
• Score 2: The summary is difficult to read. It contains many errors and awkward phrases, and often lacks proper grammar.
• Score 3: The summary is somewhat difficult to read. It has some errors, awkward phrasing, and grammatical lapses.
• Score 4: The summary is generally easy to read. It has only a few minor errors or awkward phrases and is largely grammatical.
• Score 5: The summary is extremely easy to read. It is free of errors and awkward phrasing and is grammatically correct throughout.

[Review Text:] ...
[Summary:] ...

Evaluation Steps:
1. Read through the review text provided.
2. Read the corresponding opinion-centric insight summary.
3. Evaluate whether the summary is written in a natural and grammatically correct manner with clear and smooth flow.
4. Assign a score from 1 to 5 for Fluency, where 1 indicates very poor fluency and 5 indicates excellent fluency.

Table 23: The prompt for LLM-Judge Evaluation (Fluency).
car opinion miner for the user. I want to perform feature-centric opinion extraction which is identifying attributes, specific features, or components mentioned in the text and associating opinions with each feature. To maximize coverage and diversity, you will generate a comprehensive set of tuples using your reasoning and understanding of the text. You will receive a user-generated car-related text. Your task is to extract all possible tuples in the form of (entity, feature, opinion) that exist within the text. •Entity : the brand and model of a vehicle for which an opinion is expressed (e.g., "volvo xc90", "toyota camry", "nissan sentra"). •Feature : Feature: a specific characteristic, attribute, or component of the entity that users mention or evaluate. Includes aspects of performance, design, driving experience, features, systems, or parts (e.g., "ride comfort", "handling", "infotainment system", "interior", "brakes"). •Opinion : Opinion: a subjective or objective evaluation, reaction, or judgment about a feature. The opinion span must be taken exactly from the text and contain no more than 5 words. If the feature is implicit (not explicitly stated in the text), label it as "NULL". •Evidence Sentence : the exact sentence from the input text that contains both the feature and the opinion (or where the opinion is clearly expressed about the implicit feature). [Example] •Text: The EyeSight system of Toyota Camry SE is amazing compared to Toyotas whatever it is. •Entity : Toyota Camry SE •Feature : EyeSight system •Opinion : amazing compared to Toyotas whatever it is [Annotation Requirements] • All tuples must come only from the input text. Never invent or infer content that does not exist in the text. • Do not hallucinate any information or generate tuples not explicitly supported by the input. • Do not include any explanations, reasoning, or formatting outside the JSON object. •The output must be in valid JSON format, and contain only the JSON object as follows: {’opinion_tuple’: [{’entity’: <str>, ’feature’: <str>, ’opinion’: <str>, ’evidence’: <str>}, {’entity’: <str>, ’feature’: <str>, ’opinion’: <str>, ’evidence’: <str>}, ..., {’entity’: <str>, ’feature’: <str>, ’opinion’: <str>, ’evidence’: <str>}]} [Content Text] ... Table 24: OOMB Entity-feature-opinion tuple annotation prompt. 29 Entity-feature-opinion tuple Annotation & Verification Guideline [Task Definition] Your task is to analyze what kind of opinions users express about cars in online user-generated content. •Entity : Car brand & model (e.g., volvo xc90, toyota camry, Nissan Sentra) •Feature : A characteristic, attribute, or component of the mentioned entity (e.g., handling, ride comfort, infotainment system, ...) •Opinion : The user’s subjective/objective judgment, reaction, experience, evaluation, or feedback about the feature (e.g., terrible, incredibly stable) [Example] •Text: The EyeSight system of Toyota Camry SE is amazing compared to Toyotas whatever it is. •Entity : Toyota Camry SE •Feature : EyeSight system •Opinion : amazing compared to Toyotas whatever it is [Description] • Each web page displays a online content along with a single (entity, feature, opinion) tuple at a time. • A content can contain multiple (entity, feature, opinion) tuples. • Sentence refers to the evidence sentence for the feature and opinion. •feature_exist, opinion_exist : Whether the feature or opinion exists
Entity-feature-opinion tuple Annotation & Verification Guideline
[Task Definition] Your task is to analyze what kind of opinions users express about cars in online user-generated content.
•Entity : Car brand & model (e.g., volvo xc90, toyota camry, Nissan Sentra)
•Feature : A characteristic, attribute, or component of the mentioned entity (e.g., handling, ride comfort, infotainment system, ...)
•Opinion : The user's subjective/objective judgment, reaction, experience, evaluation, or feedback about the feature (e.g., terrible, incredibly stable)
[Example]
•Text: The EyeSight system of Toyota Camry SE is amazing compared to Toyotas whatever it is.
•Entity : Toyota Camry SE
•Feature : EyeSight system
•Opinion : amazing compared to Toyotas whatever it is
[Description]
• Each web page displays a piece of online content along with a single (entity, feature, opinion) tuple at a time.
• A single piece of content can contain multiple (entity, feature, opinion) tuples.
• Sentence refers to the evidence sentence for the feature and opinion.
•feature_exist, opinion_exist : Whether the feature or opinion exists in the sentence.
•sentence_exist : Whether the sentence exists in the document on the right.
•If a feature appears in the content, it is highlighted in bold red in both the Document Info and Document Text.
• If an opinion appears in the document, it is highlighted in bold blue.
• The evidence sentence is highlighted in bold black.
•Each tuple is shown in the Data Information section, while the content text appears in the Document Text section.
Annotators follow the process described below to perform accurate verification and refinement.
[Annotation & Verification Process]
(1) Entity Verification Check if the car brand and name (displayed in the entity field) is correctly identified from the document. The entity typically appears in titles, subtitles, or once in the main text. Confirm the context is indeed about this vehicle and the name is recorded exactly as it appears in the document.
(2) Opinion Existence Check Verify that opinions exist explicitly in the document as words or phrases. When an opinion is found in the text (highlighted in blue), confirm opinion_exist is marked 'O'. If no explicit opinion is found in any sentence, verify opinion_exist is marked 'X'.
(3) Feature-Opinion Relationship Verification Check if the feature (highlighted in red) being discussed in relation to the opinion exists explicitly in the document. For explicit features, verify feature_exist is marked 'O' and the exact text from the document is used. For implicit features (not explicitly mentioned), verify they are marked as 'NULL' but feature_exist is still 'O'.
(4) Sentence Documentation Check Verify that the evidence sentence (in bold black) properly contains either both feature and opinion (when they appear in the same sentence) or spans from feature to opinion (when they appear in different sentences). Confirm sentence_exist is marked 'O' when this evidence appears in the document.
(5) Duplicate Tuple Check When multiple identical or similar feature-opinion pairs exist in the same document, verify that only one correct sample is kept (with proper feature_exist and opinion_exist marks) while others are marked 'X' to avoid duplication.
Table 25: OOMB Entity-feature-opinion tuple annotation and verification guideline.
Opinion-centric-summary Annotation Prompt
[Task Definition] You are a product and marketing manager at a global automotive company. You will be given a piece of user-generated automotive content, along with its final gold-standard set of (entity, feature, opinion) tuples. Your goal is to write a free-form opinion-centric summary that organizes and synthesizes the content into high-level, insightful categories. Here is the definition of an (entity, feature, opinion) tuple and an example:
•Entity : the brand and model of a vehicle for which an opinion is expressed (e.g., "volvo xc90", "toyota camry", "nissan sentra").
•Feature : a specific characteristic, attribute, or component of the entity that users mention or evaluate. Includes aspects of performance, design, driving experience, features, systems, or parts (e.g., "ride comfort", "handling", "infotainment system", "interior", "brakes").
•Opinion : a subjective or objective evaluation, reaction, or judgment about a feature. The opinion span must be taken exactly from the text and contain no more than 5 words. If the feature is implicit
(not explicitly stated in the text), label it as "NULL".
This summary should be written in natural language, structured around the core features discussed, and clearly convey the most salient and recurring opinions.
Step 1. Read and Understand
• Thoroughly examine both the input content and the associated tuples.
• Identify the main opinion clusters and the intensity or frequency of key topics.
Step 2. Organize into High-Level Topics
• Group similar tuples into broad categories (e.g., "Interior Design", "Performance & Handling").
• Reflect on user sentiment and frequency to prioritize key points.
Step 3. Generate a Three-to-Five-Line Summary
• Line 1: Highlight the most frequently mentioned or emphasized features.
• Line 2: Emphasize features that were praised or criticized with notable intensity or consensus.
• Line 3+: Provide a concluding sentence summarizing the overall sentiment or takeaway.
Important Instructions
• The summary should be concise, natural, and informative, suitable for a marketing manager's report.
• Ensure it covers the major opinions expressed in the tuple set.
• The tone should be neutral and professional, without exaggeration.
• Do not directly copy sentences from the content.
• Do not mention the annotation process or describe how the summary was generated.
• Do not include phrases like "Based on the review..." or "Here is a summary...".
•Carefully analyze whether the input reflects a single user or multiple users, and adjust nouns/pronouns accordingly ("user" vs. "users").
• Note: Return your results in JSON format only, with the following structure: {'summary': <str>}
[Content Text] ...
Table 26: OOMB: Opinion-centric summary annotation prompt.
Opinion-centric-summary Annotation & Verification Guideline
[Task Definition] Your task is to analyze a text that summarizes the main topics and insights derived from online user-generated content. The provided summary is not a simple condensation, but an opinion-centric summary that synthesizes multiple users' perspectives to deliver high-level insights useful for marketing and strategic decision-making.
[Description]
• Each page displays a piece of online content, a single gold tuple from the content, and its corresponding summary.
•Each summary is paired with a single gold tuple from the content and displayed on the page. When the page is turned, the next gold tuple appears.
•Each summary and its single gold tuple are shown in the Data Information section, while the content text appears in the Document Text section.
Annotators follow the process described below to perform accurate verification and refinement.
[Annotation & Verification Process]
(1) Read & Understand Original Text and Summary Carefully read the original user-generated content and summary.
(2) Check Factual Accuracy Thoroughly examine the summary for any hallucinations or factual inaccuracies that contradict information in the original content. Check whether all claims in the summary are directly supported by explicit statements in the source texts. Verify that no fabricated or assumed information is included, particularly for technical specifications, features, or entity attributes mentioned in the summary.
(3) Check Subject Coherence Verify the correct attribution of opinions to appropriate subjects, considering the multi-user discussion context. Ensure opinions are not misattributed across different entities or users, especially in complex, multi-threaded discussions.
Check that the summary properly distinguishes between individual opinions and collective sentiments when aggregating views from multiple users. Confirm that referenced features are associated with their correct corresponding entities. (4) Check Sentiment Consistency Check for sentiment polarity errors, particularly in cases involving irony, sarcasm, or nuanced expressions. Ensure that positive opinions are not mistakenly presented as negative and vice versa. Verify that the summary accurately captures the tone and emotional valence of the original opinions, including subtle sentiment expressions that may be context-dependent. Confirm that intensity modifiers (e.g., "very," "somewhat," "extremely") are appropriately preserved when they significantly impact the expressed opinion. (5) Verify Three-Line Structure Line 1 (Frequent or Emphasized Features) Verification • Confirm that the summary accurately identifies and highlights features most frequently mentioned or empha- sized in the original text • Check frequency counts to verify that truly common themes are included in the summary • Check if minority opinions aren’t overrepresented or majority opinions underrepresented Line 2 (Strongly Praised or Criticized Features) Verification • Confirm the summary clearly identifies features that received particularly strong praise or criticism • Verify that the intensity of opinions is accurately conveyed (using appropriate intensity indicators) • Check that the distinction between mild opinions and strong sentiments is preserved Line 3 (Conclusion/Key Takeaway) Verification • Confirm the presence of a concise conclusion that synthesizes the overall sentiment • Verify this conclusion accurately reflects the predominant message across all original comments • Check that the conclusion doesn’t introduce new information not supported by the original text Table 27: OOMB: Opinion-centric summary annotation and verification process. 32 Input Content Opinion-centric Summary [YouTube] Title: Here’s Why Everyone Hates the Mercedes-AMG GLE63 Coupe Post #1 (Person 1): I actually like the ways these look. They look big and aggressive Comment #1 (Person 2): Said no one ever Comment #2 (Person 3): not everyone has good taste. I dont like it because it looks good but the coupe roof line, and that’s the point of the car. So it’s like it could be perfect but it’s not, to me at least. Looks wise. They should’ve changed the front too. Comment #3 (Person 4): no they look obnoxious and awkwardly proportioned. you’d have to be insane to spend over $10K more for a less efficient, less practical version of a much better car in every aspect. Comment #4 (Person 5): If u like dome shaped overpriced. . . Comment #5 (Person 6): It’s a fat sedanThe Mercedes-AMG GLE63 Coupe re- ceives mixed reviews, with users criti- cizing its design as awkwardly propor- tioned, less efficient, and less practi- cal compared to other models. Some find the coupe’s roofline and overall look obnoxious and overpriced, while a minority appreciate its big and ag- gressive appearance. Overall, the senti- ment leans negative, with the car’s aes- thetics and practicality being the main points of contention. [Reddit] Title : What are some versions/generations of cars that have been mostly under the radar? Post #11518 (Author: Person 1): Y34 Infiniti M45.340hp V8 luxury sedan with strong, square styling. Roughly 9,000 imported over the two
years on sale.Parts availability is a bitch. Comment #11519 (Author: Person 2, Reply to reply comment #11518): Good choice. An actual car that people dont talk about. There was also the M56 with a VK56 V8 that made 420 horsepower. Also rare. Heard one with an exhaust a while ago and it sounded pretty good. Comment #11520 (Author: Person 3, Reply to reply comment #11519): Yeah, the M56 is really cool and almost never gets talked about. Infiniti also continued the V8 when they renamed the M37M56 to the Q70, but a V8 Q70 is even rarer than an M56. Comment #11521 (Author: Person 4, Reply to reply comment #11518): And they look so rad. Quirky Japanese styling that wasnt super well received by the western market at the time, but few things look as good as a murdered out M45 Comment #11522 (Author: Person 5, Reply to reply comment #11518): Always loved these and thought of them as cars that should never have made it here stateside because of how JDM the styling is. Could be such a timeless car with some right modifications Comment #11523 (Author: Person 6, Reply to reply comment #11518): Most sinister looking car since the w109 MercedesThe Y34 Infiniti M45 is celebrated for its strong, square styling and poten- tial for timeless appeal with the right modifications, though parts availabil- ity is a challenge. Its quirky Japanese design was not initially well-received in the Western market, yet it is now appreciated for its unique and sinis- ter look, comparable to the w109 Mer- cedes. The M56 variant, with its 420 horsepower V8, is noted for its rarity and impressive sound, while the V8 Q70 is even rarer, adding to the allure of these distinctive models. [Reveiw Site] Title : Turn OFF Automatic Software Updates Took awhile but have grown from tolerating to appreciating this SUV . Replacement of a defective 12v battery and the dealer updating my car from the 2.0 to the 2.1 software resolved the initial issues that initially were bricking my XC60 B5. Second, I turned OFF the automatic software updates which you can do in check for software update submenu. Sticking with version 2.1 for now since it has made my car stable (at least on my B5 trim). 2.2 is available which includes Apple CarPlay but that version was JUST released and the initial bugs have not been worked out. So buggy that 2.2 is not available for an OTA update specifically for B5 trims since it causes known problems. While CarPlay is in high demand the native Google OS has been great and the Google map appears on the driver display which CarPlay currently wont. That said, recommend finding the software version that enables stability for your car/trim and then turn off automatic updates in the settings. For my XC60 B5 I am going to stick with 2.1 until forced to update or until the blogs indicate V olvo and Google are aligning better. Aside from the software gremlins this SUV is comfortable, super handsome in my r-design trim, and
feels a touch unique.The V olvo XC60 B5 receives mixed feedback, with users appreciating its comfort and unique design, particu- larly in the R-design trim. However, software issues are a significant con- cern, as the latest 2.2 version, which includes Apple CarPlay, is buggy and unavailable for OTA updates on B5 trims. Users recommend sticking with the stable 2.1 version and turning off automatic updates to maintain vehicle stability, while the native Google OS is praised for its functionality. [Blog] Title : The Little Things: 2024 Hyundai Kona’s Heat Buttons Are Oddly Satisfying The Hyundai Kona has a very open and room-like interior, with some well-placed buttons. I’ve been on an ergonomics kick lately, closely scrutinizing the button placement and cockpit layouts of every car I get into. The 2024 Hyundai Kona has a uniquely open and homey-feeling interior, but one aspect of it that struck me was how perfect the heated seat and steering wheel buttons are. Before your eyes roll out of your head at the thought of praising something as mundane as a button, consider this: Heated seat and steering wheel controls are features you might use every day, multiple times a day, for months and months while you own a car. It’s fun to talk about max skidpad g’s and 0-60 times, but realistically, how often do you take your vehicle to its extremes? When we’re talking about what it’s like to live with a car day in and day out, ergonomic touches can make or break the enjoyability of ownership. ...The 2024 Hyundai Kona is praised for its interior design, featuring a uniquely open and homey feel with well-placed, physical buttons that enhance user engagement and ease of use. The heated seat and steering wheel controls receive high marks for their perfect placement and functionality, including a rare two-level heat setting for the steering wheel. Despite some criticism of its front-end design and limited ac- celeration, the overall sentiment is pos- itive, highlighting the Kona’s thought- ful ergonomic touches and practical features. Table 28: Examples of OOMB opinion-centric-summaries by content type 33 Case Study: Review Site (1/2) Input Content Title : Turn OFF Automatic Software Updates Took awhile but have grown from tolerating to appreciating this SUV . Replacement of a defective 12v battery and the dealer updating my car from the 2.0 to the 2.1 software resolved the initial issues that initially were bricking my XC60 B5. Second, I turned OFF the automatic software updates which you can do in check for software update submenu. Sticking with version 2.1 for now since it has made my car stable (at least on my B5 trim). 2.2 is available which includes Apple CarPlay but that version was JUST released and the initial bugs have not been worked out. So buggy that 2.2 is not available for an OTA update specifically for B5 trims since it causes known problems. While CarPlay is in high demand the native Google OS has been great and the Google map appears on the driver display which CarPlay currently wont. That said, recommend finding
the software version that enables stability for your car/trim and then turn off automatic updates in the settings. For my XC60 B5 I am going to stick with 2.1 until forced to update or until the blogs indicate V olvo and Google are aligning better. Aside from the software gremlins this SUV is comfortable, super handsome in my r-design trim, and feels a touch unique. Model Generated SummaryThe reviewer initially faced issues with their V olvo XC60 B5 due to a defective 12v battery and problematic software updates, but these were resolved by upgrading to software version 2.1. They advise against automatic software updates, as version 2.2, though offering Apple CarPlay, is buggy and not recommended for B5 trims. The reviewer appreciates the native Google OS, especially its integration with the driver display, and plans to stick with version 2.1 for stability. Despite software challenges, they find the SUV comfortable, stylish in the R-design trim, and unique. Gold Summary The V olvo XC60 B5 receives mixed feedback, with users appreciating its comfort and unique design, particularly in the R-design trim. However, software issues are a significant concern, as the latest 2.2 version, which includes Apple CarPlay, is buggy and unavailable for OTA updates on B5 trims. Users recommend sticking with the stable 2.1 version and turning off automatic updates to maintain vehicle stability, while the native Google OS is praised for its functionality. A3CU 60.91 Table 29: Case Study (1) of LLM-Judge Evaluation for the OIG Task. Summary generated by GPT-4o. Case Study: Reddit (1/2) Input Content Title : Ford starts deliveries of enhancedrange etransit Comment 35276 (Author: Person 1) : hate the trend of just using an ICE hull for an EV , so uninterest- ing and ugly. EV allows you to change the form factor of cars. I like new startups that make interesting and practical looking cars. Comment 35277 (Author: Person 2, Reply to reply comment #35276) : Its a work van. Fleets like having an already existing supply of commonly used and broken parts much more than they like having pretty looking vans. Comment 35278 (Author: Person 3, Reply to reply comment #35276) : Yeah but cars are the way they are for a lot of other reasons. And it actually makes a ton of financial sense. Some of the hypothetical benefits are not really as beneficial when insurance, repair, maintenance, learning curve for mechanics are taken into consideration. Comment 35279 (Author: Person 4, Reply to reply comment #35276) : Vans are gonna be fuckin boxes no matter what, its the point Model Generated SummaryThe review discussion centers around the design and practicality of electric vehicles (EVs) using traditional internal combustion engine (ICE) designs. Person 1 expresses dissatisfaction with the trend of using ICE hulls for EVs, finding them uninteresting and aesthetically displeasing, and prefers innovative designs from new startups. In contrast, Person 2 argues that for work vans, practicality and the availability of parts are more important than aesthetics. Person 3 supports this view, highlighting the financial and practical benefits of using existing designs, considering factors like
insurance, repair, and maintenance. Person 4 bluntly states that vans are inherently boxy due to their functional purpose, implying that design changes are limited by their utility. Gold Summary The Ford e-Transit receives mixed feedback, with some users criticizing its design as uninteresting and ugly due to its use of an ICE hull, while others appreciate the practicality and financial sense of maintaining a familiar form. The vehicle’s reliance on commonly used parts is seen as beneficial for fleet maintenance, despite the lack of aesthetic appeal. Overall, the sentiment reflects a divide between those valuing innovative design and those prioritizing practicality and cost-effectiveness. A3CU 10.12 Table 30: Case Study (2) of LLM-Judge Evaluation for the OIG Task. Summary generated by GPT-4o. 34 Case Study: Review Site (2/2) Faithfulness : 5 Accuracy of Software and Battery Issues : The summary correctly notes the initial issues the user faced due to a defective 12v battery and the problematic software updates, which closely aligns with what the review text describes. It also correctly mentions how these issues were largely resolved by upgrading to version 2.1. Software Updates and Recommendations : The summary accurately captures the user’s advice against automatic software updates, specifically highlighting that version 2.2 is buggy, particularly for the B5 trim, which is consistent with the original review’s observations. It also correctly represents the reviewer’s intent to stick with version 2.1 for stability. Coverage : 4 Comprehensive Coverage of Main Issues : The summary effectively highlights the initial issues faced by the reviewer, namely the defective 12v battery and problematic software updates, and how these were resolved by upgrading to software version 2.1. These are the critical components of the review text. Inclusion of Recommendation and Software Concerns : The insight summary correctly encapsulates the reviewer’s advice against automatic updates due to the issues with version 2.2, emphasizing its buggy nature and incompatibility with B5 trims. It also mentions the preference to stay with version 2.1, another key opinion in the review. Minor Missing Details : Although the summary is generally accurate, it doesn’t mention the specific aspect of finding a stable software version and then turning off automatic updates, which adds a nuanced detail to the reviewer’s overall advice. This omission prevents the summary from achieving a perfect score. Specificity : 4 Resolution Details : The summary covers the resolution process involving the upgrade to software version 2.1 to address initial vehicle issues. It explicitly mentions the defects in the 12v battery and problematic updates, demonstrating concrete detail awareness about the user’s experience. Specific Features and Versions : The summary specifies the difference between software versions 2.1 and 2.2, noting the inclusion of Apple CarPlay in version 2.2. It captures the user’s preference for version 2.1 for stability reasons and references the native Google OS’s features, offering precise insights. General Impression and Recommendations : While largely specific in detail, certain generalizations are made, such as mentioning the comfort and uniqueness of the SUV without referencing specific features that contribute to these impressions. This is where the summary could benefit from slightly more
precise detail, warranting a score of 4 rather than a 5. Insightfulness : 4 Comprehensive Coverage : The summary effectively covers the key points of the review, including the issue with the battery, the software problems, and the resolution through the 2.1 software update. It emphasizes the importance of stability and cautions against automatic updates. This communicates the core guidance intended by the reviewer. Value-Added Advice : By recommending a strategic approach to software updates and emphasizing the stability offered by version 2.1, the summary provides actionable insights that can directly inform decision-making for potential users of the XC60 B5, particularly those concerned about software reliability. Balanced Perspective : The summary maintains a balance by acknowledging the non-software-related positives of the vehicle, such as comfort and aesthetic appeal, which enriches understanding for readers interested in a broader evaluation of the SUV’s attributes. However, it could delve deeper into why the Google OS is particularly advantageous or why the SUV’s design stands out, which slightly limits its insightfulness further. Intent : 5 Tone and Intent Preservation : The summary effectively captures the reviewer’s tone of cautious ap- preciation towards the vehicle, highlighting the resolve to prioritize stability over newer, but potentially problematic, software updates. The advice against automatic updates and emphasis on waiting for software stability clearly mirrors the reviewer’s thoughtful, pragmatic approach. Emotional Nuances : The emotional essence of the original review, which oscillates between initial frustration and eventual appreciation, is well-preserved. The summary accurately reflects the transition from dissatisfaction due to battery and software issues to satisfaction with the SUV’s comfort and aesthetic appeal. Detail and Specificity : Key details concerning the software update issues, particularly the problems associated with version 2.2 and the decision to stick with version 2.1, are thoroughly covered in the summary. The review’s specific mentions of the native Google OS and Apple’s CarPlay availability are adequately noted, maintaining the original’s nuanced reflections. Fluency : 5 Natural Flow and Language Usage : The summary is written in a manner that mimics a natural and conversational tone. It concisely recounts the original review’s main points without introducing excessive technical jargon or unnecessary detail. Grammar and Sentence Structure : The summary is void of grammatical errors or awkward phrasing, making it easy for the reader to follow. Sentences are constructed well, contributing to a smooth reading experience. Clarity and Cohesion : Each point in the summary builds logically upon the previous, maintaining clear cohesion throughout. This makes it accessible even to readers unfamiliar with the context of the review text. Table 31: Case Study of LLM-Judge Evaluation for the OIG Task. Summary generated by GPT-4o. 35 Case Study: Reddit (2/2) Faithfulness : 5 Accurate Representation : The summary accurately captures the main points and perspectives shared by the commenters without distorting their meaning. Each viewpoint, from aesthetics to practicality, is clearly represented as expressed in the original comments. No Omitted Key Details : The core opinions about the use of ICE hulls for EVs, the practical considerations for work vans, and the limited design flexibility due to utility needs
are all present in the summary. This shows no significant omissions. Absence of Hallucinated Information : The summary does not introduce any information that was not present in the original text, ensuring that it stays true to the opinions and insights provided by the commenters. This maintains the integrity of the review discussion. Coverage : 5 Comprehensive Representation : The summary thoroughly captures the primary opinions expressed by each participant in the review text. It details Person 1’s dislike for using ICE hulls in EVs, Person 2’s focus on practicality and cost-effectiveness discussed by Person 2, Person 3’s considerations of financial benefits, and Person 4’s functional view of van design. Inclusion of Key Points : No significant opinions are omitted. The contrasting views regarding aesthetic values versus practical functionality and financial considerations are clearly highlighted, showcasing the diverse perspectives discussed in the review. Conciseness and Clarity : The summary provides a concise yet clear representation of the review’s key discussions, effectively relaying the main points without unnecessary details, ensuring all significant opinions are well-covered. Specificity : 5 Concrete Alignment with Comments : The summary effectively captures the conversation between different users by directly referencing the points they make. It notes Person 1’s dissatisfaction with ICE-based EV designs and preference for innovative startups, Person 2’s emphasis on practicality and part availability for work vans, Person 3’s mention of financial and practical factors, and Person 4’s blunt assertion about vans’ inherent boxiness. These details perfectly align with the provided comments, making the summary highly specific. No Overgeneralization : At no point does the summary veer into vague or generic statements. Each opinion is not only linked to the respective commenter but also articulated with clear explanation of their rationale, demonstrating a detailed and precise understanding of the user opinions. Point-wise Clarity : The summary maintains a structurally coherent format by presenting each perspective distinctly. The inclusion of each contributor’s viewpoint provides a comprehensive view of the entire discussion, acknowledging the nuances in opinions rather than making an overarching generalization. This results in a precise and informative overview. Insightfulness : 4 Comprehensive Overview : The summary effectively synthesizes various perspectives from the commentary, covering not only the dissatisfaction with traditional designs from Person 1 but also contrasting practical views from others who value utility over aesthetics. This balanced presentation allows readers to see different angles of the ongoing design debate, providing a fuller context. Emphasis on Practical Factors : By highlighting the points about practicality—such as access to parts, financial logic, and the vans’ purpose—the summary goes beyond mere restatement of comments and offers a deeper understanding of the practical considerations influencing design decisions in the EV market. Concludes with Functional Insight : The summary concludes with Person 4’s point about the inherent boxy design of vans due to their function, driving home the idea that practicality often trumps aesthetic innovation in vehicle design. This insight allows readers to grasp why some design choices remain seemingly unchanged. Intent : 5 Tone and Intent Preservation : The summary accurately captures the tone and intent of each
comment within the discussion. Person 1’s dissatisfaction with the trend is clearly articulated, as is the practical- focused tone of Persons 2, 3, and 4. This indicates that the emotional elements of approval or disapproval were preserved effectively. Nuance and Specificity : The summary encapsulates the nuances in each participant’s perspective. It highlights Person 1’s preference for innovative design, contrasts this with the practical considerations highlighted by Person 2 and Person 3, and points out Person 4’s straightforward viewpoint regarding the inherent functionality of vans. Consistent Style and Balance : The summary maintains a balanced view and communicates both sides of the argument without straying from the original intent of the discourse. This illustrates a good level of detail while maintaining the integrity and style of the original review, indicating an effective preservation of stylistic and emotional essence. Fluency : 5 Grammar and Syntax : The summary is grammatically sound, with proper sentence structures, clear subject-verb agreement, and correct use of punctuation, enhancing readability. Flow and Coherence : The transition between points expressed by different persons is seamless, logically structured, allowing for a coherent understanding of conflicting viewpoints on EV and ICE designs. Clarity and Readability : The language used is straightforward and easy to follow, with precise vocabulary choices that appropriately convey complex opinions in an accessible manner. Table 32: Case Study of LLM-Judge Evaluation for the OIG Task. Summary generated by GPT-4o. 36 Figure 8: OOMB – Annotation UI used for Entity-feature-opinion tuple set annotation. This example shows a case where both the feature and opinion are present in the content. Figure 9: OOMB – Annotation UI used for Entity-feature-opinion tuple set annotation. This example shows a case where the feature is present in the content, but the opinion is not. Figure 10: OOMB – Annotation UI used for opinion-centric summary annotation. 37 Figure 11: The interface for human evaluation (Instruction part). Figure 12: The interface for human evaluation (Faithfulness). Figure 13: The interface for human evaluation (Coverage). 38 Figure 14: The interface for human evaluation (Specificity). Figure 15: The interface for human evaluation (Insightfulness). Figure 16: The interface for human evaluation (Intent). Figure 17: The interface for human evaluation (Fluency). 39
arXiv:2505.15696v1 [cs.CL] 21 May 2025
MaxPoolBERT: Enhancing BERT Classification via Layer- and Token-Wise Aggregation
Maike Behrendt, Heinrich Heine University Düsseldorf, maike.behrendt@hhu.de
Stefan Sylvius Wagner, Heinrich Heine University Düsseldorf, stefan.wagner@hhu.de
Stefan Harmeling, Technical University Dortmund, stefan.harmeling@tu-dortmund.de
Abstract
The [CLS] token in BERT is commonly used as a fixed-length representation for classification tasks, yet prior work has shown that both other tokens and intermediate layers encode valuable contextual information. In this work, we propose MaxPoolBERT, a lightweight extension to BERT that refines the [CLS] representation by aggregating information across layers and tokens. Specifically, we explore three modifications: (i) max-pooling the [CLS] token across multiple layers, (ii) enabling the [CLS] token to attend over the entire final layer using an additional multi-head attention (MHA) layer, and (iii) combining max-pooling across the full sequence with MHA. Our approach enhances BERT's classification accuracy (especially on low-resource tasks) without requiring pre-training or significantly increasing model size. Experiments on the GLUE benchmark show that MaxPoolBERT consistently achieves better performance than the standard BERT-base model.
1 Introduction
BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) is one of the best-known Transformer-based (Vaswani et al., 2017) large language models. The core principle of BERT is the unsupervised pre-training approach on large corpora, enabling it to learn contextual word representations, which can then be used to solve various downstream tasks. Through fine-tuning, BERT adapts its representations to aggregate the most relevant information required for a given task. A key component of BERT's architecture is the classification token (abbreviated [CLS]), a special token that is prepended to every input sequence. During fine-tuning for downstream classification tasks, the [CLS] token serves as the only input to the classification head, which generates predictions for the task at hand. Through self-attention, the [CLS] token is expected to capture the sentence-level information necessary for downstream tasks.
Figure 1: MaxPoolBERT performs best on low-resource datasets. We show that MaxPoolBERT provides significant improvements for smaller datasets, indicating that the model learns a better representation during fine-tuning.
In this paper, we ask whether we can enrich the [CLS] token with information from the layers below the top level. Toshniwal et al. (2020) systematically compared different text span representations using BERT, and found that max-pooling performs well overall, though its effectiveness varies across tasks. Fine-tuning reduces the effect that different span representations had in the comparison (Stankevičius and Lukoševičius, 2024). We know that the last layers of BERT change the most during fine-tuning and encode the most task-specific information (Rogers et al., 2020). This is why the [CLS] token embedding from the final layer is conventionally used for classification. However, assuming that only the [CLS] token retains meaningful sentence-level information is misleading.
Prior studies have shown that all token embeddings in the final layer contain sentence-level information (Rogers et al., 2020), and that using different token positions for classification results in only minor differences in accuracy (Goyal et al., 2020). Goyal et al. (2020) also found that embedding vectors in
the final layer exhibit high cosine similarity due to information mixing through self-attention.
Motivated by these findings, we explore incremental modifications to the BERT-base architecture for sequence classification, aiming to enhance its performance on downstream tasks. We specifically focus on improving the informativeness of the [CLS] token by (i) incorporating more width information from the whole sequence, and (ii) incorporating more depth information from additional layers. In the end we find that a mixture of these approaches leads to the best results.
Contributions.
1. We introduce MaxPoolBERT, a simple yet effective extension to BERT that enriches the [CLS] token representation using max-pooling and attention mechanisms across layers and tokens.
2. We systematically evaluate three architectural variants that incorporate width (token-level) and depth (layer-level) information into the [CLS] embedding.
3. We show that our proposed approach improves fine-tuning performance on 7 out of 9 GLUE tasks and achieves an average gain of 1.25 points over the BERT-base baseline.
4. We demonstrate that MaxPoolBERT is particularly effective in low-resource scenarios, providing improved stability and accuracy where training data is limited.
Our best performing model will be made publicly available after the review process.
2 Related Work
Much research has been dedicated to improving and optimizing BERT's training process through architectural modifications and fine-tuning strategies. Below, we discuss advancements in fine-tuning stability, text representations, model enhancements, and training efficiency.
Stabilized BERT Fine-Tuning. The pre-training and fine-tuning paradigm for large language models such as BERT (Devlin et al., 2019) has led to significant improvements across a wide range of NLP tasks while keeping computational costs manageable. However, fine-tuning remains unstable due to challenges like vanishing gradients (Mosbach et al., 2021) and limited dataset sizes (Zhang et al., 2021). Several studies have proposed techniques to address this instability.
Zhang et al. (2021) explore re-initializing BERT layers before fine-tuning, demonstrating that retaining all pre-trained weights is not always beneficial for fine-tuning. They also show that extending fine-tuning beyond three epochs improves performance.
Hao et al. (2020) examine how fine-tuning affects BERT's attention, finding that higher layers change significantly while lower layers remain stable. They propose a noise regularization method to enhance stability.
Mosbach et al. (2021) identify high learning rates as a key cause of fine-tuning instability. They propose using small learning rates with bias correction and increasing training iterations until nearly zero training loss is achieved.
Hua et al. (2021) introduce Layer-wise Noise Stability Regularization, which further stabilizes fine-tuning through regularization.
Xu et al. (2023) propose self-ensemble and self-distillation mechanisms that enhance fine-tuning stability without requiring architectural changes or external data.
For an extensive, systematic review of methods for sentence-level embeddings from transformer models, we refer to Stankevičius and Lukoševičius (2024).
Faster Training. In addition to stabilization, architectural enhancements have been introduced to boost BERT's efficiency and effectiveness. Goyal et al. (2020) propose eliminating tokens after fine-tuning to reduce inference time.
They found that, in the highest layer of BERT-base, the token representations are
very similar and carry similar information.
Recently, Warner et al. (2024) introduce ModernBERT, an updated version of BERT with an increased sequence length of 8192. ModernBERT incorporates architectural improvements such as GeGLU activations (Shazeer, 2020), Flash Attention (Dao et al., 2022), and RoPE embeddings (Su et al., 2024).
While other approaches improve the input embedding size of BERT (Nussbaum et al., 2024) or refine the pre-training process for GPUs (Geiping and Goldstein, 2023; Portes et al., 2023; Izsak et al., 2021), our work specifically concentrates on optimizing the [CLS] token during fine-tuning, leveraging the information captured in BERT's layers after pre-training.
Improved BERT Fine-Tuning. Lastly, there have been approaches to enhance the classification performance of BERT through optimized fine-tuning strategies and enriched sentence representations, areas that align closely with our approach; see also Stankevičius and Lukoševičius (2024), who provide a comprehensive survey of methods for extracting sentence-level embeddings from BERT.
Bao et al. (2021) construct sentence representations for classification by selecting meaningful n-grams and combining sub-tokens of a pre-trained BERT model into span representations using a max-pooling approach. These span representations are max-pooled again into a sentence representation, which is concatenated with the [CLS] token embedding.
Hu et al. (2024) develop a dynamically adjustable BERT architecture that optimizes both width (i.e., number of attention heads, number of neurons in the feed-forward layer, and sequence length) and depth (i.e., number of layers). Their method leverages knowledge distillation to maintain performance while reducing computational overhead.
Chang et al. (2023) introduce Multi-CLS BERT, a framework that fine-tunes BERT multiple times with varying random seeds and a multi-task objective to enhance performance.
Chen et al. (2023) present HybridBERT, which incorporates a hybrid pooling network and drop masking during pre-training to accelerate training and improve downstream accuracy. Their approach also includes local max pooling.
Sentence-BERT (Reimers and Gurevych, 2019) modifies the training objective to better capture semantic similarity at the sentence level, setting a benchmark for embedding-based classification.
Recently, Galal et al. (2024) compared multiple aggregation strategies for text classification, including average pooling and a self-attention layer on top of the output embeddings. They also experiment with taking the information of the last BERT layers into account. Their experiments focus on Arabic classification tasks such as sentiment analysis on tweets. They find that freezing the BERT weights before fine-tuning boosts the classification results.
Similar to our experiments, Lehečka et al. (2020) experiment with a pooling operation on the whole sequence of text to improve BERT for large-scale multi-label text classification. They find that combining max- and mean-pooling information leads to the greatest improvement.
3 Refining the [CLS] Token
BERT's classification ([CLS]) token has a special function and serves as a summarizing token. In text classification tasks, it is the input to the classifier and is required to carry all the necessary information from the entire input sequence. This architecture is particularly efficient for fine-tuning and inference because only
a single token is passed to the classifier.
However, it has been shown that other token representations in the layers of BERT also capture sentence-level representations (Rogers et al., 2020). We investigate whether the informativeness of the [CLS] token embedding can be further enhanced during fine-tuning, to improve downstream classification results. To do this, we include more depth information from other BERT layers and also more width information from other tokens within the sequence. We study different versions of fine-tuning BERT for sequence classification tasks. All variants are described below.
3.1 Preliminaries.
Final-Layer [CLS] Representation. As a baseline we use the [CLS] token of the final encoder layer of a pre-trained vanilla BERT-base model (Devlin et al., 2019) for classification (see Figure 2a). Recall that a single layer of BERT can be written as
$f_i : \mathbb{R}^{T \times d} \to \mathbb{R}^{T \times d},$ (1)
where $i$ indicates the layer number (for BERT-base we have 12 layers), $T$ is the number of tokens, and $d$ is the dimensionality of each token vector. We denote the values of the intermediate layers by $y^{(i)}$:
$y^{(1)} = f_1(x), \qquad y^{(i+1)} = f_{i+1}(y^{(i)}).$ (2)
The classification token of each layer is the first token, i.e., for a sequence of tokens $y^{(i)} = [t_{1i}, \ldots, t_{Ti}]$ in the $i$-th layer,
$[CLS]_i = t_{1i} \in \mathbb{R}^{1 \times d}.$ (3)
The embedding of the [CLS] token serves as the input for the classification head $h$, which we choose to be just a linear layer without an activation function, since we are just interested in the plain expressiveness of the refinement (instead of adding tanh as in the original BERT implementation):
$h : \mathbb{R}^{1 \times d} \to \mathbb{R}.$ (4)
Thus, the baseline model for sequence classification can be written as
$h \circ \mathrm{CLS} \circ f_{12} \circ \cdots \circ f_1 : \mathbb{R}^{T \times d} \to \mathbb{R}$ (5)
for BERT-base with 12 layers.
Max pooling operation. In order to utilize not only the last layer but several layers, we have to define a flexible maximizing operation that can work with several sequences of vectors. For this, we write $\Theta^{(k)}_t \in \mathbb{R}^{k \times t \times d}$ for the tensor that contains the first $t$ token vectors (each $d$-dimensional) of the last $k$ layers. For instance, $\Theta^{(1)}_1 \in \mathbb{R}^{1 \times 1 \times d}$ is the [CLS] token, and $\Theta^{(k)}_1 \in \mathbb{R}^{k \times 1 \times d}$ collects the [CLS] token vectors of the last $k$ layers. Similarly, $\Theta^{(1)}_T \in \mathbb{R}^{1 \times T \times d}$ contains all token vectors of the last layer, and $\Theta^{(k)}_T \in \mathbb{R}^{k \times T \times d}$ all token vectors of the last $k$ layers. Next, we define an element-wise max-pooling operation that maximizes over the first dimension, i.e.,
$\max : \mathbb{R}^{k \times t \times d} \to \mathbb{R}^{t \times d}.$ (6)
As PyTorch code, for $b$-sized minibatches of shape $b \times k \times t \times d$ this is the operation torch.max(Theta, dim=1).
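To make the construction of $\Theta^{(k)}_t$ and Eq. (6) concrete, here is a minimal PyTorch sketch, assuming a Hugging Face-style encoder run with output_hidden_states=True; the helper names (stack_last_k, max_pool_layers) are our own illustration, not code released with the paper.

```python
# A minimal sketch of the pooling utilities described above. The tensor
# names and helper functions are illustrative assumptions.
import torch

def stack_last_k(hidden_states, k, t):
    # hidden_states: tuple of (b, T, d) tensors, one per layer
    # (embeddings plus 12 encoder layers for BERT-base).
    # Returns Theta of shape (b, k, t, d): the first t token vectors
    # of each of the last k layers.
    return torch.stack([h[:, :t, :] for h in hidden_states[-k:]], dim=1)

def max_pool_layers(theta):
    # Element-wise maximum over the layer dimension,
    # i.e. max: (b, k, t, d) -> (b, t, d), as in Eq. (6).
    return theta.max(dim=1).values

# Depth-wise [CLS] pooling (Max CLS): pool the [CLS] token (t = 1)
# over the last k = 3 layers, then classify with a linear head h.
# hidden_states = bert(input_ids, attention_mask=attention_mask,
#                      output_hidden_states=True).hidden_states
# theta = stack_last_k(hidden_states, k=3, t=1)   # (b, 3, 1, d)
# cls = max_pool_layers(theta).squeeze(1)         # (b, d)
# logits = classifier(cls)
```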
3.2 Depth-Wise [CLS] Pooling (Max CLS).
Rogers et al. (2020) propose taking more layers into account for classification as one possible improvement for BERT's fine-tuning. Inspired by this, we use vertical information (i.e., more depth) from the last $k$ layers instead of only from the last layer, by extracting the last $k$ [CLS] embeddings $[[CLS]_{12-k+1}, \ldots, [CLS]_{12}]$, which corresponds to $\Theta^{(k)}_1 \in \mathbb{R}^{k \times 1 \times d}$ (using the notation of the previous paragraph), and applying the element-wise max-pooling operation on the extracted tokens (see Figure 2b). The final layers of a BERT model are known to contain the task-specific information; with a max-pooling operation we add further depth information from these task-specific layers into the [CLS] embedding just before classification. Experimenting with different values for $k$, we found that $k = 3$ works best (see A.1.1 in the Appendix).
3.3 Token-Wise Attention via Additional MHA Layer.
An alternative way to enrich the information in the [CLS] token is to consider horizontal information (i.e., more width, see Figure 2c). For this, we include all tokens of the last layer. To obtain a single vector, we employ an additional multi-head attention (MHA) layer on the encoder output, but compute the attention only for the [CLS] token. Writing the MHA as usual (see Vaswani et al., 2017),
$\mathrm{MHA}(Q, K, V) = [\mathrm{head}_1, \ldots, \mathrm{head}_h] W^O,$ (7)
where the heads are defined as
$\mathrm{head}_s = \mathrm{Attention}(Q W^Q_s, K W^K_s, V W^V_s),$ (8)
we note that in our case $Q = [CLS]_{12}$ and $K = V = y^{(12)}$. Thus the [CLS] token can attend to all other tokens once more before classification. Note that the additional MHA layer is not part of the pre-training process and is only added and initialized before the fine-tuning process. We use the default initialization of the PyTorch¹ multi-head attention implementation, which is a Xavier uniform initialization (Glorot and Bengio, 2010). For the number of heads we choose $h = 4$.
3.4 Sequence-Wide Pooling with MHA (Max Seq + MHA).
Finally, we combine the additional depth and width information of Max CLS and MHA by extending the max-pooling operation to whole sequences of the last $k$ layers, using $\max(\Theta^{(k)}_t) \in \mathbb{R}^{t \times d}$. We call this setup Max Seq + MHA, since the maximum is now taken along the whole sequence and the additional MHA layer aggregates the pooled information. We call this approach MaxPoolBERT in the following; a sketch of the combined head is given below.
We did consider other variations of these ideas as well: for instance, instead of a component-wise maximum operation, we calculated the norms of the token vectors and used those to select among the layers. However, the performance was not significantly different from the reported results (see Appendix).
¹ https://pytorch.org/
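The combined variant can be wired up as follows. This is a sketch under the same assumptions as before (it reuses the stack_last_k and max_pool_layers helpers from the previous sketch); in particular, querying the MHA layer with the pooled [CLS] position is our reading of the description above, not code released by the authors.

```python
# A minimal sketch of the MaxPoolBERT (Max Seq + MHA) head. The module
# name and exact wiring are illustrative assumptions; the paper adds
# the MHA layer only at fine-tuning time, with PyTorch defaults.
import torch
import torch.nn as nn

class MaxPoolBERTHead(nn.Module):
    def __init__(self, d_model=768, num_heads=4, num_labels=2):
        super().__init__()
        # Extra MHA layer through which [CLS] attends to the sequence.
        self.mha = nn.MultiheadAttention(d_model, num_heads,
                                         batch_first=True)
        # Plain linear classifier, no tanh (cf. Eq. (4)).
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, hidden_states, k=3):
        # Max-pool all T token positions over the last k layers.
        T = hidden_states[-1].size(1)
        theta = stack_last_k(hidden_states, k, T)    # (b, k, T, d)
        pooled = max_pool_layers(theta)              # (b, T, d)
        # Query with the pooled [CLS] position; keys and values are
        # the whole pooled sequence.
        q = pooled[:, :1, :]                         # (b, 1, d)
        attn_out, _ = self.mha(q, pooled, pooled)    # (b, 1, d)
        return self.classifier(attn_out.squeeze(1))  # (b, num_labels)
```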
4 Experiments
In order to evaluate each previously presented modification of the BERT architecture for sequence classification, we fine-tune each model on different classification tasks and compare the results.
Figure 2: Comparison of four BERT architectures for sequence classification. (a) Baseline: plain vanilla BERT, where the embedding of the [CLS] token of the final layer is used as input for the classification head. (b) Max CLS: a max-pooling operation is applied on the [CLS] tokens of the last k layers before classification. (c) MHA: an additional multi-head attention layer allows the [CLS] token to attend to all tokens of the last layer. (d) Max Seq + MHA (MaxPoolBERT): the inputs are the same as in the standard BERT model; after the N-th layer (N = 12 for BERT-base), a max-pooling operation is applied over the last k layers (we used k = 3), the [CLS] token can then attend to every token after the max-pooling, and the resulting [CLS] token embedding is used for classification.
Parameter      Value
learning rate  2e-5
epochs         4
batch size     32
warmup ratio   0.1
weight decay   0.01
Table 1: Hyperparameters used for all fine-tuning experiments.
4.1 Datasets
GLUE benchmark. The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a well-known benchmark for natural language understanding (NLU) and natural language inference (NLI) tasks. We evaluate on the following 9 tasks:
•CoLA (Corpus of Linguistic Acceptability (Warstadt et al., 2019)): 10,657 sentences from linguistic publications, annotated for grammatical acceptability (acceptable or unacceptable).
•MRPC (Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005)): 5,800 sentence pairs from news sources, annotated for paraphrase identification (equivalent, not equivalent).
•QNLI (Question NLI): an NLI dataset derived from the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) containing question-paragraph pairs. The task is to predict if the question is answered by the given paragraph (entailment, no entailment).
•MNLI (Multi-Genre NLI (Williams et al., 2018)): includes 433,000 sentence pairs, annotated with three different indicators for entailment: neutral, contradiction and entailment.
•SST-2 (The Stanford Sentiment Treebank (Socher et al., 2013)): 215,154 phrases annotated for sentiment analysis (positive or negative).
•STS-B (Semantic Textual Similarity Benchmark (Cer et al., 2017)): 8,630 sentence pairs annotated with a textual similarity score ([0,5]).
•RTE (Recognizing Textual Entailment (Dagan et al., 2006)): 5,770 sentence pairs annotated for entailment recognition (entailment, no entailment).
•QQP (Quora Question Pairs): 795,000 pairs of questions from Quora, annotated for semantic similarity (duplicate, no duplicate).
•WNLI (Winograd NLI (Levesque et al., 2012)): 852 sentence pairs annotated for textual entailment (entailment, no entailment).
4.2 Experiment Details
All experiments were run on a single NVIDIA A100 GPU. We used the Huggingface transformers and datasets libraries² to implement and train all of our models. Each model was fine-tuned three times for four epochs, with three different random seeds, and we report the mean of all runs. All hyperparameters are listed in Table 1; a minimal configuration sketch is shown below. The reported results were obtained on the validation sets of all GLUE tasks.
² https://huggingface.co/
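Since the paper states that the Hugging Face transformers and datasets libraries were used, Table 1 translates roughly into the following setup; the task choice (MRPC), output path, and seed value here are illustrative assumptions, not details taken from the paper's scripts.

```python
# A minimal fine-tuning sketch matching the Table 1 hyperparameters.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("glue", "mrpc")  # one of the nine GLUE tasks

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,              # Table 1
    num_train_epochs=4,              # Table 1
    per_device_train_batch_size=32,  # Table 1
    warmup_ratio=0.1,                # Table 1
    weight_decay=0.01,               # Table 1
    seed=42,                         # one of three seeds per task
)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
# trainer.train()
```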
5 Results
We report the results for all model variants in each task and analyze fine-tuning stability by measuring the standard deviation between runs with different seeds.
5.1 Performance Across GLUE Tasks
The performance of each of our four variants on the GLUE benchmark tasks is presented in Table 2. For each task, at least one variant achieves higher performance than the BERT baseline, indicating that our proposed methods for enriching the [CLS] token representation are effective. However, the magnitude of improvement varies across tasks.
Model         CoLA   MRPC          QNLI   MNLI          SST-2  STS-B  RTE    QQP           WNLI
              MCC    Acc.   F1     Acc.   m      mm     Acc.   Sp.    Acc.   Acc.   F1     Acc.
Train Size    8.5k   3.7k   3.7k   105k   393k   393k   67k    5.75k  2.5k   364k   364k   634
Baseline      53.59  82.43  87.49  90.96  84.27  84.57  92.55  88.47  63.42  90.65  87.40  49.77
Max CLS       55.32  83.66  88.5   91.15  84.22  84.55  92.62  88.97  63.06  90.59  87.33  50.23
MHA           55.88  83.38  89.51  90.49  84.37  84.67  92.32  88.04  64.98  90.67  87.45  55.4
Max Seq+MHA   55.35  85.95  89.78  90.73  83.82  84.24  92.74  88.22  66.06  90.65  87.59  55.4
∆             2.29   3.52   2.29   0.19   0.24   0.1    0.19   0.5    3.61   0.02   0.19   5.63
Table 2: Performance on GLUE validation tasks (average of 3 seeds). The Train Size row gives the size of each training set. We report Matthews correlation coefficient (MCC) for CoLA, accuracies for matched (m) and mismatched (mm) results for MNLI, and Spearman correlation (Sp.) for STS-B. The ∆ row reports the improvement of the best variant over the baseline.
The Max CLS variant, which applies max-pooling over the [CLS] token representations from the last k layers, results in marginal to no improvement for most tasks. Notably, this variant achieves the best performance among all variants on QNLI and STS-B, suggesting that layer-wise max pooling can be beneficial for certain task types. Both tasks involve semantic matching between two texts and thus require a nuanced understanding of sentence meaning.
The MHA variant introduces an additional MHA layer, allowing the final-layer [CLS] token to attend to the full sequence before classification. This variant consistently improves upon the baseline BERT model, indicating that this extra attention step effectively enhances the model's ability to integrate global context. The biggest improvement is observed on the WNLI dataset, which has the fewest training examples in the GLUE benchmark (634 training examples), suggesting that the added attention is particularly helpful in low-resource settings.
The Max Seq+MHA variant combines token-wise max-pooling over the sequence with the additional MHA layer. This configuration shows the most consistent improvements, achieving higher performance than the baseline in 7 out of 9 tasks. As shown in Figure 1, the largest improvements are again seen on datasets with limited training data, such as CoLA, MRPC, RTE and WNLI. These findings suggest that combining sequence-level pooling with attention further enhances robustness in low-resource settings.
For clarity, Table 3 shows the average performance of each model variant across all GLUE tasks. The Max Seq + MHA variant, which we call MaxPoolBERT, achieves the highest overall average. While the average improvement over the baseline is 1.25 points, individual tasks show greater improvements.
Model         GLUE avg.
Baseline      79.63
Max CLS       80.02
MHA           80.70
Max Seq+MHA   80.88
∆             1.25
Table 3: Average performance across all GLUE tasks. MaxPoolBERT shows a consistent gain over BERT-base.
5.2 Stability on Low-Resource Tasks
To assess fine-tuning stability, which is usually worse for smaller datasets (Devlin et al.,
2019; Lee et al., 2020; Dodge et al., 2020), we run all experiments with three different seeds for each GLUE task. We report the mean accuracy across runs (for CoLA we report Matthews correlation coefficient, for STS-B we report Spearman rank correlation), and include error bars showing the standard deviation of these three runs (see Figure 3 and Table 4).
Figure 3: Accuracies for the GLUE benchmark with error bars. We show the standard deviation between three fine-tuning runs with three random seeds. Note that the y-axis is shifted but scaled equally across tasks.
Model         CoLA ↓    MRPC ↓    QNLI ↓    MNLI ↓    SST-2 ↓   STS-B ↓   RTE ↓     QQP ↓     WNLI ↓
Baseline      6.34e-02  2.42e-02  2.08e-03  1.97e-03  1.99e-03  3.2e-03   1.78e-02  10.8e-04  5.86e-02
Max CLS       4.55e-02  2.02e-02  3.89e-03  2.73e-03  3.81e-03  3.8e-03   1.86e-02  9.26e-04  4.61e-02
MHA           4.3e-02   2.1e-02   5.69e-03  2.43e-03  3.63e-03  4.99e-03  1.86e-02  8.09e-04  4.61e-02
Max Seq+MHA   4.22e-02  2.18e-02  5.11e-03  4.45e-03  3.64e-03  4.63e-03  1.96e-02  7.87e-04  4.31e-02
Table 4: Standard deviations for three fine-tuning runs with different random seeds.
We observe that the stability in fine-tuning remains comparable across model variants for most datasets. However, improvements are observed for datasets with fewer training samples such as CoLA, MRPC, QQP and WNLI, where our variants exhibit reduced variability between runs. These findings suggest that our proposed modifications improve robustness in the low-sample regime.
6 Conclusion
We introduced MaxPoolBERT, a lightweight yet effective refinement of BERT's classification pipeline that improves the representational quality of the [CLS] token. Our method leverages max-pooling across layers and tokens, and introduces a multi-head attention layer that allows the [CLS] token to re-aggregate contextual information before classification. These modifications require no changes to pre-training and add minimal overhead to fine-tuning. Empirical results on the GLUE benchmark demonstrate that MaxPoolBERT outperforms standard BERT-base across most tasks, with especially strong improvements in low-resource settings. This suggests that BERT's native use of the final-layer [CLS] embedding underutilizes available information and that small architectural additions can enhance generalization without sacrificing efficiency.
Limitations
While MaxPoolBERT improves downstream performance, several limitations remain:
•No task-specific tuning. Our experiments use shared hyperparameters across tasks. Further gains could be possible with task-specific settings for pooling depth, attention heads, or training schedules.
•Model size and generalization. Our work focuses on BERT-base. It remains to be tested whether these findings generalize to larger models like BERT-large or to other transformer variants.
•Scope of evaluation. We focus on sentence-level classification tasks in GLUE. The applicability of our approach to other tasks, such as token classification, generation, or cross-lingual transfer, is not yet evaluated.
In the future we aim to further investigate how to optimize the fine-tuning of small BERT models. While larger models often yield better performance, smaller models are crucial in real-time or resource-constrained environments. The pre-training and fine-tuning paradigm for BERT has been predominant for several years and is widely used, so it is important to study whether further improvements can be made through small changes to this learning paradigm.
Ethics Statement
Ethics Statement

This work does not involve the collection or use of sensitive personal data, nor does it generate or process harmful content. We use only publicly available datasets (GLUE benchmark) and follow standard evaluation protocols. Our modifications are intended to improve model efficiency and robustness, particularly in low-resource settings. No human subjects were involved in this research.

References

Rongzhou Bao, Zhuosheng Zhang, and Hai Zhao. 2021. Span fine-tuning for pre-trained language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1970–1979, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.

Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, and Andrew McCallum. 2023. Multi-CLS BERT: An efficient alternative to traditional ensembling. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821–854, Toronto, Canada. Association for Computational Linguistics.

Qian Chen, Wen Wang, Qinglin Zhang, Chong Deng, Ma Yukun, and Siqi Zheng. 2023. Improving BERT with hybrid pooling network and drop mask. arXiv preprint arXiv:2307.07258.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pages 177–190, Berlin, Heidelberg. Springer Berlin Heidelberg.

Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344–16359.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305.

William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).

Omar Galal, Ahmed H Abdel-Gawad, and Mona Farouk. 2024. Rethinking of BERT sentence embedding for text classification. Neural Computing and Applications, 36(32):20245–20258.

Jonas Geiping and Tom Goldstein. 2023. Cramming: Training a language model on a single GPU in one day. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 11117–11143. PMLR.

Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks.
In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy. PMLR.

Saurabh Goyal, Anamitra Roy Choudhury, Saurabh Raje, Venkatesan Chakaravarthy, Yogish
Sabharwal, and Ashish Verma. 2020. PoWER-BERT: Accelerating BERT inference via progressive word-vector elimination. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3690–3699. PMLR.

Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2020. Investigating learning dynamics of BERT fine-tuning. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 87–92, Suzhou, China. Association for Computational Linguistics.

Ting Hu, Christoph Meinel, and Haojin Yang. 2024. A flexible BERT model enabling width- and depth-dynamic inference. Computer Speech & Language, 87:101646.

Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, and Jiebo Luo. 2021. Noise stability regularization for improving BERT fine-tuning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3229–3241, Online. Association for Computational Linguistics.

Peter Izsak, Moshe Berchansky, and Omer Levy. 2021. How to train BERT with an academic budget. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10644–10652, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. 2020. Mixout: Effective regularization to finetune large-scale pretrained language models. In International Conference on Learning Representations.

Jan Lehečka, Jan Švec, Pavel Ircing, and Luboš Šmídl. 2020. Adjusting BERT's pooling layer for large-scale multi-label text classification. In International Conference on Text, Speech, and Dialogue, pages 214–221. Springer.

Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12, page 552–561. AAAI Press.

Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.

Zach Nussbaum, John X Morris, Brandon Duderstadt, and Andriy Mulyar. 2024. Nomic Embed: Training a reproducible long context text embedder. arXiv preprint arXiv:2402.01613.

Jacob Portes, Alexander Trott, Sam Havens, Daniel King, Abhinav Venigalla, Moin Nadeem, Nikhil Sardana, Daya Khudia, and Jonathan Frankle. 2023. MosaicBERT: A bidirectional encoder optimized for fast pretraining. In Advances in Neural Information Processing Systems, volume 36, pages 3106–3130. Curran Associates, Inc.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky.
2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational
Linguistics, 8:842–866.

Noam Shazeer. 2020. GLU variants improve transformer. arXiv preprint arXiv:2002.05202.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.

Lukas Stankevičius and Mantas Lukoševičius. 2024. Extracting sentence embeddings from pretrained transformer models. Applied Sciences, 14(19).

Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568(C).

Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, and Kevin Gimpel. 2020. A cross-task analysis of text span representations. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 166–176, Online. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.

Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. 2024. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.

Yi-Ge Xu, Xi-Peng Qiu, Li-Gao Zhou, and Xuan-Jing Huang. 2023. Improving BERT fine-tuning via self-ensemble and self-distillation. J. Comput. Sci. Technol., 38(4):853–866.

Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2021. Revisiting few-sample BERT fine-tuning. In International Conference on Learning Representations.

Model          CoLA   MRPC          QNLI   MNLI          SST-2  STS-B  RTE    QQP           WNLI
               MCC    Acc.   F1     Acc.   m      mm     Acc.   Sp.    Acc.   Acc.   F1     Acc.
Train Size     8.5k   3.7k   3.7k   105k   393k   393k   67k    5.75k  2.5k   364k   364k   634
Baseline       53.59  82.43  87.49  90.96  84.27  84.57  92.55  88.47  63.42  90.65  87.40  49.77
Max Seq+MHA    55.35  85.95  89.78  90.73  83.82  84.24  92.74  88.22  66.06  90.65  87.59  55.40
Mean Seq+MHA   55.10  85.62  89.66  90.86  83.78  84.20  92.51  –      66.67  90.58  87.54  54.46

Table 5: Mean vs. max pooling results.
       CoLA   MRPC          RTE
       Acc.   Acc.   F1     Acc.
k=1    54.85  83.58  88.44  63.18
k=2    55.76  85.21  89.29  65.34
k=3    55.35  85.95  89.78  66.06
k=4    56.42  85.29  89.27  65.34
k=6    55.65  85.13  89.25  65.70
k=12   55.41  85.21  89.17  65.46

Table 6: Effect of max-pooling depth k on small GLUE tasks. k = 3 generally yields the best results.

A Appendix

A.1 Ablations

We conduct ablation studies to test different modifications of our architecture. We describe all experiments and report their results in the following.

A.1.1 Choice of k

We experiment with the choice of k for the max-pooling layer on the smaller GLUE datasets (CoLA, MRPC and RTE) and report the results in Table 6. Because we average over three runs with different random seeds, the choice of k does not have a large influence on performance, but k = 3 is clearly the best choice on the datasets tested.

A.1.2 Pooling

Several studies indicate that max-pooling is a stable choice for aggregating information into a single sentence representation. To corroborate these results, we also experiment with mean-pooling. We apply an element-wise mean-pooling operation

$\mathrm{mean}: \mathbb{R}^{k \times t \times d} \rightarrow \mathbb{R}^{t \times d}$   (9)

on every vector of our k chosen layers (defined analogously to the max operation in Section 3.1). Here we also choose k = 3. We report the experimental results on the GLUE validation set in Table 5. Overall, mean-pooling performs similarly well to max-pooling; on RTE it even achieves higher performance. In the end, max-pooling remains the better choice, as it works better on most GLUE datasets.
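For comparison, a minimal sketch of the mean-pooling operation in Eq. (9), under the same tensor shapes as the max-pooling head sketched earlier; the default k is illustrative.

```python
import torch

def mean_pool(layer_states, k=3):
    """Element-wise mean over the last k layers, reducing the stack of
    layer outputs R^{k x t x d} to a single sequence R^{t x d} (Eq. 9)."""
    stacked = torch.stack(layer_states[-k:], dim=0)  # (k, B, T, D)
    return stacked.mean(dim=0)                       # (B, T, D)
```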
“Alexa, can you forget me?” Machine Unlearning Benchmark in Spoken Language Understanding

Alkis Koudounas*1, Claudio Savelli*1, Flavio Giobergia1, Elena Baralis1
1Politecnico di Torino, Italy
name.surname@polito.it
*Both authors contributed equally to this work.

Abstract

Machine unlearning, the process of efficiently removing specific information from machine learning models, is a growing area of interest for responsible AI. However, few studies have explored the effectiveness of unlearning methods on complex tasks, particularly speech-related ones. This paper introduces UnSLU-BENCH, the first benchmark for machine unlearning in spoken language understanding (SLU), focusing on four datasets spanning four languages. We address the unlearning of data from specific speakers as a way to evaluate the quality of potential “right to be forgotten” requests. We assess eight unlearning techniques and propose a novel metric to simultaneously better capture their efficacy, utility, and efficiency. UnSLU-BENCH sets a foundation for unlearning in SLU and reveals significant differences in the effectiveness and computational feasibility of various techniques.

Index Terms: machine unlearning, spoken language understanding, speech recognition, transformers

1. Introduction

Machine unlearning (MU) refers to the process of efficiently removing specific data points from a trained machine learning model without the need for a complete retraining from scratch [1]. This capability is crucial for complying with data privacy regulations, such as the European Union's General Data Protection Regulation (GDPR) [2] and the California Consumer Privacy Act (CCPA) [3], which promote the “right to be forgotten”. By removing the influence of specific data points on machine learning models, MU helps maintain compliance with legal standards and protects user privacy [4].

In the context of speech, MU plays an even more important role. Speech data often contains personally identifiable information, making it particularly sensitive [5–7]. The ability to unlearn specific data ensures that individuals can exercise control over their personal information, thus increasing trust in AI systems. In addition, unlearning mechanisms can help reduce the influence of unreliable data and mitigate biases, contributing to the development of fairer speech recognition models [8–10].

One important example is the interaction with vocal assistants. These models process large amounts of user speech data to perform tasks such as intent classification [11]. Ensuring that these systems can unlearn data from individual users upon request is essential to maintain user autonomy and privacy [12, 13]. Despite the critical nature of this capability, there is a non-negligible gap in existing research on MU tailored to speech tasks. While MU has been explored in other domains, including text [14, 15] and image [16, 17] processing, its application to speech tasks remains under-developed.

The authors of [18] first explore the application of MU techniques for audio and speech processing. However, their study is limited to audio classification tasks and uses only a single speech dataset focused on keyword spotting, a task semantically much less complex than the intent classification challenges faced in Spoken Language Understanding (SLU). This emphasizes the need for MU techniques specifically designed to handle the complexities of SLU tasks.
To fill this gap, we introduce UnSLU-BENCH, the first comprehensive benchmark for machine unlearning in SLU. It includes four intent classification datasets in four different languages: Fluent Speech Commands (FSC) [19] and SLURP [20] in English, ITALIC [21] in Italian, and SpeechMASSIVE [22] in both German and French. For each dataset, we evaluate two transformer models: wav2vec 2.0 [23] and HuBERT [24] for the English datasets, and XLS-R-128 [25] and XLS-R-53 [26] for the other languages. The latter model has been fine-tuned on Automatic Speech Recognition (ASR) for each target language.

UnSLU-BENCH offers a complete analysis of the effectiveness of MU techniques across different model architectures and dataset complexities. We evaluate eight distinct unlearning methods, examining both their effectiveness and computational efficiency in removing specific speakers' data from the models. Our contributions can be summarized in four points: (1) we introduce the first benchmark for machine unlearning in SLU, with four datasets in four languages and two models per dataset; (2) we evaluate eight unlearning techniques, measuring their impact on data removal and model performance; (3) we propose GUM, a novel MU metric considering efficacy, efficiency, and utility of unlearning methods simultaneously; and (4) we provide an in-depth analysis of unlearning performance across datasets, languages, model sizes and architectures. This benchmark¹ aims to advance the development of privacy-preserving techniques in speech tasks, facilitating future research on more trustworthy voice assistant systems.

¹github.com/koudounasalkis/UnSLU-BENCH

2. Machine Unlearning

2.1. Problem definition

We assume a given model $\theta$ that has been trained on an SLU dataset $\mathcal{D}$. Each data point is represented as a triplet $(x, y, s) \in \mathcal{D}$, where $x$ denotes the utterance, $y$ indicates the target intent, and $s$ is the speaker's identity. We refer to the set of all speakers in the training set as $\mathcal{S}$. We now assume that a subset of speakers $\mathcal{S}_f \subset \mathcal{S}$ asks for their data to be deleted. From a data perspective, this simply implies deleting from the database all samples $\mathcal{D}_f = \{(x, y, s) \mid s \in \mathcal{S}_f\}$, referred to as the forget set. However, those samples have affected the learning process of $\theta$. We refer to the remaining samples, i.e., $\mathcal{D}_r = \mathcal{D} \setminus \mathcal{D}_f$, as the retain set. MU is tasked to remove the influence of points in $\mathcal{D}_f$ from $\theta$. In other words, MU algorithms produce a new model $\hat{\theta} = \phi(\theta, \mathcal{D}_r, \mathcal{D}_f)$. As introduced in [27], we adopt the idea of a gold model, i.e., the model $\theta'$ that has been trained using only $\mathcal{D}_r$. The gold model represents MU's ideal target (i.e., we want $\hat{\theta} \approx \theta'$). However, retraining the model from scratch for every forget request is generally unfeasible, especially for larger models – hence the need for unlearning methods.
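A minimal sketch of the forget/retain split defined above, assuming the dataset is simply an iterable of (utterance, intent, speaker) triplets; all names are illustrative.

```python
def split_forget_retain(dataset, forget_speakers):
    """Partition D into the forget set D_f (samples whose speaker asked to
    be forgotten) and the retain set D_r = D \\ D_f."""
    forget_set = [(x, y, s) for (x, y, s) in dataset if s in forget_speakers]
    retain_set = [(x, y, s) for (x, y, s) in dataset if s not in forget_speakers]
    return forget_set, retain_set
```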
2.2. Unlearning methods

UnSLU-BENCH includes eight MU techniques, as follows.

Fine-Tuning (FT) continues to train the model on all of $\mathcal{D}_r$ for one epoch. Since $\mathcal{D}_f$ then goes unseen for one additional epoch, it should become less influential than $\mathcal{D}_r$. This method is commonly used as a baseline in the unlearning framework.

Negative Gradients (NG) [27] fine-tunes the model using all of $\mathcal{D}_f$ only. Unlike normal FT, the gradient direction is reversed during backpropagation to make the model forget $\mathcal{D}_f$.

NegGrad+ (NG+) [28, 29] was proposed as an extension of Negative Gradients to avoid so-called “catastrophic forgetting”, i.e., the destruction of the model's utility. To do this, in addition to NG on $\mathcal{D}_f$, FT is performed on the whole $\mathcal{D}_r$.

Catastrophically forgetting the last k layers (CF-k) [30] applies FT only to the final k layers of the model. The unlearning is thus faster, as backpropagation is applied only to the layers with the most relevant representations while the rest of the network is kept untouched.

UNSIR (UNSIR) [31] includes two phases: first it destroys the model (“impair”), then it rebuilds the model's utility (“repair”). In the first phase, an error-maximizing noise is created for each element of $\mathcal{D}_f$, which is then used to train the model in combination with FT. The second phase consists of another epoch of FT only.

Bad Teaching (BT) [32] uses a competent teacher, i.e., a copy of the original model, and an incompetent teacher, i.e., the same model not fine-tuned on the task, in a distillation setup to train a student to behave like the former on $\mathcal{D}_r$ and like the latter on $\mathcal{D}_f$. We also evaluate a light variant (BT-L) of the method with a random prediction generator as the incompetent teacher.

SCRUB (SCRUB) [29] uses a teacher–student setup with a single teacher, i.e., a copy of the original model. This method combines three different losses: the first maximizes student similarity with the teacher on $\mathcal{D}_r$, the second minimizes it on $\mathcal{D}_f$, and a third task loss improves the final model utility.
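Among these methods, Negative Gradients is the simplest to express in code. Below is a minimal PyTorch sketch of one NG update step, assuming a standard classifier and a batch drawn from the forget set; only the sign flip on the task loss is the method itself, the surrounding interfaces are illustrative.

```python
import torch.nn.functional as F

def negative_gradient_step(model, optimizer, batch):
    """One NG update: ascend (rather than descend) the task loss on a
    forget-set batch, pushing the model away from its fit to D_f."""
    inputs, targets = batch
    logits = model(inputs)
    loss = -F.cross_entropy(logits, targets)  # reversed gradient direction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```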
2.3. Unlearning metrics

The evaluation of unlearning algorithms is not trivial. In the literature [33], the three main aspects of interest are efficacy (whether the unlearning process effectively erased the required information), efficiency (how costly the unlearning process is), and utility (whether the unlearned model still successfully addresses the original task). We argue that all three aspects should be considered at the same time; ignoring any one of them leads to trivial solutions. If we ignore efficacy, the best “unlearned” model is simply the original model: since we are not checking whether the model has actually forgotten anything, this maximizes efficiency (no computational cost) and utility (performance remains the same). If we ignore efficiency, the best solution is to retrain the model from scratch: as we do not consider the cost of retraining, this maximizes efficacy (the model has never seen the forget set) and utility (the model performs as well as possible). If we ignore utility, the best unlearning method is a model that predicts random values: since we do not care about the quality of the results, this maximizes efficacy (the model does not retain any knowledge of the forget set) and efficiency (no additional computation is needed).

Despite these considerations, very few works in the literature account for combinations of these metrics. NoMUS [28] considers efficacy and utility together. The work in [34] selects the most effective method given a utility threshold, while [35] chooses the hyperparameter configuration that maximizes efficacy and then evaluates models based on their efficacy.

In addition, metrics in the literature are typically not considered in relation to the gold model performance. Since MU aims to produce a model that resembles the model retrained from scratch, we argue that it is fundamental to ground all measures to the gold model. We acknowledge, of course, that having access to the gold model is a constraint that is generally only met during model validation and not in deployment. This is a limitation that affects the entire field of MU, and no general, gold-free solution has been proposed yet.

In this work, we introduce a new metric, the Global Unlearning Metric (GUM), which considers all three aspects simultaneously, with comparisons against the gold model. We quantify utility as the similarity in performance between the gold and the unlearned models, $U = 1 - |F1^{(g)}_T - F1^{(u)}_T|$, based on the macro F1 scores² on a test set ($F1^{(g)}_T$ and $F1^{(u)}_T$). We use the MIA (Membership Inference Attack), a commonly adopted metric in unlearning [33], to quantify the efficacy of a method. More specifically, the MIA of the gold model ($\mathrm{MIA}^{(g)}$) is the ideal target, whereas the MIA of the original model ($\mathrm{MIA}^{(o)}$) is the starting point. Based on these boundaries, we quantify the efficacy $E$ as:

$$E = 1 - \left(\frac{\mathrm{MIA}'^{(u)} - \mathrm{MIA}'^{(g)}}{\mathrm{MIA}^{(o)} - \mathrm{MIA}'^{(g)}}\right)^2,$$

where $\mathrm{MIA}'^{(u)} = \min\{\mathrm{MIA}^{(u)}, \mathrm{MIA}^{(o)}\}$ and $\mathrm{MIA}'^{(g)} = \min\{\mathrm{MIA}^{(g)}, (\mathrm{MIA}'^{(u)} + \mathrm{MIA}^{(o)})/2\}$ are saturated versions of the unlearned and gold MIA that guarantee $E \in [0, 1]$ in edge cases. The quantity is squared to increase similarities for small gold–unlearned MIA distances.

²Other scenarios may require a change in utility function.

Table 1: Unlearning on FSC. F1_T denotes the macro F1 on the test set, F1_F on the forget set. Best results (i.e., closest to the gold model for F1 and MIA, highest for the others) are in bold, second-best underlined. Original and gold models are highlighted.

Method |       wav2vec 2.0                |       HuBERT
       | F1_T  F1_F  MIA   GUM   Speedup  | F1_T  F1_F  MIA   GUM   Speedup
Orig.  | .994  1.00  .508  .000  1.00×    | .993  1.00  .511  .000  1.00×
Gold   | .993  .997  .503  .000  1.00×    | .991  .996  .507  .000  1.00×
FT     | .993  .999  .504  .517  7.960×   | .979  .993  .508  .514  7.690×
NG     | .987  .976  .501  .816  206.9×   | .992  .996  .514  .000  201.1×
NG+    | .994  .994  .493  .000  4.030×   | .979  .929  .510  .336  3.900×
CF-k   | .994  1.00  .501  .606  16.97×   | .993  1.00  .505  .642  26.70×
UNSIR  | .991  1.00  .506  .447  6.550×   | .994  .998  .508  .484  6.380×
BT     | .993  1.00  .508  .000  4.780×   | .993  .999  .504  .363  4.650×
BT-L   | .994  .996  .506  .431  5.870×   | .993  .997  .506  .464  5.690×
SCRUB  | .994  1.00  .506  .439  6.210×   | .993  .998  .508  .479  6.220×

Table 2: Comparison of unlearning methods on SLURP* and ITALIC. Best results are in bold, second-best underlined.

Method |       SLURP* (wav2vec 2.0)        |       SLURP* (HuBERT)
       | F1_T  F1_F   MIA   GUM   Speedup  | F1_T  F1_F   MIA   GUM   Speedup
Orig.  | .689  1.000  .628  .000  1.000×   | .712  1.000  .613  .000  1.000×
Gold   | .707  .711   .506  .000  1.000×   | .704  .715   .492  .000  1.000×
FT     | .638  .970   .648  .000  83.78×   | .734  1.000  .611  .088  79.00×
NG     | .695  .986   .604  .563  1748×    | .718  .959   .587  .587  1654×
NG+    | .701  .995   .603  .446  41.63×   | .630  .852   .453  .578  39.30×
CF-k   | .709  1.000  .626  .089  291.9×   | .715  1.000  .608  .196  274.2×
UNSIR  | .673  1.000  .637  .000  64.07×   | .722  1.000  .613  .000  60.44×
BT     | .710  .999   .619  .275  50.35×   | .711  1.000  .613  .000  47.42×
BT-L   | .680  .995   .637  .000  61.74×   | .685  .907   .558  .578  58.11×
SCRUB  | .697  .999   .608  .429  64.82×   | .704  1.000  .600  .350  65.40×

Method |       ITALIC (XLS-R 128)          |       ITALIC (XLS-R 53-IT)
       | F1_T  F1_F   MIA   GUM   Speedup  | F1_T  F1_F   MIA   GUM   Speedup
Orig.  | .688  .894   .632  .000  1.000×   | .778  1.000  .615  .000  1.000×
Gold   | .643  .568   .532  .000  1.000×   | .784  .736   .478  .000  1.000×
FT     | .638  .671   .555  .590  30.80×   | .711  .850   .550  .551  31.10×
NG     | .679  .868   .603  .646  613.4×   | .590  .621   .525  .766  623.0×
NG+    | .658  .001   .932  .000  15.14×   | .743  .936   .582  .418  15.37×
CF-k   | .677  .871   .626  .253  98.59×   | .781  1.000  .609  .201  98.99×
UNSIR  | .636  .830   .621  .328  22.01×   | .775  1.000  .612  .109  22.26×
BT     | .683  .639   .481  .504  17.90×   | .731  .848   .557  .491  17.94×
BT-L   | .686  .651   .518  .558  22.02×   | .729  .876   .564  .499  22.21×
SCRUB  | .442  .357   .533  .536  23.25×   | .770  .990   .610  .164  22.66×
Finally, we quantify the efficiency as the ratio of the logarithms of the unlearning time $T^{(u)}$ and the gold retraining time $T^{(g)}$:

$$T = 1 - \frac{\log(T^{(u)} + 1)}{\log(T^{(g)} + 1)}.$$

We define GUM as the weighted harmonic mean of these three quantities:

$$\mathrm{GUM} = \frac{(1 + \alpha + \beta)\, U E T}{\alpha E T + \beta U T + U E}.$$

The $\alpha$ and $\beta$ parameters assign different importance to the three quantities. Here, we weigh all quantities equally ($\alpha = \beta = 1$).
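For reference, a direct transcription of the three quantities and GUM into code; all names are illustrative, and the two times are assumed to be measured in the same units (e.g., wall-clock seconds).

```python
import math

def gum(f1_u, f1_g, mia_u, mia_g, mia_o, t_u, t_g, alpha=1.0, beta=1.0):
    """Global Unlearning Metric: weighted harmonic mean of utility U,
    efficacy E, and efficiency T, grounded to the gold model."""
    U = 1.0 - abs(f1_g - f1_u)                         # utility
    mia_u_sat = min(mia_u, mia_o)                      # saturated unlearned MIA
    mia_g_sat = min(mia_g, (mia_u_sat + mia_o) / 2)    # saturated gold MIA
    E = 1.0 - ((mia_u_sat - mia_g_sat) / (mia_o - mia_g_sat)) ** 2  # efficacy
    T = 1.0 - math.log(t_u + 1) / math.log(t_g + 1)    # efficiency
    return (1 + alpha + beta) * U * E * T / (alpha * E * T + beta * U * T + U * E)
```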
3. Experimental Setup

Datasets. UnSLU-BENCH includes four publicly available datasets: FSC [19] and SLURP [20] for English, ITALIC [21] for Italian, and SpeechMASSIVE [22] for German and French. The FSC dataset is relatively straightforward, containing 31 intents. In contrast, SLURP, ITALIC, and SpeechMASSIVE are substantially larger, with 60 intents and greater linguistic diversity. ITALIC and SpeechMASSIVE are multilingual extensions of SLURP, covering Italian, and German–French, respectively³. Unlike the other datasets, SLURP does not provide speaker-independent splits, which are, however, required for MU techniques to be effective: the identities present in the retain, forget, and test sets must be disjoint to successfully apply and evaluate unlearning methods. To address this, we propose new speaker-independent splits⁴. In the following tables, we refer to the new dataset as SLURP*. For the other datasets, we use the original splits, with the identities already separated between train and test splits. To create the forget set, individuals with at least 100 associated audio samples were randomly taken from each dataset. This ensures that a sufficiently representative number of points were used for training the model for each individual to be forgotten, and implies that the size of $\mathcal{D}_f$ with respect to $\mathcal{D}_t$ is 2.5–5% across the datasets. In this way, we simulate a realistic scenario of a request to delete one's personal data from a model's training.

³SpeechMASSIVE covers 12 languages, but we focus on German and French only.
⁴These splits are publicly available in our project repository.

Models. For each dataset, we fine-tune two transformer models. For the English datasets, we use wav2vec 2.0 [23] and HuBERT [24] in their base sizes. For the multilingual datasets, we use XLS-R 128 [25] and XLS-R 53 [26]. The latter is ASR-fine-tuned for the target language (e.g., Italian, German, French).

Figure 1: Trade-off between utility (test and forget F1) and efficacy (MIA) on NG, as the LR changes (ITALIC, XLS-R 53-IT).

Unlearning Methods. For each unlearner, we tune over one of two sets of learning rates, depending on how destructive the method is. Specifically, we employ 5e-07, 1e-06, and 5e-06 for NG, NG+, BT, BT-L, SCRUB, and 1e-05, 5e-05, and 1e-04 for FT, CF-k, UNSIR. For each experiment, we consider the method that achieves the highest utility, efficacy, and efficiency as the best, through the use of GUM. Moreover, since the original implementation of UNSIR was designed to forget entire classes within the dataset, we use the version proposed by [28], which is also applicable to individual samples.

4. Results

In the following, we present experiments conducted to explore the behavior of MU techniques in SLU.

Benchmark results. The analysis of Tables 1–3 shows distinct patterns in unlearning methods for MU performance across different models and datasets. The best F1 and MIA results are measured by their distance from the gold model, our target.

NG consistently achieves the highest GUM scores. For instance, for wav2vec 2.0, it outperforms the second-best approach by +35% on FSC and +26% on SLURP*. For the larger multilingual XLS-R 53 model, it improves GUM by +39% on ITALIC and SpeechMASSIVE de-DE and by +48% on SpeechMASSIVE fr-FR. This improvement comes from its exceptional efficiency (speedups up to 1748× on FSC) and strong efficacy (MIA close to the gold models, often ranking first or second among competitors, especially on the multilingual datasets).

Table 3: Comparison of unlearning methods on SpeechMASSIVE de-DE and fr-FR. Best results are in bold, second-best underlined.

Method |       de-DE (XLS-R 128)           |       de-DE (XLS-R 53-DE)
       | F1_T  F1_F   MIA   GUM   Speedup  | F1_T  F1_F   MIA   GUM   Speedup
Orig.  | .584  .841   .621  .000  1.000×   | .778  1.000  .622  .000  1.000×
Gold   | .566  .529   .513  .000  1.000×   | .745  .706   .493  .000  1.000×
FT     | .498  .548   .543  .588  34.34×   | .661  .905   .585  .464  17.79×
NG     | .550  .726   .562  .797  1078×    | .764  .957   .587  .643  558.7×
NG+    | .540  .567   .487  .522  16.89×   | .759  .878   .568  .431  8.770×
CF-k   | .587  .865   .622  .000  109.9×   | .777  1.000  .616  .208  56.93×
UNSIR  | .565  .788   .616  .197  27.46×   | .785  1.000  .619  .114  14.23×
BT     | .584  .789   .582  .489  20.02×   | .726  .945   .585  .418  10.41×
BT-L   | .584  .786   .576  .523  24.87×   | .729  .948   .587  .434  12.94×
SCRUB  | .584  .780   .600  .429  26.86×   | .781  1.000  .615  .211  13.43×

Method |       fr-FR (XLS-R 128)           |       fr-FR (XLS-R 53-FR)
       | F1_T  F1_F   MIA   GUM   Speedup  | F1_T  F1_F   MIA   GUM   Speedup
Orig.  | .410  .572   .629  .000  1.000×   | .756  1.000  .635  .000  1.000×
Gold   | .469  .460   .509  .000  1.000×   | .772  .800   .520  .000  1.000×
FT     | .400  .465   .539  .545  18.12×   | .759  .974   .627  .255  18.42×
NG     | .317  .349   .564  .749  597.3×   | .768  .935   .617  .501  610.2×
NG+    | .382  .008   .882  .000  8.900×   | .759  .943   .620  .317  9.230×
CF-k   | .436  .594   .612  .414  58.23×   | .770  1.000  .624  .338  58.86×
UNSIR  | .420  .591   .620  .259  14.67×   | .768  1.000  .633  .089  14.94×
BT     | .411  .583   .597  .409  10.60×   | .772  .981   .621  .317  10.82×
BT-L   | .412  .574   .591  .447  13.18×   | .727  .981   .623  .306  13.42×
SCRUB  | .409  .532   .611  .358  13.68×   | .769  1.000  .633  .089  13.94×

NG+ achieves slightly higher F1_T and F1_F scores than NG in some cases, with comparable MIA scores. However, its overall GUM score is significantly lower, as its speedup is one order of magnitude smaller than NG's. NG+ also suffers from the “catastrophic forgetting” phenomenon in some cases, such as XLS-R 128 (F1_F = .001 on ITALIC, .008 on SpeechMASSIVE fr-FR). FT balances utility and efficacy well for complex models. For example, XLS-R 128
on ITALIC achieves F1_T = .638, close to the gold model (F1_T = .643). However, it is less efficient due to full-network updates, with speedups ranging from 7.96× to 83.78×. CF-k delivers mixed results. It is the second-most efficient method but updates only the final layers, which risks incomplete unlearning. This is evident in its higher MIA scores compared to the gold models (e.g., .612–.624 vs. gold .493–.520 on SpeechMASSIVE de-DE and fr-FR). The Bad Teaching variants (BT, BT-L) show dataset-dependent performance: they achieve good GUM scores on FSC and SLURP* but perform poorly with the larger multilingual models on ITALIC and SpeechMASSIVE. SCRUB and UNSIR perform poorly in GUM, as they achieve moderate speedups (6.21×–65.40× and 6.55×–64.07×, respectively) but have inconsistent efficacy.

In conclusion, while most prior works [28, 36–38] emphasize efficacy and utility while ignoring efficiency, GUM bridges this gap by integrating all three factors. Although more recent alternatives have been proposed, we show that NG remains one of the most well-rounded approaches, performing consistently well across all metrics, as summarized by its large GUM scores.

(Un)learning rate. Given a fixed computing budget, the learning rate (LR) is an important parameter influencing the final effect of gradient-based unlearning. A small LR implies a lighter effect on the model: the original utility is preserved, but the unlearning effect is limited. A large LR instead affects the model more significantly, producing better unlearning but hurting overall performance. We study this effect empirically for a fixed unlearning technique, NG. The trade-off between utility and efficacy is clearly shown in Figure 1.

Advantages of GUM. In Table 4, we compare GUM against NoMUS, the weighted average between model accuracy and MIA [28]. We first note that both the Original and Gold models (two trivial “unlearning” approaches) achieve large NoMUS scores but obtain – by definition – a GUM score of 0. UNSIR deteriorates efficacy, with a MIA score worse than the original model's; as a consequence, it obtains GUM = 0. However, the same method achieves NoMUS = .700. This unexpectedly large value is due to the fact that NoMUS does not contextualize MIA scores w.r.t. the gold and original values. Finally, NG and SCRUB score similarly in terms of utility (F1_T) and efficacy (MIA), resulting in similar NoMUS scores. However, NG is 1748 times faster than retraining, whereas SCRUB is “only” 65 times faster. This (large!) gap in efficiency is reflected in
the GUM scores (.563 vs. .429).

Table 4: Unlearning metrics on SLURP*, wav2vec 2.0.

Method   F1_T   MIA    Speedup   NoMUS   GUM
Orig.    .689   .628   1.000×    .717    .000
Gold     .707   .506   1.000×    .848    .000
NG       .695   .604   1748×     .744    .563
UNSIR    .673   .637   64.07×    .700    .000
SCRUB    .697   .608   64.82×    .741    .429

Unlearning in SLURP*. Table 5 finally studies the trade-off between model utility and unlearning efficacy tied to training duration. We consider SLURP* and produce various Original models, fine-tuned for different numbers of epochs (5 to 60); we then apply unlearning with NG+.

Table 5: Variation in the difficulty of unlearning as the number of training epochs changes, wav2vec 2.0, SLURP*. Each experiment uses NG+ with LR = 5e-07.

Epochs   F1_T   F1_T^(g)   MIA    MIA^(g)   MIA^(o)   GUM
5        .395   .398       .496   .510      .561      .678
7        .383   .419       .524   .515      .566      .680
11       .499   .487       .480   .492      .593      .686
15       .564   .550       .538   .491      .589      .644
60       .696   .707       .611   .506      .628      .421

At 60 epochs, the unlearned model achieves near-gold utility (F1_T = .696 vs. F1_T^(g) = .707) but shows limited forgetting: its MIA (.611) is close to that of the original model (.628), indicating persistent memorization of the forget set. This suggests that prolonged training creates rigid decision boundaries that retain speaker-specific patterns, making unlearning interventions less effective. In other words, the model overfits the training data, making it harder to forget. Conversely, shorter training durations (5–15 epochs) show better alignment with the gold model (MIA .480–.538 vs. gold .491–.515). The ideal operating point appears to be around 11 epochs – sufficient training to recover utility (F1_T = .499) while maintaining low memorization risk (MIA = .480), before overfitting dominates. This demonstrates that effective MU requires careful calibration of training duration to balance how well the model learns against how permanently training data gets encoded.

5. Conclusion

This paper introduced UnSLU-BENCH, a novel benchmark for machine unlearning techniques in SLU. We analyzed eight MU techniques across four datasets, with two model architectures and sizes each. We also introduced GUM, a new metric that simultaneously evaluates the three key MU targets: efficacy, efficiency, and utility. UnSLU-BENCH provides a foundation for evaluating MU in SLU, highlighting the need for further research to develop more trustworthy voice-based AI systems.

6. Acknowledgments

This work is supported by the FAIR - Future Artificial Intelligence Research and received funding from the European Union NextGenerationEU (PIANO NAZIONALE DI RIPRESA E RESILIENZA (PNRR) – MISSIONE 4 COMPONENTE 2, INVESTIMENTO 1.3 – D.D. 1555 11/10/2022, PE00000013) and the spoke “FutureHPC & BigData” of the ICSC - Centro Nazionale di Ricerca in High-Performance Computing, Big Data and Quantum Computing funded by the European Union - NextGenerationEU. This manuscript reflects only the authors' views and opinions; neither the European Union nor the European Commission can be considered responsible for them.

7. References

[1] H. Xu, T. Zhu, L. Zhang, W. Zhou, and P. S. Yu, “Machine unlearning: A survey,” ACM Comput. Surv., vol. 56, no. 1, Aug. 2023. [Online]. Available: https://doi.org/10.1145/3603620
[2] P. Voigt and A. Von dem Bussche, “The EU General Data Protection Regulation (GDPR),” A Practical Guide, 1st Ed., Cham: Springer International Publishing, vol. 10, no. 3152676, pp. 10–5555, 2017.
[3] E. Goldman, “An introduction to the California Consumer Privacy Act (CCPA),” Santa Clara Univ. Legal Studies Research Paper, 2020.
[4] J. Xu, Z. Wu, C. Wang, and X. Jia, “Machine unlearning: Solutions and challenges,” IEEE Transactions on Emerging Topics in Computational Intelligence, 2024.
[5] A. Nautsch, C. Jasserand, E. Kindt, M. Todisco, I. Trancoso, and N. Evans, “The GDPR & speech data: Reflections of legal and technology communities, first steps towards a common understanding,” arXiv preprint arXiv:1907.03458, 2019.
[6] A. Koudounas, E. Pastor, V. Mazzia, M. Giollo, T. Gueudre, E. Reale, G. Attanasio, L. Cagliero, S. Cumani, L. De Alfaro, E. Baralis, and D. Amberti, “Leveraging confidence models for identifying challenging data subgroups in speech models,” in 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2024, pp. 134–138.
[7] A. Koudounas, E. Pastor, V. Mazzia, M. Giollo, T. Gueudre, E. Reale, L. Cagliero, S. Cumani, L. de Alfaro, E. Baralis, and D. Amberti, “Privacy preserving data selection for bias mitigation in speech models,” in ACL 2025 Industry Track, 2025. [Online]. Available: https://openreview.net/forum?id=UGViDDIXKd
[8] R. Chen, J. Yang, H. Xiong, J. Bai, T. Hu, J. Hao, Y. Feng, J. T. Zhou, J. Wu, and Z. Liu, “Fast model debias with machine unlearning,” in NeurIPS, vol. 36, 2023.
[9] A. Koudounas, F. Giobergia, E. Pastor, and E. Baralis, “A contrastive learning approach to mitigate bias in speech models,” in Proc. Interspeech 2024, 2024, pp. 827–831.
[10] E. Hine, C. Novelli, M. Taddeo, and L. Floridi, “Supporting trustworthy AI through machine unlearning,” Science and Engineering Ethics, vol. 30, no. 5, p. 43, 2024.
[11] X. Ma and S. Chen, “From speech to data: Unraveling Google's use of voice data for user profiling,” arXiv preprint arXiv:2403.05586, 2024.
[12] R. Singh, Profiling Humans from Their Voice. Springer, 2019.
[13] Y. Mehta, N. Majumder, A. Gelbukh, and E. Cambria, “Recent trends in deep learning based personality detection,” Artificial Intelligence Review, vol. 53, no. 4, pp. 2313–2339, 2020.
[14] J. Jang, D. Yoon, S. Yang, S. Cha, M. Lee, L. Logeswaran, and M. Seo, “Knowledge unlearning for mitigating privacy risks in language models,” arXiv preprint arXiv:2210.01504, 2022.
[15] R. Eldan and M. Russinovich, “Who's Harry Potter? Approximate unlearning in LLMs,” arXiv preprint arXiv:2310.02238, 2023.
[16] G. Li, H. Hsu, C.-F. Chen, and R. Marculescu, “Machine unlearning for image-to-image generative models,” in The Twelfth International Conference on Learning Representations.
[17] Z. Liu, G. Dou, Z. Tan, Y. Tian, and M. Jiang, “Machine unlearning in generative AI: A survey,” CoRR, 2024.
[18] I. Mason-Williams, J. Han, H. Yannakoudakis, and C. Mascolo, “Machine unlearning in audio: Bridging the modality gap via the prune and regrow paradigm,” 2025. [Online]. Available: https://openreview.net/forum?id=i3tBySZWrR
[19] L. Lugosch,
M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, “Speech model pre-training for end-to-end spoken language understanding,” in Proc. INTERSPEECH 2019.
[20] E. Bastianelli, A. Vanzo, P. Swietojanski, and V. Rieser, “SLURP: A spoken language understanding resource package,” in EMNLP, 2020.
[21] A. Koudounas, M. La Quatra, L. Vaiani, L. Colomba, G. Attanasio, E. Pastor, L. Cagliero, and E. Baralis, “ITALIC: An Italian Intent Classification Dataset,” in Proc. INTERSPEECH 2023, 2023, pp. 2153–2157.
[22] B. Lee, I. Calapodescu, M. Gaido, M. Negri, and L. Besacier, “Speech-MASSIVE: A multilingual speech dataset for SLU and beyond,” in Proc. INTERSPEECH 2024, 2024, pp. 817–821.
[23] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” in NeurIPS, vol. 33, 2020.
[24] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, “HuBERT: Self-supervised speech representation learning by masked prediction of hidden units,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021.
[25] A. Babu et al., “XLS-R: Self-supervised cross-lingual speech representation learning at scale,” in Proc. INTERSPEECH 2022.
[26] A. Conneau, A. Baevski, R. Collobert, A. Mohamed, and M. Auli, “Unsupervised cross-lingual representation learning for speech recognition,” Proc. INTERSPEECH 2021.
[27] A. Golatkar, A. Achille, and S. Soatto, “Eternal sunshine of the spotless net: Selective forgetting in deep networks,” in CVPR, 2020.
[28] D. Choi and D. Na, “Towards machine unlearning benchmarks: Forgetting the personal identities in facial recognition systems,” arXiv preprint arXiv:2311.02240, 2023.
[29] M. Kurmanji, P. Triantafillou, J. Hayes, and E. Triantafillou, “Towards unbounded machine unlearning,” NeurIPS, vol. 36, 2024.
[30] S. Goel, A. Prabhu, A. Sanyal, S.-N. Lim, P. Torr, and P. Kumaraguru, “Towards adversarial evaluations for inexact machine unlearning,” arXiv preprint arXiv:2201.06640, 2022.
[31] A. K. Tarun, V. S. Chundawat, M. Mandal, and M. Kankanhalli, “Fast yet effective machine unlearning,” IEEE Transactions on Neural Networks and Learning Systems, 2023.
[32] V. S. Chundawat, A. K. Tarun, M. Mandal, and M. Kankanhalli, “Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, 2023, pp. 7210–7217.
[33] J. Hayes, I. Shumailov, E. Triantafillou, A. Khalifa, and N. Papernot, “Inexact unlearning needs more careful evaluations to avoid a false sense of privacy,” arXiv preprint arXiv:2403.01218, 2024.
[34] “Unlearning sensitive content from large language models - SemEval 2025 challenge,” https://llmunlearningsemeval2025.github.io/, 2024, [Accessed 20-02-2025].
[35] X. F. Cadet, A. Borovykh, M. Malekzadeh, S. Ahmadi-Abhari, and H. Haddadi, “Deep unlearn: Benchmarking machine unlearning,” arXiv preprint arXiv:2410.01276, 2024.
[36] K. Grimes, C. Abidi, C. Frank, and S. Gallagher, “Gone but not forgotten: Improved benchmarks for machine unlearning,” arXiv preprint arXiv:2405.19211, 2024.
[37] Z. Jin, P. Cao, C. Wang, Z. He, H. Yuan, J. Li, Y. Chen, K. Liu, and J. Zhao, “RWKU: Benchmarking real-world knowledge unlearning for large language models.”
HDLxGraph: Bridging Large Language Models and HDL Repositories via HDL Graph Databases

Pingqing Zheng, University of Minnesota, Twin Cities, Minneapolis, MN, USA, pingqingzheng13@gmail.com
Jiayin Qin, University of Minnesota, Twin Cities, Minneapolis, MN, USA, qin00162@umn.edu
Fuqi Zhang, University of Minnesota, Twin Cities, Minneapolis, MN, USA, zhan7076@umn.edu
Shang Wu, Northwestern University, Evanston, USA, swu@u.northwestern.edu
Yu Cao, University of Minnesota, Twin Cities, Minneapolis, MN, USA, yucao@umn.edu
Caiwen Ding, University of Minnesota, Twin Cities, Minneapolis, MN, USA, dingc@umn.edu
Yang (Katie) Zhao, University of Minnesota, Twin Cities, Minneapolis, MN, USA, zhao1948@umn.edu

Abstract—Large Language Models (LLMs) have demonstrated their potential in hardware design tasks, such as Hardware Description Language (HDL) generation and debugging. Yet, their performance in real-world, repository-level HDL projects with thousands or even tens of thousands of code lines is hindered. To this end, we propose HDLxGraph, a novel framework that integrates Graph Retrieval-Augmented Generation (Graph RAG) with LLMs, introducing HDL-specific graph representations that incorporate Abstract Syntax Trees (ASTs) and Data Flow Graphs (DFGs) to capture both a code graph view and a hardware graph view. HDLxGraph utilizes a dual-retrieval mechanism that not only mitigates the limited-recall issues inherent in similarity-based semantic retrieval by incorporating structural information, but also enhances its extensibility to various real-world tasks through task-specific retrieval fine-tuning. Additionally, to address the lack of comprehensive HDL search benchmarks, we introduce HDLSearch, a multi-granularity evaluation dataset derived from real-world repository-level projects. Experimental results demonstrate that HDLxGraph significantly improves average search accuracy, debugging efficiency and completion quality by 12.04%, 12.22% and 5.04% compared to similarity-based RAG, respectively. The code of HDLxGraph and the collected HDLSearch benchmark are available at https://github.com/Nick-Zheng-Q/HDLxGraph.

Index Terms—Graph RAG, Hardware description language, LLM agent

I. INTRODUCTION

Recent advances in Large Language Models (LLMs) for software language understanding and generation [1], [2] have inspired efforts to extend their capabilities to facilitate Hardware Description Language (HDL) code design. Prior works have demonstrated LLMs' potential in generating [3]–[5] and debugging [6] HDL code [7], [8]. However, LLM performance in HDL-related tasks remains hindered by limited training data and degradation caused by long prompts. To address these issues, researchers have integrated Retrieval-Augmented Generation (RAG), which retrieves relevant HDL fragments from high-quality HDL repositories to supplement knowledge gaps and reduce prompt length [8], [9].

Despite its potential, existing RAG approaches in HDL predominantly rely on similarity-based semantic retrieval, which exhibits low recall when encountering intricate queries or large, complex HDL repositories. Figure 1 shows an HDL debugging example for a CV32E40P RISC-V HDL implementation, which consists of over 30 modules [10]. The similarity-based RAG approach relies solely on semantic similarity between the user query and code module names, making it vulnerable to vocabulary mismatches. For instance, a query may contain only an ambiguous description, or the relevant code may exist as an unnamed block within HDL repositories.
Inspired by recent advancements in Graph RAG [11], [12] and the characteristics of HDL code, we propose integrating graph-based structures into HDL-specific RAG to address the aforementioned challenges. Specifically, we introduce HDLxGraph, a novel hybrid graph-enhanced RAG framework, which incorporates two HDL-specific graphs:
Abstract Syntax Trees (ASTs) and Data Flow Graphs (DFGs). Using ASTs, we partition an HDL repository with several thousand lines of code into a code graph view containing multi-level entity relationships, while DFGs provide a more precise hardware graph view of signal-level flow that reflects the circuit topology. By integrating structural properties with semantic information, HDLxGraph significantly enhances LLMs' understanding of code structures through AST retrieval, enabling multi-level reasoning for complex HDL code and ambiguous queries, while demonstrating extensibility across three downstream applications: code search, debugging, and completion, through signal-level task-specific retrieval achieved by the DFG.

Fig. 1. (Top) An illustration of the mismatch between HDL and natural language in conventional RAG, including structural and vocabulary mismatches. And (Bottom) a demonstration of HDLxGraph's efficiency in bridging these mismatches by incorporating graph information, using an HDL debugging example for a CV32E40P RISC-V HDL implementation [10].

Furthermore, due to the identified absence of comprehensive HDL code search benchmarks containing question–answer pairs with multi-level relationships (as in the example in Figure 1), we extend the HDLxGraph framework to address this gap, generating a benchmark dubbed HDLSearch. Our key contributions are summarized as follows:

•We propose HDLxGraph, a novel LLM-driven RAG framework that leverages a dual-retrieval mechanism based on AST and DFG retrieval. Specifically, it takes into account the alignment across different hierarchical levels in the AST and incorporates task-specific retrieval at the signal level within the DFG, thereby enabling more fine-grained retrieval than conventional RAG and demonstrating extensibility across various tasks. To the best of our knowledge, HDLxGraph is the first framework to integrate HDLs' inherent graph structures with RAG.

•HDLxGraph implements a repository-level HDL graph database with a hybrid graph view, where the AST graph provides the code structure view while the DFG graph represents the hardware graph view.
The database construction also considers cross-file relationships, thereby providing a more accurate and consistent graph representation of projects at the repository level.

•Based on HDLxGraph, we further construct a new LLM-generated dataset for HDL code search with data cleaning and evaluation, called HDLSearch, which derives a query benchmark from real-world repository-level HDL
projects, to close the gap of insufficient search datasets for HDL code.

•Integrating HDLxGraph with three LLMs of various scales and different coding abilities, we demonstrate the versatility of HDLxGraph on three real-world HDL tasks, i.e., code search, debugging, and completion. Experiments demonstrate that our framework exhibits competitive performance on two widely-used benchmarks [13], [14] for code completion and debugging, as well as on HDLSearch for code search.

The remaining sections are organized as follows. Section II provides an overview of the application of LLMs in hardware design, alongside a review of conventional Verilog code structural abstractions and graph-based RAG techniques. Section III presents a detailed explanation of the HDLxGraph workflow, which integrates AST and DFG abstraction and employs a multi-hierarchy approach to generate the HDLSearch benchmark, addressing the benchmark gap in hardware search. Section IV reports thorough experimental results on three hardware downstream tasks, and finally, Section V concludes the paper.

II. PRELIMINARIES

A. LLM-aided HDL Tasks

Generation. Although LLMs excel at generating simple HDL designs, they still struggle with complex repository-level chip designs, as demonstrated in previous work [15]–[23]. For example, state-of-the-art (SOTA) works [9], [22]–[24] rely on templates or custom RAG datasets provided by human experts, using LLMs to fill in fixed-level content while overlooking the generation as a whole.

Debugging. Existing work also exhibits certain limitations when using LLMs for repository-level complex debugging [13], [25]–[28]. The LLM4DV [25] framework utilizes LLMs to generate test stimuli. Though performing well on simple tasks, it fails to achieve high coverage on more complex chip designs. Additionally, [26] integrates LLMs with RAG to identify and patch functional HDL bugs. However, it still relies on manually defined error types, limiting LLMs' potential for understanding-based bug fixing.

Search. Precise code search is the foundation of RAG for both HDL generation and debugging. While no prior work has focused directly on HDL search, recent studies have examined LLMs' potential in HDL summarization [3], which is a pre-step for HDL search, as well as EDA Q&A tasks [8]. However, these works do not consider HDL's inherent hierarchical structure, preventing their direct application to precise code search. Additionally, previous work falls short on tasks beyond its targeted objectives, limiting generalizability. Our proposed HDLxGraph is a unified RAG-assisted framework designed to address these three tasks while exploring LLMs' potential for repository-level HDL code.

B. Graph Retrieval Augmented Generation

Graph Retrieval-Augmented Generation (Graph RAG) leverages the structured nature of knowledge graphs and integrates them into the RAG framework [11] to enable more complex structured reasoning and context-aware responses. Recent studies suggest that Graph RAG outperforms classical RAG-based LLM systems on certain software code tasks [29]–[31]. Inspired by this, our proposed HDLxGraph leverages the unique structure of HDL's ASTs and DFGs to optimize hardware design via Graph RAG. Details are given in Section III-B.
Fig. 2. The overview of our proposed HDLxGraph framework.

C. HDL Code Structure

Graph code views, such as the AST, DFG, and Control Flow Graph (CFG), have been adopted for a more comprehensive understanding of software programming languages. Although they share syntactic similarities with software languages, HDLs introduce unique representational complexities in three aspects: explicit timing modeling, inherent parallelism, and rigorous bit-width specifications [32], [33]. Specifically, Verilog's always blocks enable concurrent execution, while the assign statement facilitates continuous assignment; both differ from constructs in software languages. Therefore, directly inheriting the graph views from software code is infeasible. We conduct an in-depth study of HDL-specific graphs and propose a graph database with hybrid representations by AST and DFG in Section III-A.

D. Benchmarks/Datasets for LLM-aided Tasks

For HDL code generation benchmarks, RTLLM [34] consists of 30 designs, and VerilogEval [14] presents an evaluation dataset consisting of 156 problems from HDLBits. For HDL code debugging benchmarks, LLM4SecHW [13] contains bug localization and repair test sets from version control data on GitHub; RTLFixer [24] introduces a Verilog syntax debugging dataset derived from VerilogEval [14]; and CirFix [35] includes a bug repair benchmark with testbenches. No existing benchmark has been established for HDL search, which is an essential step for downstream tasks such as generation and debugging. Therefore, we propose HDLSearch, the first benchmark for HDL code search, which derives a query benchmark from real-world repository-level HDL projects.

III. METHODOLOGY

Figure 2 illustrates the comprehensive workflow of our proposed HDLxGraph framework, which consists of three steps: 1) Graph Database Preparation, 2) Multi-level Retrieval, and 3) Downstream Task Completion. Beginning with Step 1, we extract ASTs and DFGs from the input code repositories through the AST and DFG parsers, then store HDL entities and relationships as nodes and edges in a graph database (see Section III-A). In Step 2 (see Section III-B), HDLxGraph utilizes a Decomposer agent in AST retrieval to decompose the input query into structural levels, which are then sent to pre-defined search paradigms to retrieve relevant fine-grained code snippets. Additionally, code debugging and completion tasks trigger DFG retrieval in parallel to narrow the search space or enable similarity matching between incomplete and complete code snippets. HDLxGraph supports three real-world HDL downstream tasks: Step 3 fuses the retrieved code snippets with LLMs to support code debugging, completion, and search, which further demonstrates the generality of our framework (see Section III-B).

In addition, due to the lack of a code search benchmark for HDL repositories, we generated a new benchmark, called HDLSearch, based on HDLxGraph, as shown in Figure 5, composed of three steps: 1) Manual Filtering, 2) Query Generation, 3) Benchmark Generation. Details of the benchmark generation are presented in Section III-C.

A. Graph Database Preparation

As shown in Step 1 of Figure 2, the HDLxGraph RAG framework begins with an offline graph database construction. The graph database represents HDL repositories through nodes and edges that correspond to HDL entities and their relationships. Without loss of representativity, we focus
Please note that, although different HDLs have different syntactic properties, they share the same three-level structural abstraction, i.e., (module → block → signal) in Verilog. Specifically, we use an AST to support the code graph view that emphasizes multi-level structural relationships in HDL, and a DFG to facilitate the hardware graph view focusing on signal flow reflecting circuit topology, providing a comprehensive and tailored representation of the HDL repository. The AST graph incorporates node types such as MODULE, BLOCK, and SIGNAL connected through CONTAINS and INSTANTIATE edge types, whereas the DFG graph introduces TEMP nodes alongside SIGNAL nodes, connected via FLOWS_TO, TRUE, FALSE, and COND edges. When constructing the entire graph database, there are three main sub-steps:

1) Parsing. The graph database construction begins with analyzing each individual HDL file in the repository using a Pyverilog-based [36] AST and DFG parser. For AST parsing, we extract the cross-level dependency information of MODULE, BLOCK, and SIGNAL from each Verilog file to represent the fine-grained hierarchical code structure. Note that the block level (always, assign, initial) here represents behavioral abstraction at the register-transfer level, defining concurrent hardware operations. Concurrently, we generate the hardware signal flow for DFG parsing, which characterizes the transmission and interaction between signals. The DFG graph incorporates both the signal directions and the dependency relationships between signals, reflecting the functionality and processing flow of a circuit. This multi-granularity representation enables our database to store both the code structure and the hardware behavior of a single HDL file, thereby facilitating a more comprehensive graph abstraction of the HDL, as shown in Figure 3.

Fig. 3. Visualization of an example in the graph database: (1) the source code of two Verilog modules (incrementer and top_module); (2) the resulting nodes and edges (example nodes: MODULE "incrementer"/"top_module", BLOCK "always..."/"assign...", SIGNAL "clk"/"rst"/"in_data"/"out_data", TEMP "Branch_in_data_4'd0"; example edges: CONTAINS, INSTANTIATE, FLOWS_TO, COND; BLOCK meta-data such as type, code, and code_embedding); and (3) the graph visualization with Module, Block, Signal, and Temp nodes.
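To make sub-step 1) concrete, below is a minimal sketch, not the authors' released parser, of how MODULE, BLOCK, and SIGNAL entities might be pulled out of a single Verilog file with Pyverilog [36]. The function name and the simplified declaration handling are our assumptions; ANSI-style port lists and DFG extraction are omitted.

# Minimal sketch (assumptions noted above): extract MODULE/BLOCK/SIGNAL
# entities from one Verilog file via Pyverilog's AST parser.
from pyverilog.vparser.parser import parse
from pyverilog.vparser import ast as vast

def extract_entities(verilog_file):
    """Return (modules, blocks, signals) found in a single Verilog file."""
    source, _directives = parse([verilog_file])
    modules, blocks, signals = [], [], []
    for definition in source.description.definitions:
        if not isinstance(definition, vast.ModuleDef):
            continue
        modules.append(definition.name)
        for item in definition.items or []:
            # always/assign/initial items become BLOCK nodes
            if isinstance(item, (vast.Always, vast.Assign, vast.Initial)):
                blocks.append((definition.name, item.__class__.__name__))
            # wire/reg/port declarations become SIGNAL nodes
            elif isinstance(item, vast.Decl):
                for decl in item.list:
                    signals.append((definition.name, decl.name))
    return modules, blocks, signals

Each returned entity would then be stored as a node in the graph database, with CONTAINS edges linking a module to its blocks and signals.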
code : "always @(posedge clk or posedge rst) begin ......."AST DFG AST DFGCONT AINS : ("top_module")->("always_led_reg") INITIA TES : ("top_module")->("u_incrementer") FLOWS_T O: ("inc_result")->("led_reg") COND : ("rst")->("Branch_incrementer .in_ data_ 4'd0") (3) visualizationFLOWS_TO FLOWS_TOFALSEFLOWS_TO F… FLOWS_TO FLOWS_TO C…F… TRUEFLOWS_TO FLOW …CONTAINSCONTAINS CONTAINSCONTAINSCONTAINS CONTAINS CONTAINSCONTAINS CONTAI… CONTAINSCONTAINS CONTAINSCONTAINS CONTAINS INS…CONTAINSCONTAINSCONTAINS CONTAINS CO NTAINSCONTAINS CONTAINS FLOWS_TO FLOWS_TOFALSEFLOWS… FLOWS_TO FLOWS_TO C…FLOWS_TO TRUE CONTAINSCONTAINS C…CONTAI… CONTAINS CONTAINSCONTAINS CONTAINSCONTAINS CONTAINS CONTAINS CONTAINSCONTAINSCONTAINSCONTAINS in_datadata_reg rstout_data Branch_… 4'd0_gra…Plus_gra… 'd1_grap…top_mo… input wire clk, i…wire [3:0] inc_r… u_incre…reg [3:0] led_re…always @(pos… assign led = led_… led inc_resultclk swled_reg rstBranch_… 4'd0_gra… increme… input wire clk,
2) Meta-data generation. After parsing, we generate embeddings for nodes (both MODULE and BLOCK) via code encoding to facilitate semantic search. These embeddings, together with the node attributes extracted by the parser, are recorded in the database as part of the node meta-data. Typically, the meta-data includes multiple attributes of each node; for instance, the meta-data for BLOCK nodes contains attributes such as block type, code, and embedding, as illustrated in Figure 3. We use CodeT5+ [1], a SOTA code LLM, to directly generate the embedding for our code, avoiding description generation.

3) Cross-file Relationship Construction. Finally, we address the absence of cross-file relationships, namely the module INSTANTIATE relationships. We search for the module node whose name matches that recorded in the meta-data of an instance block to establish the cross-file and cross-module relationships.

The developed graph database provides multi-level code exploration spanning from module-level abstractions to signal-level implementations, thereby positioning our HDL graph database as an extensible framework for multiple downstream tasks, thanks to the modular fashion of its database schema management.

B. Multi-level Retrieval and Downstream Task Completion

Multi-level Retrieval: In real-world hardware project issues, user queries often contain rich contextual cues, such as module names, functional descriptions, and sometimes brief code snippets, offering hints for retrieval. Specifically, user queries can be used to extract multi-level structural information, which then guides the following multi-level AST retrieval¹. In addition, signal-level flow through DFG retrieval is adopted for code completion and debugging tasks.

¹Figure 9 in Appendix A demonstrates this through two real-world issues submitted to the CVA6 [37] and OpenTitan [38] projects, where highlighted hints guide HDL retrieval.

AST Retrieval. HDLxGraph constructs a hierarchical representation of HDL codebases through an AST-based graph, enabling multi-level HDL retrieval as depicted in Figure 4.

Fig. 4. Flow of multi-level retrieval containing AST and DFG retrieval (query decomposition into module-, block-, and signal-level queries; top-k selection and filtering; cross-level rerank; and DFG graph-traverse/graph-similarity paths for debugging and completion).

For AST retrieval, we follow three sub-steps:

1) Query Decomposition. HDLxGraph employs an LLM agent, called Decomposer, to decompose the original query into three abstraction levels: module, block, and signal, thereby extracting structural information. It supports intricate queries from various downstream tasks, such as "Find some certain blocks under a certain module" in Search, or "Some functions in some certain modules have led to the following errors ...". We thereby obtain multi-level queries that capture the structural information inherent in the original query.

2) Top-k Selection and Filtering. Leveraging Verilog's inherent three-level abstraction (module → block → signal), HDLxGraph first retrieves the top-k candidate modules and blocks with the highest similarity scores to the decomposed query at the corresponding levels based on semantic matching, then filters valid module-block pairs through containment relationships, as sketched below.
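As a minimal sketch of this semantic top-k step: the snippet below embeds a query and candidate BLOCK codes with CodeT5+ and ranks candidates by cosine similarity. We assume the public "Salesforce/codet5p-110m-embedding" checkpoint (the paper does not name the exact encoder variant), and in HDLxGraph the node embeddings would already be precomputed and stored in the database rather than computed on the fly.

# Minimal sketch (checkpoint choice is our assumption): semantic top-k
# matching between a decomposed query and candidate code blocks.
import torch
from transformers import AutoModel, AutoTokenizer

CKPT = "Salesforce/codet5p-110m-embedding"   # assumed CodeT5+ embedding model
tokenizer = AutoTokenizer.from_pretrained(CKPT, trust_remote_code=True)
model = AutoModel.from_pretrained(CKPT, trust_remote_code=True)

def embed(code: str) -> torch.Tensor:
    ids = tokenizer.encode(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(ids)[0]                 # one dense vector per snippet

def top_k(query: str, candidates: list, k: int = 5):
    """Rank candidate block codes by cosine similarity to the query."""
    q = embed(query)
    scores = [torch.cosine_similarity(q, embed(c), dim=0).item()
              for c in candidates]
    ranked = sorted(zip(scores, range(len(candidates))), reverse=True)
    return ranked[:k]                        # (similarity, candidate index)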
To facilitate precise code retrieval at different levels, a suite of retrieval APIs is introduced, as detailed in Table I of Appendix A. Since we select Neo4j as the graph database, the query APIs are written in Cypher to interact with the database.
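For illustration, here is a sketch of how one of the Table I APIs (search_module_block) might be expressed in Cypher through the official Neo4j Python driver. The connection details and the exact property names (name, code) are our guesses based on the schema in Section III-A, not the released queries.

# Sketch only: one retrieval API issued against Neo4j via Cypher.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # assumed credentials

def search_module_block(module_name: str):
    """Return the code of all BLOCK nodes contained in a given MODULE."""
    query = (
        "MATCH (m:MODULE {name: $module})-[:CONTAINS]->(b:BLOCK) "
        "RETURN b.code AS code"
    )
    with driver.session() as session:
        return [record["code"]
                for record in session.run(query, module=module_name)]

Parameterized queries ($module) keep the API safe to call with arbitrary user-derived module names.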
3) Cross-level Rerank. Finally, we rerank results using averaged similarity scores. Since the signal-level representation lacks code context, it is challenging to directly obtain an accurate similarity score for the signal-level query. Therefore, HDLxGraph extracts all filtered module-block pairs that contain the signal and computes their average similarity score as the signal-level similarity score, and the signal with the highest score is taken as the retrieved signal. This hierarchical approach ensures fine-grained retrieval of HDL's structural information across multiple abstraction layers while maintaining compatibility with similarity-based semantic analysis, balancing precision and scalability in hardware database exploration.

DFG Retrieval. The DFG is composed of signal-level variables and relationships. As a result, it is useful when signal-level information is needed, and the way it is used can vary greatly across downstream tasks. In this work, we utilize the signal-level flow to enhance code completion and code debugging tasks, as illustrated in Figure 2. There are two primary DFG retrieval operations for the different tasks, Signal Traverse and Similarity-based Extract:

Fig. 5. HDLSearch benchmark generation flow (Step 1: manual filtering of RTL repositories; Step 2: query generation for blocks, modules, and signals; Step 3: benchmark generation with internal evaluation).

1) Debugging. For debugging tasks, if a signal mismatch is detected, the debugging process can iteratively traverse the DFG upstream with the Signal Traverse operation, starting from the faulty signal and inspecting each node (e.g., operators, multiplexers, or instance outputs) to identify where the dataflow diverges from expected behavior. This approach guides LLM debugging by focusing only on the subgraph directly influencing the problematic signal, filtering out irrelevant code regions. By extracting the immediate upstream nodes and their associated code blocks, the system generates a concise, context-rich error candidate set.

2) Completion. While we want to retrieve code similar to the unfinished code, some reference code may look different yet have similar functionality, because the hardware (i.e., dataflow) it describes is very similar. Graph embedding offers a viable approach for Verilog code completion by translating code's structural and semantic relationships into a unified mathematical framework. By leveraging GraphSAGE [39], these graphs are compressed into low-dimensional vector representations that preserve contextual patterns, such as recurring HDL constructs (e.g., finite state machines, pipelined operations) or common coding idioms (e.g., non-blocking assignments in clock-driven blocks). When a developer writes partial code, the corresponding subgraph is embedded and compared against historical embeddings using similarity metrics, enabling the system to infer likely completions, even with incomplete structures, by prioritizing nodes critical to the current context. This allows real-time retrieval of relevant patterns from large codebases while adhering to Verilog-specific constraints.
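A minimal sketch of this graph-embedding idea, using GraphSAGE layers from PyTorch Geometric: a two-layer encoder compresses a (possibly incomplete) DFG into a single vector via mean pooling, and two graphs are compared by cosine similarity. The feature dimensions, pooling choice, and names are our assumptions; the paper does not specify the encoder configuration.

# Sketch under assumed dimensions: GraphSAGE-based DFG similarity.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_mean_pool

class DFGEncoder(torch.nn.Module):
    """Two-layer GraphSAGE encoder pooled to one embedding per graph."""
    def __init__(self, in_dim=64, hidden=128, out_dim=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, out_dim)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)   # (num_graphs, out_dim)

def graph_similarity(encoder, g_a, g_b):
    """Cosine similarity between two torch_geometric Data graphs."""
    za = encoder(g_a.x, g_a.edge_index,
                 torch.zeros(g_a.num_nodes, dtype=torch.long))
    zb = encoder(g_b.x, g_b.edge_index,
                 torch.zeros(g_b.num_nodes, dtype=torch.long))
    return F.cosine_similarity(za, zb).item()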
Downstream Task Completion: The propagation trajectory of error signals establishes causal dependencies within hardware description constructs, enabling LLMs to trace fault origins through backward-chaining analysis. Meanwhile, dataflow graph analysis enables LLMs to identify functionally equivalent code patterns by detecting structural similarities in hardware operations, even when the surface syntax differs. This approach allows semantic-aware code completion beyond literal text matching.
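The backward-chaining analysis above amounts to a bounded upstream walk over the DFG. The sketch below uses networkx for illustration (the production graph lives in Neo4j); the hop bound and function names are ours. Edges are assumed to point in the direction of dataflow, so a faulty signal's drivers are its graph ancestors.

# Sketch of the Signal Traverse idea on an in-memory DFG.
import networkx as nx

def error_candidate_set(dfg: nx.DiGraph, faulty_signal: str, hops: int = 3):
    """Collect upstream nodes within `hops` FLOWS_TO steps of the bad signal."""
    reversed_dfg = dfg.reverse(copy=False)
    lengths = nx.single_source_shortest_path_length(
        reversed_dfg, faulty_signal, cutoff=hops)
    # drop the faulty signal itself; keep its drivers as LLM debugging context
    return {node for node, dist in lengths.items() if dist > 0}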
C. HDL Search Benchmark

Observing the absence of an HDL code search benchmark, we aim to establish a specific benchmark to address this gap. However, manually creating an expert-annotated benchmark is time-consuming and labor-intensive, making it economically impractical. Therefore, based on the multi-level hierarchical framework of HDLxGraph, we propose to leverage LLMs to construct a benchmark dubbed HDLSearch, as shown in Figure 5. The benchmark generation can be divided into three main sub-steps:

1) Manual Filtering. Our corpus originates from RTL-Repo [40], a collection of publicly accessible GitHub repositories specializing in HDLs. Unlike conventional software repositories, HDL projects tend to lack structured documentation and standardized code organization, making automated repository filtering particularly challenging. To address this limitation, we first perform manual filtering and select 10 representative repositories at different difficulty levels, ranging from educational FPGA projects and interconnection protocols to commercial CPUs.

2) Query Generation. Adopting a hierarchical framework in which blocks serve as the fundamental level, we implement a multi-stage generation process. Initial functional block descriptions are first generated, then systematically propagated through two parallel pathways: 1) Signal-level annotation: through contextual information, the semantics of a functional block can be inherited by its associated signals, thereby effectively annotating these signals with specific functionalities; and 2) Module-level abstraction: by designing a set of explicit and tailored prompts, we enable the LLMs to analyze and summarize the interactions among individual functional blocks as a module-level description. This dual-path flow ensures consistent semantic alignment between fine-grained signal behaviors and coarse-grained module operations. Once all descriptions are finished, repo-specific information such as module and signal names is removed to generate a relatively ambiguous query.

3) Benchmark Refinement. To further ensure benchmark validity, we employ an iterative refinement process using templated instructions (shown in Appendix B). Through multiple rounds of evaluation and regeneration, we gradually remove unsuitable queries and align the LLM-generated queries with practical engineering requirements until the process reaches the defined termination count K. After that, manual adjustments are undertaken to address the few remaining gaps between LLM outputs and the actual search intent.

IV. EXPERIMENTS

A. Experimental Configuration and Platforms

To explore the capabilities of the proposed HDLxGraph framework, we evaluate it on three HDL downstream tasks: code search, code debugging, and code completion. The task-specific benchmarks and experimental metrics are detailed in the following subsections. We equip HDLxGraph with three LLMs of different sizes: Claude-3.5-Sonnet [41], a large model with strong coding ability; Qwen2.5-Coder-7B [42], a coding-specific model of medium size; and LLaMA-3.1 [43], a general-purpose model of relatively small size. We use top-p = 1.0 and temperature = 0.7 as our basic configuration. All experiments are run on a 2xA6000 Linux GPU server, and all benchmark evaluations conduct 10 independent experimental trials per task to ensure statistical robustness.

Fig. 6. HDL search MRR comparison with baselines (BM25: 0.5467; similarity-based RAG: 0.5017; HDLxGraph: 0.6671).
B. Code Semantic Search

Benchmark: Considering the absence of benchmarks for HDL-specific code search, we use our proposed HDLSearch (see Section III-C) as the benchmark, with termination count K = 7 during generation.
The generated benchmark comprises 40 module-level queries, 100 block-level queries, and 200 signal-level queries, with 6,300 code blocks from 10 repositories serving as distractors, i.e., the retrieval scope. The evaluation focuses on block-level retrieval, which serves as the fundamental level with the highest extensibility to other downstream tasks, as mentioned before.

Metric: We adopt the mean reciprocal rank (MRR), widely used in RAG, as the primary metric; it assesses whether the framework returns correct results among the top-ranked outputs:

$$\mathrm{MRR} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\mathrm{rank}_i} \quad (1)$$

Baselines: We compare HDLxGraph against two commonly used similarity-based RAG methods, BM25 [44] and CodeT5+ embeddings [1].

Evaluation Results: As shown in Figure 6, HDLxGraph achieves superior performance in block-level search evaluated with all 100 block-level queries (an average 12.04% MRR improvement), demonstrating its potential for accurate HDL search over complex repository-level code.

C. Code Debugging

Benchmark: We evaluate HDLxGraph's capability in handling real-world, repository-level debugging challenges and choose LLM4SecHW [13] as the benchmark, which extracts and refines data from the version control systems of open-source repository-level hardware designs. Specifically, we choose the mor1kx repository [45], an OpenRISC processor IP core, for our evaluation². The mor1kx repository contains 5 git commit SHAs covering different debugging issues.

²Since HDLxGraph's AST and DFG parsers currently do not support SystemVerilog syntax, we leave further debugging evaluation on SystemVerilog repositories as future work.

Metric: Following LLM4SecHW, we choose the ROUGE-N F1 score [46] as the evaluation metric, which measures the direct N-gram overlap between a prediction and a reference, considering both precision and recall. The parameter N can be set to 1, 2, and L, corresponding to matching at the unigram, bigram, and longest-common-subsequence level, respectively.

Fig. 7. HDL debugging comparison with baselines (ROUGE-1/2/L F1 scores of Accurate-RAG, similarity-based RAG, and HDLxGraph on LLaMA3.1, Qwen2.5-Coder-7B, and Claude-3.5-Sonnet).

Baselines: We compare our framework with two RAG strategies: the CodeT5+ embedding search strategy [1], denoted "Similarity-based RAG", which represents the conventional similarity-based RAG approach, and the accurate-RAG debugging strategy, denoted "Accurate-RAG", which relies on human effort to extract the exact buggy code segments to be modified, serving as a theoretical top-tier RAG baseline.

Evaluation Results: As illustrated in Figure 7, HDLxGraph achieves higher ROUGE-1, ROUGE-2, and ROUGE-L scores than similarity-based RAG under all scenarios, and approaches the performance of the top-tier baseline. This demonstrates HDLxGraph's potential in handling real-world debugging issues.

D. Code Completion

Benchmark: We evaluate code completion capabilities using VerilogEval-Human v2 [14] with RTLLM [34] as a reference implementation.

Metric: We apply the pass@k metric [47] to assess the generation pass rate:

$$\text{pass@}k = \mathbb{E}_{\text{Problems}}\left[1 - \frac{\binom{n-c_p}{k}}{\binom{n}{k}}\right] \quad (2)$$

where $n$ is the total number of generations, $c_p$ is the number of successes, and $k$ is the number of attempts considered. We apply pass@1 in our experiments.
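For reference, both metrics are straightforward to compute; a sketch follows. The pass@k form is the standard unbiased estimator over n generations with c correct samples per problem; the helper names are ours.

# Reference sketch of Eqs. (1) and (2).
from math import comb

def mrr(first_correct_ranks):
    """Eq. (1): mean reciprocal rank over N queries (1-based ranks)."""
    return sum(1.0 / r for r in first_correct_ranks) / len(first_correct_ranks)

def pass_at_k(n, c, k):
    """Eq. (2), per problem: probability that at least one of k sampled
    generations out of n (with c correct) passes."""
    if n - c < k:          # every size-k draw must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)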
Baselines: We compare HDLxGraph against two baselines: direct LLM completion without RAG, and similarity-based RAG using CodeT5+, as described in Section IV-C.
Evaluation Results: As shown in Figure 8, HDLxGraph consistently improves pass@1 accuracy by 3-10% across various LLMs. While our evaluation framework operates at module granularity rather than full repository scope, we strategically employ the RTLLM [48] codebase as a RAG corpus, thereby maintaining repository-level evaluation. The higher accuracy suggests HDLxGraph's generalizability across different abstraction levels, highlighting that structural code understanding significantly benefits completion tasks even at sub-repository granularity.

Fig. 8. HDL completion Pass@1 comparison with baselines (per-model Pass@1 values: LLaMA3.1 12.50% / 13.78% / 23.72%; Qwen2.5-Coder-7B 29.81% / 29.82% / 32.41%; Claude-3.5-Sonnet 72.44% / 73.40% / 76.00%, with HDLxGraph highest in each group).

V. CONCLUSION AND FUTURE WORK

In this work, we propose HDLxGraph, a novel hybrid graph-enhanced RAG framework that innovatively combines AST-based structural matching with DFG-aware code retrieval. Experimental validation across semantic search, debugging, and code completion tasks demonstrates improvements of 12.04%, 12.22%, and 5.04%, respectively, over conventional methods, proving the effectiveness of joint structural-semantic retrieval for HDL applications.

This work establishes graph-enhanced retrieval as a viable paradigm for hardware engineering assistance, with broader implications for code-intensive domains requiring precise program analysis. Future directions may include multi-view HDL representation learning to bridge the semantic gap between natural language specifications and circuit implementations. This direction could enable comprehensive support for heterogeneous downstream tasks in electronic design automation, from specification validation to cross-module optimization.

REFERENCES

[1] Yue Wang, Hung Le, Akhilesh Gotmare, Nghi Bui, Junnan Li, and Steven Hoi. CodeT5+: Open code large language models for code understanding and generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, December 2023.
[2] Anton Lozhkov, Raymond Li, Loubna Ben Allal, et al. StarCoder 2 and The Stack v2: The next generation. arXiv, 2024.
[3] Yang Zhao, Di Huang, Chongxiao Li, Pengwei Jin, Ziyuan Nan, Tianyun Ma, Lei Qi, Yansong Pan, Zhenxing Zhang, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Xing Hu, and Yunji Chen. CodeV: Empowering LLMs for Verilog generation through multi-level summarization. arXiv, 2024.
[4] Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, and Siddharth Garg. VeriGen: A large language model for Verilog code generation. arXiv, 2023.
[5] Xi Wang, Gwok-Waa Wan, Sam-Zaak Wong, Layton Zhang, Tianyang Liu, Qi Tian, and Jianmin Ye. ChatCPU: An agile CPU design and verification platform with LLM. In Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC '24), New York, NY, USA, 2024. Association for Computing Machinery.
[6] Xufeng Yao, Haoyang Li, Tsz Ho Chan, Wenyi Xiao, Mingxuan Yuan, Yu Huang, Lei Chen, and Bei Yu. HDLdebugger: Streamlining HDL debugging with large language models. arXiv, 2024.
[7] Luyao Shi, Michael Kazda, Bradley Sears, Nick Shropshire, and Ruchir Puri. Ask-EDA: A design assistant empowered by LLM, hybrid RAG and abbreviation de-hallucination. In 2024 IEEE LLM Aided Design Workshop (LAD), pages 1-5, 2024.
[8] Yuan Pu, Zhuolun He, Tairu Qiu, Haoyuan Wu, and Bei Yu.
Customized retrieval augmented generation and benchmarking for EDA tool documentation QA. arXiv, 2024.
[9] Mingzhe Gao, Jieru Zhao, Zhe Lin, Wenchao Ding, Xiaofeng Hou, Yu Feng, Chao Li, and Minyi Guo. AutoVCoder: A systematic framework for automated Verilog code generation using LLMs. arXiv, 2024.
[10] Michael Gautschi, Pasquale Davide Schiavone, Andreas Traber, Igor Loi, Antonio Pullini, Davide Rossi, Eric Flamand, Frank Gurkaynak, and Luca Benini. Near-threshold RISC-V core with DSP extensions for scalable IoT endpoint devices. February 2017.
[11] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. From local to global: A graph RAG approach to query-focused summarization. arXiv, 2024.
[12] Zirui Guo, Lianghao Xia, Yanhua Yu, Tu Ao, and Chao Huang. LightRAG: Simple and fast retrieval-augmented generation. arXiv, 2024.
[13] Weimin Fu, Kaichen Yang, Raj Gautam Dutta, Xiaolong Guo, and Gang Qu. LLM4SecHW: Leveraging domain-specific large language model for hardware debugging. In 2023 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), pages 1-6, 2023.
[14] Mingjie Liu, Nathaniel Pinckney, Brucek Khailany, and Haoxing Ren. VerilogEval: Evaluating large language models for Verilog code generation. In 2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2023.
[15] Jason Blocklove, Siddharth Garg, Ramesh Karri, and Hammond Pearce. Chip-Chat: Challenges and opportunities in conversational hardware design. arXiv preprint arXiv:2305.13243, 2023.
[16] Yongan Zhang, Zhongzhi Yu, Yonggan Fu, Cheng Wan, and Yingyan (Celine) Lin. MG-Verilog: Multi-grained dataset towards enhanced LLM-assisted Verilog generation. In The First IEEE International Workshop on LLM-Aided Design (LAD '24), 2024.
[17] Shailja Thakur, Baleegh Ahmad, Hammond Pearce, Benjamin Tan, Brendan Dolan-Gavitt, Ramesh Karri, and Siddharth Garg. VeriGen: A large language model for Verilog code generation. ACM Trans. Des. Autom. Electron. Syst., 29(3), April 2024.
[18] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474, 2022.
[19] Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin Tan, Ramesh Karri, Brendan Dolan-Gavitt, and Siddharth Garg. Benchmarking large language models for automated Verilog RTL code generation. In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1-6. IEEE, 2023.
[20] Shang Liu, Wenji Fang, Yao Lu, Qijun Zhang, Hongce Zhang, and Zhiyao Xie. RTLCoder: Outperforming GPT-3.5 in design RTL generation with our open-source dataset and lightweight solution. In 2024 IEEE LLM Aided Design Workshop (LAD), pages 1-5. IEEE, 2024.
[21] Hasan Genc, Seah Kim, Alon Amid, Ameer Haj-Ali, Vighnesh Iyer, Pranav Prakash, Jerry Zhao, Daniel Grubb, Harrison Liew, Howard Mao, Albert Ou, Colin Schmidt, Samuel Steffl, John Wright, Ion Stoica, Jonathan Ragan-Kelley, Krste Asanovic, Borivoje Nikolic, and Yakun Sophia Shao. Gemmini: Enabling systematic deep-learning architecture evaluation via full-stack integration. In 2021 58th ACM/IEEE Design Automation Conference (DAC), pages 769-774, 2021.
[22] Deepak Vungarala, Mahmoud Nazzal, Mehrdad Morsali, Chao Zhang, Arnob Ghosh, Abdallah Khreishah, and Shaahin Angizi. SA-DS: A dataset for large language model-driven AI accelerator design generation. 2024.
[23] Yonggan Fu, Yongan Zhang, Zhongzhi Yu, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, and Yingyan Celine Lin.
GPT4AIGChip: Towards next-generation AI accelerator design automation via large language models. In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1-9, 2023.
[24] Yun-Da Tsai, Mingjie Liu, and Haoxing Ren. RTLFixer: Automatically fixing RTL syntax errors with large language models. arXiv, 2023.
[25] Zixi Zhang, Greg Chadwick, Hugo McNally, Yiren Zhao, and Robert Mullins. LLM4DV: Using large language models for hardware test stimuli generation. arXiv, abs/2310.04535, 2023.
[26] Khushboo Qayyum, Muhammad Hassan, Sallar Ahmadi-Pour, Chandan Kumar Jha, and Rolf Drechsler. From bugs to fixes: HDL bug identification and patching using LLMs and RAG. In 2024 IEEE LLM Aided Design Workshop (LAD), pages 1-5, 2024.
[27] Ke Xu, Jialin Sun, Yuchen Hu, Xinwei Fang, Weiwei Shan, Xi Wang, and Zhe Jiang. MEIC: Re-thinking RTL debug automation using LLMs. arXiv, 2024.
[28] Baleegh Ahmad, Shailja Thakur, Benjamin Tan, Ramesh Karri, and Hammond Pearce. On hardware security bug code fixes by prompting large language models. IEEE Transactions on Information Forensics and Security, 19:4043-4057, 2024.
[29] Kounianhua Du, Jizheng Chen, Renting Rui, Huacan Chai, Lingyue Fu, Wei Xia, Yasheng Wang, Ruiming Tang, Yong Yu, and Weinan Zhang. CodeGRAG: Bridging the gap between natural language and programming language via graphical retrieval augmented generation. arXiv, 2024.
[30] Xiangyan Liu, Bo Lan, Zhiyuan Hu, Yang Liu, Zhicheng Zhang, Fei Wang, Michael Shieh, and Wenmeng Zhou. CodexGraph: Bridging large language models and code repositories via code graph databases. arXiv, 2024.
[31] Ibrahim Abdelaziz, Julian Dolby, James P. McCusker, and Kavitha Srinivas. A toolkit for generating code knowledge graphs. In The Eleventh International Conference on Knowledge Capture (K-CAP), 2021.
[32] IEEE standard for Verilog hardware description language. IEEE Std 1364-2005 (Revision of IEEE Std 1364-2001), pages 1-590, 2006.
[33] M. Gordon. The semantic challenge of Verilog HDL. In Proceedings of Tenth Annual IEEE Symposium on Logic in Computer Science, pages 136-145, 1995.
[34] Yao Lu, Shang Liu, Qijun Zhang, and Zhiyao Xie. RTLLM: An open-source benchmark for design RTL generation with large language model. In 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), pages 722-727. IEEE, 2024.
[35] Hammad Ahmad, Yu Huang, and Westley Weimer. CirFix: Automatically repairing defects in hardware design code. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '22), pages 990-1003, New York, NY, USA, 2022.
[36] Shinya Takamaeda-Yamazaki. Pyverilog: A Python-based hardware design processing toolkit for Verilog HDL. In Applied Reconfigurable Computing, volume 9040 of Lecture Notes in Computer Science, pages 451-460. Springer International Publishing, April 2015.
[37] F. Zaruba and L. Benini. The cost of application-class processing: Energy and performance analysis of a Linux-ready 1.7-GHz 64-bit RISC-V core in 22-nm FDSOI technology. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 27(11):2629-2640, November 2019.
[38] Scott Johnson, Dominic Rizzo, Parthasarathy Ranganathan, Jon McCune, and Richard Ho. Titan: Enabling a transparent silicon root of trust for cloud. In Hot Chips: A Symposium on High Performance Chips, volume 194, page 10, 2018.
[39] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. 2018.
[40] Ahmed Allam and Mohamed Shalan. RTL-Repo: A benchmark for evaluating LLMs on large-scale RTL design projects. arXiv, 2024.
[41] Anthropic. Model card addendum: Claude 3.5 Haiku and upgraded Claude 3.5 Sonnet.
[42] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-Coder technical report. arXiv, 2024.
[43] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, et al. The Llama 3 herd of models. arXiv, 2024.
[44] Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389, 2009.
[45] OpenRISC. mor1kx. https://github.com/openrisc/mor1kx, 2022.
[46] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[47] Andre Nakkab, Sai Qian Zhang, Ramesh Karri, and Siddharth Garg. Rome was not built in a single step: Hierarchical prompting for LLM-based chip design. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD (MLCAD '24), New York, NY, USA, 2024. Association for Computing Machinery.
[48] Yao Lu, Shang Liu, Qijun Zhang, and Zhiyao Xie. RTLLM: An open-source benchmark for design RTL generation with large language model. In 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), pages 722-727, 2024.

APPENDIX

A. Multi-level Retrieval

Figure 9 demonstrates two real-world repository-level examples used for multi-level retrieval, from CVA6 and OpenTitan, respectively. The highlighted parts indicate the retrieval queries.

(Issue CVA6-2732) [BUG] Cross-privilege TLB leakage through SLS: Our microarchitectural fuzzer has found that CVA6 is susceptible to SLS (straight-line speculation [1]) and thus allows leakage through the TLB across privileges. Since speculatively issued loads and stores from a higher privilege access the TLB, their addresses can be recovered from a lower privilege. Thus, privileged code that (architecturally) does not leak any sensitive data through its control flow or memory operations leaks transiently to an unprivileged attacker. We provide a snippet from the generated test case below: ...

(Issue Opentitan-26355) [sram_ctrl,rtl] Remove macro timing assumptions: Some parts of the sram_ctrl design (e.g., the readback feature) make assumptions about the timing of the underlying SRAM macro (e.g., a read always comes back at the next cycle). We should identify those assumptions and rewrite the design such that the controller can handle different SRAM macros.

Fig. 9. Two real-world issues posted in repository-level projects.

Table I shows the code retrieval APIs, comprising five APIs: search_module, search_block, search_signal, search_module_block, and search_module_signal. The output can be either the name or the code at different abstraction levels, depending on the API definition.
TABLE I. LIST OF CODE RETRIEVAL APIS

API name             | Description                     | Output
search_module        | Search for module               | Name of module
search_block         | Search for block                | Code of block
search_signal        | Search for signal               | Name of signal
search_module_block  | Search for block in the module  | Code of block
search_module_signal | Search for signal in the module | Name of signal

B. HDLSearch Benchmark

Figure 10 shows the template instructions used in the HDLSearch benchmark refinement:

"Through the annotations, we want to measure how relevant these results are to your Verilog design. - You don't have to be absolutely certain about the correctness of the code. - You might be interested in copy-pasting the code, finding a project to use, or just getting some understanding of how something is implemented. - You might be searching within your project (e.g., to reuse modules, signals, or testbench constructs) or to understand how a particular digital circuit is structured. Please annotate the results according to the following scheme: - 3: Exact match. This Verilog snippet is exactly what I was looking for. I would directly integrate it into my design with minimal adaptations. - 2: Strong match. The snippet largely meets my requirements. I might use it as a backbone for my hardware module, but some modifications or additional verification might be needed. - 1: Weak match. Although the snippet is not a perfect fit, it contains useful structural elements, coding patterns, or testbench ideas that could guide further exploration. - 0: Totally irrelevant. This snippet does not address the query or Verilog design challenge at all."

Fig. 10. Template instructions used in the HDLSearch benchmark refinement.

Table II describes the scale and classification of the proposed HDLSearch benchmark. The benchmark contains 10 repository-level designs ranging from FPGA projects to CPUs: coffee machine, CNN acc, image compression, AIB, IIC, RIFFA, Ethernet, AXIS, MIPS, and E203-hbirdv2.

TABLE II. HDLSEARCH BENCHMARK DESCRIPTION AND SCALES

Design            | Description                                                  | Lines  | Modules | Blocks
FPGA Project
coffee machine    | An FPGA-based coffee machine control circuit                 | 890    | 8       | 26
CNN acc           | An FPGA-based CNN accelerator                                | 866    | 9       | 22
image compression | An FPGA-based JPEG image compression circuit                 | 2340   | 18      | 79
Interconnection
AIB               | Advanced Interface Bus protocol                              | 9994   | 14      | 184
IIC               | Inter-Integrated Circuit protocol                            | 1540   | 2       | 11
RIFFA             | Reusable Integration Framework for FPGA Accelerators protocol| 41279  | 5       | 24
Ethernet          | Ethernet protocol                                            | 130457 | 12      | 177
AXIS              | AXI-Stream protocol                                          | 15269  | 11      | 132
CPU
MIPS              | A MIPS RISC-V CPU                                            | 706    | 9       | 23
E203-hbirdv2      | The second version of the Hummingbird E203 RISC-V processor  | 38577  | 6       | 88
arXiv:2505.15702v1 [cs.CL] 21 May 2025

LyapLock: Bounded Knowledge Preservation in Sequential Large Language Model Editing

Peng Wang, Biyu Zhou, Xuehai Tang, Jizhong Han, Songlin Hu
Institute of Information Engineering, Chinese Academy of Sciences
Correspondence: {wangpeng2022, zhoubiyu, tangxuehai, hanjizhong, husonglin}@iie.ac.cn

Abstract

Large Language Models often contain factually incorrect or outdated knowledge, giving rise to model editing methods for precise knowledge updates. However, current mainstream locate-then-edit approaches exhibit a progressive performance decline during sequential editing, due to inadequate mechanisms for long-term knowledge preservation. To tackle this, we model sequential editing as constrained stochastic programming. Given the challenges posed by the cumulative preservation-error constraint and the gradually revealed editing tasks, LyapLock is proposed. It integrates queuing theory and Lyapunov optimization to decompose the long-term constrained programming into tractable stepwise subproblems for efficient solving. This is the first model editing framework with rigorous theoretical guarantees, achieving asymptotically optimal editing performance while meeting the constraints of long-term knowledge preservation. Experimental results show that our framework scales sequential editing capacity to over 10,000 edits while stabilizing general capabilities and boosting average editing efficacy by 11.89% over SOTA baselines. Furthermore, it can be leveraged to enhance the performance of baseline methods. Our code is released at https://github.com/caskcsg/LyapLock.

1 Introduction

Large Language Models (LLMs), with their powerful capabilities in knowledge storage and recall, have become a research hotspot in the field of natural language processing (Brown et al., 2020; Huang et al., 2022; Liu et al., 2024). However, studies reveal that the knowledge acquired by LLMs during the pre-training phase may contain incorrect or outdated content (Cao et al., 2021; Mitchell et al., 2022a). This makes the updating of model knowledge an urgent and critical issue. Traditional solutions, such as re-pretraining or full-parameter fine-tuning, can facilitate knowledge updates, but their prohibitive computational costs severely limit practical application (Gupta et al., 2023; Yao et al., 2023).

Recent years have witnessed growing interest in low-cost knowledge updating through model editing techniques (Wang et al., 2025). Among these, the locate-then-edit paradigm, exemplified by ROME (Meng et al., 2022) and MEMIT (Meng et al., 2023), has emerged as the mainstream framework, owing to its demonstrated advantages in editing efficiency and precision. This paradigm operates through two key phases: (1) identifying the critical parameter subset W associated with target knowledge via causal tracing analysis, and (2) achieving the update of the target knowledge within the parameter space by computing and implementing appropriate perturbations Δ.

To prevent unintended degradation of pretrained knowledge during target knowledge updates, perturbation strategies necessitate meticulous design. The prevailing approach (Meng et al., 2022, 2023) involves constructing and solving a bi-objective loss function that integrates a preservation loss and an editing loss to achieve optimized knowledge updating.
The former maintains the stability of knowledge representations intended for retention, while the latter ensures accurate updating of target knowledge. However, because the preservation loss serves merely as a soft constraint, the model's capability to retain knowledge and generate fluent sentences after editing is prone to instability.
Recent studies attempt to alleviate this issue by imposing supplementary constraints (e.g., regularized weight updating in RECT (Gu et al., 2024) and null-space projection in AlphaEdit (Fang et al., 2025)) during the parameter search process. Nevertheless, these approaches remain inherently restricted by their heuristic nature.

Furthermore, existing methods have largely focused on single-edit incremental optimization for the immediate editing state, lacking a rigorous theoretical framework to regulate the long-term cumulative trends of successive edits in practical deployment scenarios (Hartvigsen et al., 2023; Wang et al., 2024). As a result, the accumulation of preservation loss inevitably erodes model stability during sequential editing operations, ultimately leading to model forgetting and collapse (Fang et al., 2025; Gupta et al., 2024). Our experimental results show that as the edit count increases, model parameters gradually deviate from their initial values, evidenced by a monotonic increase in preservation loss (as in Figure 1(a)). After 10,000 consecutive edits, the performance of downstream tasks exhibits near-complete degradation (as in Figure 1(b)).

Figure 1: Comparison of preservation loss and downstream task performance of LLaMA3 (Meta, 2024) during sequential editing of 10,000 samples using current methods and LyapLock (details in Sec. 4).

To address these challenges, this paper reformulates the conventional bi-objective optimization problem into a constrained long-term optimization problem for sequential editing. The objective is to minimize the long-term editing loss under a constraint on the cumulative preservation loss, as shown in Figure 2. However, due to the uncertainty of subsequent editing tasks and the preservation loss constraint, achieving a global optimum for this stochastic programming problem poses a significant challenge. To this end, we propose LyapLock, the first framework providing theoretical stability guarantees for sequential model editing through a Lyapunov-driven formulation. Through rigorous theoretical proofs, we demonstrate that it achieves asymptotically near-optimal editing performance while satisfying long-term preservation loss constraints.

To validate effectiveness, extensive experiments are conducted on representative LLMs, including GPT-2 XL (Radford et al., 2019), GPT-J (Wang and Komatsuzaki, 2021), and LLaMA-3-8B (Meta, 2024). Results demonstrate that after sequentially editing 10,000 samples, our method achieves an 11.89% improvement in editing performance compared to the best baseline (94.41% vs. 82.52%), while maintaining stable performance across multiple downstream tasks (where all baseline methods degrade by 100%). Notably, our method exhibits exceptional scalability: when the editing scale extends to 20,000, the model still maintains its general capability. In addition, our method is compatible with existing knowledge editing methods and can improve their editing performance by 9.76% and their downstream task performance by 32.63%.

Figure 2: A formal comparison between LyapLock and current methods.

2 Preliminary

2.1 Hidden States of LLMs

LLMs typically consist of an embedding layer, L decoder layers, and an output layer. A decoder layer has an attention module (Attn) and a multi-layer perceptron (MLP) module.
Given the structural diversity of LLMs, low-level variations (e.g., residual connections, normalization, and biases) are omitted for brevity in this paper.
For an input x, the hidden state h^l at the l-th layer is expressed as:

$$h^l = h^{l-1} + a^l + m^l,\quad a^l = \mathrm{Attn}^l(h^{l-1}),\quad m^l = \mathrm{MLP}^l(a^l) = W^l_{\mathrm{out}}\,\mathrm{act}\big(W^l_{\mathrm{in}}(a^l + h^{l-1})\big) \quad (1)$$

Here, a^l and m^l are the outputs of Attn^l and MLP^l. MLP^l contains two linear layers with parameters $W^l_{\mathrm{in}} \in \mathbb{R}^{d_0\times d_1}$ and $W^l_{\mathrm{out}} \in \mathbb{R}^{d_1\times d_0}$, where d_0 is the intermediate dimension and d_1 is the input/output dimension of MLP^l. act(·) denotes a specific activation function, which varies across different LLMs.

2.2 Model Editing in LLMs

2.2.1 Knowledge Storage

According to (Kohonen, 1972; Geva et al., 2021), any linear operation can be viewed as a form of key-value pair storage. Consequently, the second-layer parameters W^l_out in the MLP layer can be interpreted as a linear associative memory module:

$$\underbrace{m^l}_{v} = W^l_{\mathrm{out}}\underbrace{\mathrm{act}\big(W^l_{\mathrm{in}}(a^l + h^{l-1})\big)}_{k} \quad (2)$$

Typically, factual knowledge stored in LLMs can be formalized as a knowledge triple (s, r, o) composed of a subject s, relation r, and object o (Meng et al., 2022, 2023). For example, the fact "Beats Music is owned by Apple." is formalized as s = "Beats Music", r = "is owned by", and o = "Apple". Here, W^l_out associates a key k encoding (s, r) with a value v encoding o. Based on this perspective, editing factual knowledge in LLMs can be achieved by modifying the parameters of W^l_out (hereafter denoted as W). Specifically, each edit operation updates the model parameters by adding a perturbation Δ to W, thereby reconstructing the association between k and v to implement knowledge updates.

2.2.2 Sequential Editing

In practical applications, sequential knowledge updates to the model are often required (Hartvigsen et al., 2023; Wang et al., 2024; Fang et al., 2025). Specifically, consider T batches of new knowledge {S_1, S_2, ..., S_T} to be updated into an LLM, where each S_t contains n new facts, i.e., S_t = {(s^1_t, r^1_t, o^1_t), (s^2_t, r^2_t, o^2_t), ..., (s^n_t, r^n_t, o^n_t)}. Assume that each edit occurs at a timestamp that is a positive integer. Sequential editing involves associating all corresponding new key-value pairs k^i_t-v^i_t (i ∈ {1, 2, ..., n}) in S_t by adding a perturbation Δ(t) to the updated model parameters from the previous timestamp, W(t−1), at each timestamp t ∈ {1, 2, ..., T}. Through this process, the model parameters are sequentially updated.

Formally, for the t-th timestamp (i.e., the t-th edit), we represent the current batch of new knowledge S_t as key-value matrices:

$$K_1(t) = \big[k^1_t \,|\, k^2_t \,|\, \dots \,|\, k^n_t\big] \in \mathbb{R}^{d_0\times n},\quad V_1(t) = \big[v^1_t \,|\, v^2_t \,|\, \dots \,|\, v^n_t\big] \in \mathbb{R}^{d_1\times n} \quad (3)$$

Let W(0) represent the original parameters. Correspondingly, the preserved knowledge in W(0) can be expressed as key-value matrices K_0(0) and V_0(0), hereafter denoted as K_0 and V_0. The mainstream locate-then-edit methods solve for the perturbation Δ(t) by jointly optimizing the following bi-objective loss function:

$$\min_{\Delta(t)}\; EL(t) + PL(t) \quad (4)$$

Here, EL(t) and PL(t) are the editing loss and the preservation loss of the model after editing at timestamp t, respectively, where $EL(t) = \|[W(t-1)+\Delta(t)]K_1(t) - V_1(t)\|_F^2$ and $PL(t) = \|[W(t-1)+\Delta(t)]K_0 - V_0\|_F^2$, with $\|\cdot\|_F^2$ denoting the squared Frobenius norm. The editing loss ensures accurate updates of target knowledge, while the preservation loss preserves the integrity of the to-be-retained knowledge. By applying the normal equations (Johnson et al., 2004), a closed-form solution of formula (4) can be derived.
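For concreteness, here is a sketch of that closed form via the normal equations in NumPy. The residual arrangement below is one standard way to write the minimizer (matrix shapes follow Section 2.2.2); it is our illustrative rendering, not necessarily the exact implementation used by prior work.

# Sketch: closed-form minimizer of EL(t) + PL(t) from problem (4).
# Setting the gradient to zero gives
#   Delta (K1 K1^T + K0 K0^T) = (V1 - W K1) K1^T + (V0 - W K0) K0^T.
import numpy as np

def bi_objective_delta(W, K1, V1, K0, V0):
    A = K1 @ K1.T + K0 @ K0.T                        # Gram matrix of all keys
    B = (V1 - W @ K1) @ K1.T + (V0 - W @ K0) @ K0.T  # residual right-hand side
    return np.linalg.solve(A, B.T).T                 # Delta = B A^{-1}, A symmetric

Using solve() rather than an explicit matrix inverse is the numerically stabler way to evaluate B A^{-1}.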
After obtaining Δ(t), the model parameters are updated as:

$$W(t) = W(t-1) + \Delta(t) \quad (5)$$

By repeating this process at each timestamp t, sequential editing is achieved, enabling the model to progressively incorporate all T batches of new knowledge.

3 The LyapLock Framework

3.1 Constrained Sequential Editing Problem Formulation

As shown in Figure 1, the traditional bi-objective optimization problem (4) leads to continuous accumulation of preservation loss as the number of edit operations increases, eventually resulting in model collapse. Therefore, we reformulate problem (4) as a constrained long-term optimization problem that restricts the preservation loss to within a certain threshold:

$$\min_{\Delta(t)}\; \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T} EL(t),\quad \text{s.t.}\;\; \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T} PL(t) \le D \quad (6)$$

Here, D bounds the average preservation loss over the time horizon, i.e., after t consecutive edits, the average preservation loss over the time interval [1, t] should be constrained within D.

3.2 Problem Transformation Using Lyapunov Optimization Theory

The key challenge in solving problem (6) lies in minimizing the long-term editing loss while keeping the preservation loss below the threshold D, given the highly stochastic and unpredictable (K_1(t), V_1(t)) pairs across timestamps t. To this end, we introduce virtual queues to transform constraint satisfaction into a well-studied queue stability problem. Building on this, by applying Lyapunov optimization from control theory (Neely, 2010), we further decompose the long-term optimization into per-timestamp subproblems that can be solved at each timestamp t. This ensures queue stability during online decision-making without requiring future information or statistical knowledge of the uncertainties. Next, we elaborate on the details of this transformation. We first design a virtual queue Z(t), initialized as Z(1) = Z_init, with its update rule at each timestamp t given by Equation (7):

$$Z(t+1) = \max\big[Z(t) + a(PL(t) - D) + b,\; Z_{\max}\big] \quad (7)$$

where $Z_{\max} \ge 0$, $b \ge 0$, and $a > 0$. Intuitively, the value of Z(t) reflects the deviation between the actual average preservation loss and D over the historical time interval [0, t−1]; an increase in Z(t) corresponds to persistent violation of the constraint. It can be theoretically proven that if the virtual queue satisfies the strong stability condition $\lim_{T\to\infty} Z(T)/T = 0$, the constraint in problem (6) holds (a detailed proof is provided in Appendix B.1).
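A sketch of the queue update in Eq. (7): the queue integrates constraint violations, and its growth rate certifies (or refutes) the long-term constraint. The default parameter choices mirror those given later in Section 3.3.

# Sketch of Eq. (7): Z(t+1) = max[Z(t) + a*(PL(t) - D) + b, Z_max].
import math

def update_queue(Z, PL, D, a=None, b=0.0, Z_max=None):
    """One virtual-queue step; defaults follow the paper's a = 1/sqrt(D),
    b = 0, Z_max = sqrt(D) (Section 3.3)."""
    a = 1.0 / math.sqrt(D) if a is None else a
    Z_max = math.sqrt(D) if Z_max is None else Z_max
    return max(Z + a * (PL - D) + b, Z_max)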
To analyze the stability of the queue, we construct a quadratic Lyapunov function:

$$L(Z(t)) = \tfrac{1}{2}Z(t)^2 \quad (8)$$

where L(Z(t)) represents the congestion level of the virtual queue; a smaller value indicates lower queue backlog and stronger stability. To continuously drive L(Z(t)) toward lower congestion and ensure strong queue stability, we define the one-step conditional Lyapunov drift (Neely, 2010):

$$\Delta(Z(t)) = \mathbb{E}\big\{L(Z(t+1)) - L(Z(t)) \mid Z(t)\big\} \quad (9)$$

Within the Lyapunov optimization framework, seeking the optimal solution of problem (6) is equivalent to minimizing the following expression (10) at each timestamp t:

$$\min_{\Delta(t)}\; V\cdot EL(t) + \Delta(Z(t)) \quad (10)$$

Here, the control parameter V ≥ 0 balances editing performance against queue stability: increasing V approaches the theoretically optimal editing performance but reduces queue stability, while decreasing V enhances constraint satisfaction at the cost of editing performance. Since Δ(Z(t)) contains the max[·] operator, direct optimization of problem (10) is challenging. Therefore, we can instead minimize the upper bound of expression (10) (for the derivation of the upper bound, see Appendix B.3):

$$\min_{\Delta(t)}\; V\cdot EL(t) + aZ(t)PL(t) \quad (11)$$

The original long-term optimization problem (6) is now decomposed into stepwise subproblems at each timestamp t.

3.3 Stepwise Editing with Long-term Guarantees

Having transformed problem (6) into per-timestamp subproblems, we now seek the optimal perturbation. Before doing so, we further refine the formulation following previous work (Fang et al., 2025), which revealed that, to ensure the model does not forget previously edited knowledge, the key-value matrices of the knowledge edited before timestamp t, denoted K_p(t) and V_p(t), should be incorporated into the optimization objective, where K_p(t) and V_p(t) are the matrices [K_1(1) | ... | K_1(t−1)] and [V_1(1) | ... | V_1(t−1)], respectively. That is:

$$\min_{\Delta(t)}\; V\big(EL(t) + BL(t)\big) + aZ(t)PL(t) \quad (12)$$

where BL(t) denotes the editing loss of the model with respect to all knowledge edited prior to timestamp t. The closed-form solution of problem (12) can be derived directly as:

$$\Delta(t) = \Big[V\big(V_1(t) - W(t-1)K_1(t)\big)K_1(t)^{T} + V\big(V_p(t) - W(t-1)K_p(t)\big)K_p(t)^{T} + aZ(t)\big(V_0 - W(t-1)K_0\big)K_0^{T}\Big]\,C(t)^{-1} \quad (13)$$

Here, C(t) is defined as $VK_1(t)K_1(t)^{T} + VK_p(t)K_p(t)^{T} + aZ(t)K_0K_0^{T}$. As can be seen from Equation (13), once K_0, V_0, K_1(t), and V_1(t) are obtained, the perturbation Δ(t) can be calculated. For details on computing these components, refer to Appendix A. We can now compute the perturbation via expression (13) and update the virtual queue according to expression (7) step by step until completing all T sequential editing operations. The detailed optimization procedure is summarized in Algorithm 1.

Regarding the setting of the hyperparameters, we have three considerations: (1) To set an appropriate threshold D for different LLMs, we collect the model's preservation loss after one edit as the baseline D_base and adjust D through different α values, i.e., D = α·D_base, meaning the threshold is set to α times the baseline. (2) Since the parameters a and b in the virtual-queue update formula (7) control the mapping between the preservation loss and the queue value Z(t), with aZ(t) governing the weight of the preservation loss in formulation (12), we set a = 1/√D and b = 0 so that when the model's preservation loss exceeds the threshold D by one fold after an edit, the weight of the preservation loss in (12) doubles. (3) We also set z_init = √D, z_max = √D, and V = 1 to ensure that, when the constraint in equation (7) is not violated, the preservation loss and editing loss in formulation (12) are weighted at a 1:1 ratio.

Algorithm 1: Stepwise Editing with Long-term Guarantees
Initialization: Given the hyperparameter α and base model W(0) = W, compute the preservation keys K_0, preservation values V_0, and baseline preservation loss D_base. Let D = α·D_base, a = 1/√D, b = 0, z_init = √D, z_max = √D, and V = 1.
for time step t = 1, 2, ..., T do
  1) Real-time Optimization: Obtain the current queue state Z(t), model parameters W(t−1), editing keys K_1(t), and target values V_1(t). Minimize the real-time objective by solving the closed-form solution Δ(t) according to Eq. (13), then apply the parameter update.
  2) Queue Update: Update the queue Z(t+1) according to Eq. (7).
end
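Putting Algorithm 1's inner loop into code, here is a NumPy sketch of one editing step: the per-timestamp solve of Eq. (13) followed by the queue update of Eq. (7). This is our illustrative rendering rather than the released implementation; V_ctrl names the control parameter V to avoid clashing with the value matrices.

# Sketch of one LyapLock edit step (Eqs. (13), (5), and (7)).
import numpy as np

def lyaplock_step(W, Z, K1, V1, Kp, Vp, K0, V0, V_ctrl, a, D, b=0.0, Z_max=None):
    """Return updated parameters W(t) and queue value Z(t+1)."""
    C = V_ctrl * (K1 @ K1.T + Kp @ Kp.T) + a * Z * (K0 @ K0.T)
    B = (V_ctrl * ((V1 - W @ K1) @ K1.T + (Vp - W @ Kp) @ Kp.T)
         + a * Z * ((V0 - W @ K0) @ K0.T))
    delta = np.linalg.solve(C, B.T).T              # Delta(t) = B C^{-1}, Eq. (13)
    W_new = W + delta                              # Eq. (5)
    PL = np.linalg.norm(W_new @ K0 - V0) ** 2      # preservation loss after the edit
    Z_max = np.sqrt(D) if Z_max is None else Z_max
    Z_new = max(Z + a * (PL - D) + b, Z_max)       # Eq. (7)
    return W_new, Z_new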
4 Experiment

4.1 Setting

Base LLMs. We selected three representative LLMs commonly used in the field of knowledge editing: GPT2-XL (1.5B) (Radford et al., 2019), GPT-J (6B) (Wang and Komatsuzaki, 2021), and LLaMA3 (8B) (Meta, 2024).

Baseline Methods. For comparison, we chose representative locate-then-edit model editing methods, namely ROME (Meng et al., 2022) and MEMIT (Meng et al., 2023); methods focused on the challenges such approaches face in sequential editing scenarios, namely RECT (Gu et al., 2024), PRUNE (Ma et al., 2025), and AlphaEdit (Fang et al., 2025); and the fine-tuning method FT. For detailed introductions to these methods, see Appendix C.1.

Datasets. We adopted two representative benchmarks in the field of model editing: Counterfact (Meng et al., 2022) and ZsRE (Levy et al., 2017). For introductions to these datasets, see Appendix C.2.

Metrics. Following prior work (Meng et al., 2022, 2023; Fang et al., 2025), we adopt Efficacy (efficiency success) and Generalization (paraphrase success) for evaluating knowledge updating ability; Specificity (neighborhood success) for assessing knowledge preservation ability; and Fluency (generation entropy) and Consistency (reference score) for evaluating generation quality. The specific calculation formulas are provided in Appendix C.3.

4.2 Editing Performance Results

We randomly select 10,000 samples for sequential editing task evaluation (100 edits per batch). Table 1 compares editing performance across the various LLMs, datasets, and baseline methods. The results demonstrate LyapLock's comprehensive superiority in cross-model and cross-dataset scenarios across three dimensions: (1) Knowledge Updating: On Efficacy and Generalization, LyapLock outperforms the second-best method AlphaEdit by average margins of 11.88% and 12.69%, with the gaps expanding to 22.01% and 19.71% on LLaMA3-Counterfact and 29.63% and 27.59% on GPT2-XL-ZsRE, respectively. (2) Knowledge Preservation: For Specificity, LyapLock outperforms the second-best baseline AlphaEdit by an average margin of 6.72%. LyapLock stays closest to the pre-edited performance, especially on the ZsRE dataset, where it drops by only 1.4% on average, validating its effective preservation of original knowledge.

Table 1: Performance results of the sequential editing task (10,000 samples). Eff. (Efficacy), Gen. (Generalization), Spe. (Specificity), Flu. (Fluency), and Consis. (Consistency) denote the respective evaluation metrics; higher is better for all.
Model: LLaMA3
Method     | Counterfact Eff. | Gen.       | Spe.       | Flu.         | Consis.    | ZsRE Eff.  | Gen.       | Spe.
Pre-edited | 7.02±0.26        | 9.44±0.25  | 89.73±0.18 | 635.47±0.11  | 24.24±0.09 | 35.67±0.30 | 34.81±0.30 | 31.83±0.22
FT         | 94.04±0.24       | 84.13±0.31 | 38.15±0.36 | 401.45±0.69  | 21.35±0.12 | 17.79±0.22 | 17.36±0.22 | 6.30±0.11
ROME       | 68.45±0.46       | 61.13±0.38 | 48.30±0.28 | 505.00±0.14  | 3.88±0.02  | 1.14±0.06  | 1.05±0.06  | 0.15±0.02
MEMIT      | 49.42±0.50       | 48.78±0.46 | 51.47±0.44 | 499.28±0.08  | 1.98±0.01  | 0.00±0.00  | 0.00±0.00  | 0.04±0.01
PRUNE      | 50.12±0.50       | 49.20±0.45 | 51.18±0.43 | 509.27±0.08  | 1.81±0.01  | 0.00±0.00  | 0.00±0.00  | 0.04±0.01
RECT       | 54.58±0.50       | 52.01±0.44 | 49.41±0.39 | 176.30±0.25  | 3.07±0.03  | 0.00±0.00  | 0.00±0.00  | 0.00±0.00
AlphaEdit  | 72.60±0.45       | 61.97±0.41 | 52.98±0.33 | 420.84±0.54  | 6.24±0.07  | 91.79±0.17 | 87.16±0.23 | 30.39±0.22
LyapLock   | 94.61±0.23       | 81.68±0.34 | 69.01±0.30 | 617.04±0.24  | 30.70±0.12 | 94.34±0.13 | 90.20±0.20 | 30.74±0.22
Model: GPT-J
Method     | Counterfact Eff. | Gen.       | Spe.       | Flu.         | Consis.    | ZsRE Eff.  | Gen.       | Spe.
Pre-edited | 15.22±0.36       | 17.65±0.33 | 83.50±0.25 | 622.01±0.14  | 29.61±0.10 | 26.45±0.28 | 25.74±0.28 | 27.04±0.26
FT         | 94.56±0.23       | 77.04±0.36 | 40.71±0.37 | 327.71±0.86  | 11.11±0.13 | 61.82±0.35 | 59.24±0.36 | 13.53±0.19
ROME       | 48.71±0.50       | 49.70±0.40 | 52.49±0.30 | 614.77±0.08  | 2.85±0.01  | 17.99±0.31 | 16.50±0.30 | 0.82±0.04
MEMIT      | 51.62±0.50       | 51.05±0.41 | 51.78±0.35 | 553.31±0.17  | 0.64±0.02  | 0.04±0.01  | 0.03±0.01  | 0.03±0.01
PRUNE      | 51.27±0.50       | 50.54±0.40 | 52.60±0.33 | 535.22±0.14  | 1.36±0.03  | 0.03±0.01  | 0.02±0.01  | 0.05±0.01
RECT       | 50.42±0.50       | 49.23±0.45 | 54.82±0.40 | 455.05±0.60  | 2.57±0.05  | 41.89±0.39 | 39.29±0.38 | 20.17±0.23
AlphaEdit  | 89.90±0.30       | 75.41±0.35 | 58.79±0.27 | 347.89±0.52  | 1.71±0.03  | 93.10±0.19 | 85.09±0.28 | 22.88±0.24
LyapLock   | 99.00±0.10       | 88.80±0.27 | 68.21±0.28 | 618.33±0.18  | 40.93±0.12 | 98.77±0.08 | 93.82±0.19 | 25.51±0.25

Model: GPT2-XL
Method     | Counterfact Eff. | Gen.       | Spe.       | Flu.         | Consis.    | ZsRE Eff.  | Gen.       | Spe.
Pre-edited | 21.82±0.41       | 24.16±0.37 | 78.32±0.28 | 626.69±0.12  | 31.34±0.10 | 22.17±0.26 | 21.28±0.26 | 24.20±0.24
FT         | 72.79±0.45       | 55.90±0.43 | 49.23±0.37 | 607.94±0.22  | 13.05±0.05 | 15.28±0.32 | 13.64±0.32 | 1.24±0.06
ROME       | 50.03±0.50       | 49.42±0.41 | 51.49±0.33 | 571.45±0.17  | 1.17±0.01  | 20.51±0.35 | 18.08±0.33 | 1.63±0.07
MEMIT      | 67.73±0.47       | 60.92±0.41 | 56.00±0.33 | 518.00±0.84  | 7.13±0.10  | 1.78±0.19  | 1.62±0.08  | 1.30±0.05
PRUNE      | 60.82±0.49       | 56.47±0.41 | 52.70±0.35 | 602.01±0.15  | 11.53±0.07 | 0.09±0.01  | 0.11±0.02  | 0.47±0.03
RECT       | 84.93±0.36       | 66.45±0.39 | 56.42±0.33 | 542.92±0.75  | 12.23±0.13 | 31.73±0.36 | 28.22±0.34 | 11.82±0.17
AlphaEdit  | 92.42±0.26       | 76.83±0.33 | 56.86±0.29 | 583.27±0.29  | 31.83±0.13 | 55.33±0.42 | 46.90±0.41 | 14.63±0.19
LyapLock   | 94.76±0.22       | 80.51±0.33 | 60.74±0.29 | 577.06±0.42  | 34.29±0.13 | 84.96±0.28 | 74.49±0.35 | 22.63±0.24

(3) Generation Quality: In Fluency and Consistency, LyapLock significantly outperforms the baselines. Specifically, LyapLock's Fluency reaches an average of 604.14, only a 4% drop relative to the pre-edited models, while its Consistency improves by an average of 6.97 over the pre-edited models. These advantages stem from LyapLock's preservation loss control mechanism, which optimally balances knowledge updating and preservation during sequential editing. The case studies in Appendix D.3 illustrate the specific output behavior of the various editing methods. Appendix D.1 also provides editing performance results for sequential editing of 2,000 and 5,000 samples, and Appendix D.2 provides the results of the hyperparameter sensitivity analysis.

4.3 General Capability Tests

We assess the model's general capabilities using six subtasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) (see Appendix C.5 for details), in line with (Fang et al., 2025). During the experiment, we ran a test after every 1,000 sequentially edited samples. The GLUE performance of the LLaMA3 model after completing the sequential editing task on the Counterfact dataset is shown in Figure 3. We found the following: (1) The limitations of baseline methods: Most baseline methods experience a significant drop in general capabilities after sequentially editing 2,000 samples, with performance on almost all tasks approaching zero. Baseline methods focused on the challenges of sequential editing, such as RECT and AlphaEdit, are able to maintain a certain level of performance over more sequential edits, but when the number of sequential edits reaches 10,000 samples, the performance of all baseline methods drops to almost zero.
able to maintain a certain level of performance over more sequential edits, but once the number of sequential edits reaches 10,000 samples, the performance of all baseline methods drops to almost zero. This is consistent with our earlier finding that these methods cannot suppress the cumulative effect of parameter shifts, ultimately leading to performance collapse as the number of sequential edits increases. (2) The stability of LyapLock: LyapLock maintains good general performance across all tasks, even after sequential editing of 10,000 samples.

Figure 3: The F1 scores of the LLaMA3 (8B) model on the GLUE benchmark after sequentially editing 10,000 samples on the CounterFact dataset.

Figure 4: The preservation loss changes after sequentially editing 10,000 samples on different datasets by different LLMs.

Moreover, to further explore the potential of the LyapLock method, we increased the number of tests to 20,000 samples and observed that the method still maintained excellent overall performance across all tasks. This further indicates that constraining the preservation loss can effectively prevent model collapse.

4.4 Preservation Loss Control Analysis

As stated in Section 1, previous solutions dedicated to addressing the challenges of the locate-then-edit method in sequential editing scenarios, such as RECT, PRUNE, and AlphaEdit, have all failed to effectively suppress the accumulation of preservation loss. As the number of edits increases, these methods eventually cause a significant decline, or even collapse, of model performance. In light of this, we further explore whether the LyapLock method can effectively keep the preservation loss within a given threshold range during sequential editing. Figure 4 shows the trend of preservation loss as the number of edits grows under different editing methods. The results indicate that our method maintains the preservation loss stably within the threshold. In contrast, the other methods, although they slow the increase in preservation loss after each edit to some extent, cannot fundamentally prevent its continuous accumulation.

4.5 Compatibility

The method proposed in this study is an improvement upon the traditional single-edit bi-objective optimization approaches within the locate-and-edit paradigm. Therefore, it should exhibit good compatibility with most works that adhere to the locate-and-edit paradigm and can be combined with them to enhance performance. To thoroughly validate this, we selected LLaMA3 as the base model and combined LyapLock with the MEMIT, PRUNE, and RECT methods, conducting sequential editing experiments on 10,000 samples. The experimental results are shown in Figure 5. Specifically, Figure 5(a) shows the improvement in editing performance for each method after integrating LyapLock, while Figure 5(b) illustrates the enhancement in downstream task performance on the GLUE benchmark.

Figure 5: The improvement in editing performance and downstream task performance of other editing methods after incorporating LyapLock, following the sequential editing of 10,000 samples on the CounterFact dataset using the LLaMA3 model.

It can be observed that the average improvement in editing performance is 9.76%, and the average improvement in downstream task performance
is 41.11%. This fully demonstrates the wide applicability of our method: it can be effectively integrated with other methods based on the locate-and-edit paradigm, significantly enhancing their editing performance while also bolstering their ability to maintain general capabilities. For more compatibility results on additional base models, please refer to Appendix D.4.

5 Related Works

Parameter-Preserving Model Editing. Parameter-preserving model editing methods fall primarily into two categories. The first category updates knowledge using additional modules. For example, SERAC (Mitchell et al., 2022b) employs an external explicit memory and a small auxiliary model, CALINET (Dong et al., 2022) and T-Patcher (Huang et al., 2023) utilize additional neurons, GRACE (Hartvigsen et al., 2023) adopts codebooks, MELO (Yu et al., 2024) leverages LoRA modules, and WISE (Wang et al., 2024) uses a side memory module. The second category employs contextual prompts to guide model knowledge updates, such as MemPrompt (Madaan et al., 2022) and IKE (Zheng et al., 2023).

Parameter-Modifying Model Editing. Parameter-modifying model editing methods mainly fall into two classes. The first class adopts meta-learning to predict parameter updates via a trained hypernetwork, including KE (Cao et al., 2021), MEND (Mitchell et al., 2022a), MALMEN (Tan et al., 2024), and InstructEdit (Zhang et al., 2024). The second class focuses on locate-then-edit strategies, where activation values or parameter subsets associated with target knowledge are precisely identified using gradient-based or causal tracing methods, followed by targeted editing. Examples include KN (Dai et al., 2022), ROME (Meng et al., 2022), and MEMIT (Meng et al., 2023). Additionally, some studies address model collapse in sequential editing scenarios: RECT (Gu et al., 2024) employs regularized weight updates, PRUNE (Ma et al., 2025) controls condition numbers, and AlphaEdit (Fang et al., 2025) applies null-space projection.

6 Conclusion

In this work, we propose LyapLock, which reformulates the traditional bi-objective optimization as a constrained long-term optimization problem for sequential editing, addressing the long-term accumulation of preservation loss that existing methods suffer as the number of edits increases. Using Lyapunov optimization, we convert the long-term problem into online solvable subproblems, achieving asymptotically near-optimal editing performance while satisfying the preservation loss constraint. Experiments on multiple LLMs show that LyapLock significantly outperforms existing methods.

Limitations

Despite its excellent editing performance and effective maintenance of general model capabilities in sequential editing tasks, the LyapLock method still has room for improvement. First, the current dataset size for evaluating editing performance is capped at around 20,000 samples. Testing of general model capabilities has only been conducted after 20,000 edits, with no signs of model collapse. Although the method is theoretically proven to constrain the loss in long-term editing, larger-scale datasets are needed to further validate its practical effectiveness. Second, tests of general model capabilities mainly focus on language understanding, while areas such as code generation and mathematical reasoning remain under-tested. Future work should expand the testing scope.

Ethics Considerations

All code and datasets in this paper are from publicly available resources.
The application of such technologies must follow ethical principles. The widespread use of large language models brings convenience but also raises ethical concerns. Malicious users could exploit these models
to generate and spread hate speech, false information, or harmful content, threatening social harmony and stability. Thus, it is crucial and urgent to implement effective safeguards to prevent misuse and mitigate potential harm. Therefore, we strongly advocate that researchers implement rigorous validation and oversight measures to ensure the ethical application of these technologies.

References

Luisa Bentivogli, Bernardo Magnini, Ido Dagan, Hoa Trang Dang, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC. NIST.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In EMNLP (1), pages 6491–6506. Association for Computational Linguistics.

Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8493–8502. Association for Computational Linguistics.

William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP. Asian Federation of Natural Language Processing.

Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, and Lei Li. 2022. Calibrating factual knowledge in pretrained language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5937–5947. Association for Computational Linguistics.

Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Jie Shi, Xiang Wang, Xiangnan He, and Tat-Seng Chua. 2025. AlphaEdit: Null-space constrained knowledge editing for language models. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 5484–5495. Association for Computational Linguistics.

Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, and Nanyun Peng. 2024. Model editing harms general abilities of large language models: Regularization to the rescue. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 16801–16819. Association for Computational Linguistics.

Akshat Gupta, Sidharth Baskaran, and Gopala Anumanchipalli. 2024. Rebuilding ROME: Resolving model collapse during sequential model editing.

Anshita Gupta, Debanjan Mondal, Akshay Krishna Sheshadri, Wenlong Zhao, Xiang Li, Sarah Wiegreffe, and Niket Tandon. 2023. Editing common sense in transformers.
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language
Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 8214–8232. Association for Computational Linguistics.

Tom Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, and Marzyeh Ghassemi. 2023. Aging with GRACE: Lifelong model editing with discrete key-value adaptors. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In ICLR. OpenReview.net.

Xiusheng Huang, Hang Yang, Yubo Chen, Jun Zhao, Kang Liu, Weijian Sun, and Zuyu Zhao. 2022. Document-level relation extraction via pair-aware and entity-enhanced representation learning. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 2418–2428. International Committee on Computational Linguistics.

Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2023. Transformer-Patcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Lee W. Johnson, R. Dean Riess, and Jimmy T. Arnold. 2004. Introduction to Linear Algebra, 2nd ed.

Teuvo Kohonen. 1972. Correlation matrix memories. IEEE Trans. Computers, 21(4):353–359.

Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017, pages 333–342. Association for Computational Linguistics.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437.

Jun-Yu Ma, Hong Wang, Hao-Xiang Xu, Zhen-Hua Ling, and Jia-Chen Gu. 2025. Perturbation-restrained sequential model editing. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. 2022. Memory-assisted prompt editing to improve GPT-3 after deployment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 2833–2861. Association for Computational Linguistics.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372.

Kevin Meng, Arnab Sen Sharma, Alex J. Andonian, Yonatan Belinkov, and David Bau. 2023. Mass-editing memory in a transformer. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Meta. 2024. Llama 3. Large language model release.

Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2022a. Fast model editing at scale. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.

Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, and Chelsea Finn.
2022b. Memory-based model editing at scale.
In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 15817–15831. PMLR.

Michael J. Neely. 2010. Stochastic Network Optimization with Application to Communication and Queueing Systems. Synthesis Lectures on Communication Networks. Morgan & Claypool Publishers.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and 1 others. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. ACL.

Chenmien Tan, Ge Zhang, and Jie Fu. 2024. Massive editing for large language models via meta learning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR (Poster). OpenReview.net.

Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 billion parameter autoregressive language model.

Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, and Huajun Chen. 2024. WISE: Rethinking the knowledge memory for lifelong model editing of large language models. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.

Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, and Jundong Li. 2025. Knowledge editing for large language models: A survey. ACM Comput. Surv., 57(3):59:1–59:37.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Trans. Assoc. Comput. Linguistics, 7:625–641.

Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, pages 1112–1122. Association for Computational Linguistics.

Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, and Ningyu Zhang. 2023. Editing large language models: Problems, methods, and opportunities. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 10222–10240. Association for Computational Linguistics.

Lang Yu, Qin Chen, Jie Zhou, and Liang He. 2024. MELO: Enhancing model editing with neuron-indexed dynamic LoRA. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 19449–19457. AAAI Press.

Ningyu Zhang, Bozhong Tian, Siyuan Cheng, Xiaozhuan Liang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, and Huajun Chen. 2024. InstructEdit: Instruction-based knowledge editing for large language models. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, Jeju, South Korea, August 3-9, 2024, pages 6633–6641. ijcai.org.

Ce Zheng, Lei Li, Qingxiu Dong,
Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. 2023. Can we edit factual knowledge by in-context learning? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 4862–4876. Association for Computational Linguistics.

A Model Editing

The goal of model editing is to efficiently and accurately update factual knowledge. Specifically, for incorrect or outdated factual knowledge $(s, r, o)$ in LLMs, model editing methods can replace it with updated knowledge $(s, r, o_c)$. For example, when the natural language sentence "Beats Music is owned by", composed of $s$ = "Beats Music" and $r$ = "is owned by", is input into the model, model editing modifies the model's output from the incorrect $o$ = "Google" to the correct $o_c$ = "Apple".

Currently, model editing methods based on the locate-then-edit paradigm, such as ROME and MEMIT, have become mainstream due to their excellent editing performance. These methods consist of two key steps: (1) identifying the critical parameter subset $W$ associated with the target knowledge via causal tracing analysis, and (2) updating the target knowledge within the parameter space by computing and applying appropriate perturbations $\Delta W$.

A.1 Causal Tracing

Causal tracing is an analytical method designed to determine the causal influence of the internal hidden-state activations within LLMs on the prediction of specific facts. Its essence lies in quantifying and understanding which internal state variables play a key role when the model processes specific information. The specific steps are as follows:

(1) Clean run. A factual prompt (e.g., "Space Needle is located in the city of") is input into the LLM, and the state activations of all hidden layers are collected as the clean state.

(2) Corrupted run. The embedding vector of the subject (e.g., "Space Needle") is corrupted with noise, and the model then continues to run. Having lost some information about the subject, the model may give an incorrect answer.

(3) Corrupted-with-restoration run. The clean state is restored at specific tokens and layers, while the other corrupted embedding vectors remain unchanged. This tests whether restoring a single state suffices to recover the prediction.

By comparing the results of the above three runs, the Total Effect (TE) and Indirect Effect (IE) of each hidden state variable on the correct prediction of facts are calculated. TE is the difference in prediction probability between the clean run and the corrupted run, while IE is the difference in prediction probability between the corrupted run and the corrupted-with-restoration run. By analyzing the Average Indirect Effect (AIE), researchers have found that the MLP module plays a key role in storing and recalling factual associations. Specifically, the MLP modules in the middle layers of the model are identified as the primary storage area for factual knowledge, and they play an especially critical role when processing the last token of the subject.
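To make the three-run protocol concrete, the following minimal sketch computes TE and IE from the probability the model assigns to the correct answer under each run. The probability values and function names here are invented for illustration; they are not part of any published tracing implementation.

```python
# Illustrative sketch of the causal-tracing effect measures (TE/IE),
# assuming we already have the probability of the correct answer
# under the three runs described above. All numbers are made up.

def total_effect(p_clean: float, p_corrupted: float) -> float:
    # TE: how much corrupting the subject embedding hurts the prediction.
    return p_clean - p_corrupted

def indirect_effect(p_corrupted: float, p_restored: float) -> float:
    # IE: how much restoring ONE hidden state (at a given token/layer)
    # recovers the prediction, relative to the fully corrupted run.
    return p_restored - p_corrupted

# Hypothetical example: P("Seattle") for "Space Needle is located in the city of"
p_clean = 0.82      # clean run
p_corrupted = 0.05  # subject embedding corrupted with noise
p_restored = 0.61   # corrupted run + one mid-layer MLP state restored

print(total_effect(p_clean, p_corrupted))        # 0.77
print(indirect_effect(p_corrupted, p_restored))  # 0.56
# Averaging IE over many prompts (the AIE) is what localizes factual
# recall to the mid-layer MLPs at the subject's last token.
```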
A.2 Computing Perturbations

Based on the results of causal tracing localization, subsequent modifications are made to the parameters of the second layer of the MLP module in the model's intermediate layers, $W^l_{out}$, to achieve knowledge updating. Specifically, model editing updates the knowledge $(s, r, o)$ in LLMs to $(s, r, o_c)$. This process can be understood as remapping the key $k$ that encodes $(s, r)$ from its original mapping, the value $v$ that encodes $o$, to the value $v_c$ that encodes $o_c$. Formally:

$$k = \mathrm{act}\left(W^l_{in}(a^l + h^{l-1})\right), \qquad v = W^l_{out}\,k. \tag{14}$$

To achieve this goal, (Meng et al., 2023) optimized a dual-objective loss function (Equation 15) to compute the perturbation (Equation 16). Once $K_0$, $K_1$, and $V_1$ are obtained, the specific perturbation values can be calculated. Here, $K_1$ and $V_1$ are matrices whose columns are the keys $k$ and values $v_c$ of all new knowledge in the current editing batch, respectively.

$$\min_{\Delta}\; \|(W+\Delta)K_1 - V_1\|_F^2 + \|(W+\Delta)K_0 - V_0\|_F^2. \tag{15}$$

$$\Delta = (V_1 - WK_1)\,K_1^\top \left(K_0K_0^\top + K_1K_1^\top\right)^{-1}. \tag{16}$$

Obtaining $K_0$. (Meng et al., 2022, 2023) randomly sampled a large number of articles from Wikipedia snapshots and input the full text of each article into the model. During the model's processing, they collected the MLP activation vectors corresponding to each token. Eventually, they collected 100,000 $k$-vector samples from these articles to form the $K_0$ matrix. Additionally, $V_0 = WK_0$.

Obtaining $K_1$. The $K_1$ matrix is composed of all $k$ from a single editing batch. Based on the findings from the localization phase, (Meng et al., 2023) used the input at the last token of the subject as the key. Each $k$ is computed as follows: input text containing the subject $s$ into the model, and at the target layer $l$ and the position of the last token of the subject, extract the activation values of the second layer of the MLP, as shown in Equation 17.

$$k = \frac{1}{N}\sum_{j=1}^{N} k(x_j + s), \qquad k(x) = \mathrm{act}\left(W^l_{in}(a^l + h^{l-1})\right). \tag{17}$$

Here, $x_j$ is a randomly generated prefix text, and $N$ is the number of prefix texts. By extracting activation values under multiple random contexts and averaging them, one obtains the key vector $k$ that represents the subject, which is used to locate subject-related factual information in the middle-layer MLP module of the model.

Obtaining $V_1$. The $V_1$ matrix is composed of all $v_c$ from a single editing batch. For each $v_c$, (Meng et al., 2023) optimized Equation 18 to solve for the optimal vector $v_c$ that precisely encodes the target knowledge $o_c$:

$$v_c = \arg\min_{z}\; \frac{1}{N}\sum_{j=1}^{N} -\log P_{W^l_{out}(v=z)}\left[o_c \mid x_j + p\right] + D_{KL}\left(P_{W^l_{out}(v=z)}\left[x \mid p'\right] \,\big\|\, P_{W^l_{out}}\left[x \mid p'\right]\right). \tag{18}$$

Here, $W^l_{out}(v=z)$ denotes the intervened model in which the MLP output at the $l$-th layer and the position of the last token of the subject is replaced by the vector $z$; $x_j + p$ represents the input formed by concatenating the randomly generated prefix text $x_j$ with the factual prompt template $p$. This optimization replaces the original $v$ with $v_c$, maximizing the probability of the target word $o_c$ while preventing semantic drift through the KL divergence.
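Once $K_0$, $K_1$, and $V_1$ are in hand, the edit itself is a single closed-form linear solve. The NumPy sketch below illustrates Equation 16 on random stand-in matrices; the dimensions, data, and variable names are ours for demonstration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the closed-form update in Equation 16:
#   Delta = (V1 - W @ K1) @ K1.T @ inv(K0 @ K0.T + K1 @ K1.T)
# Keys are columns of K0/K1; target values are columns of V1.
rng = np.random.default_rng(0)
d_in, d_out, n_preserved, n_new = 64, 32, 100, 8

W = rng.normal(size=(d_out, d_in))         # weight being edited
K0 = rng.normal(size=(d_in, n_preserved))  # keys of preserved knowledge
K1 = rng.normal(size=(d_in, n_new))        # keys of the new edits
V1 = rng.normal(size=(d_out, n_new))       # target values for the new edits

# As noted below, in practice only K0 @ K0.T is stored, so we form it directly.
C = K0 @ K0.T + K1 @ K1.T
Delta = (V1 - W @ K1) @ K1.T @ np.linalg.inv(C)

print(np.linalg.norm(W @ K1 - V1))            # residual on new keys before edit
print(np.linalg.norm((W + Delta) @ K1 - V1))  # smaller residual after edit
# The residual shrinks but does not vanish: Equation 15 simultaneously
# penalizes changing the mapping on the preserved keys K0.
```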
Additionally, since the matrices $K_0$ and $V_0$ are particularly large, storing them separately occupies a significant amount of space. Moreover, they typically appear in the form of $K_0K_0^\top$ and $V_0K_0^\top$ in the computational formulas. Therefore, it is common practice to store only $K_0K_0^\top$ and $V_0K_0^\top$. For more details, refer to (Meng et al., 2023).

B Proof

B.1 Proof of Sufficient Condition

Here, we prove that a sufficient condition for the constraint to always hold is the strong stability of the virtual queue $Z(t)$. From the update formula of the virtual queue (Equation 7), we have:

$$Z(t+1) \geq Z(t) + a\left(PL(t) - D\right) + b. \tag{19}$$

Listing the above inequality for the timestamps $t \in \{1, \dots, T\}$:

$$\begin{aligned} Z(T+1) &\geq Z(T) + a(PL(T) - D) + b, \\ Z(T) &\geq Z(T-1) + a(PL(T-1) - D) + b, \\ Z(T-1) &\geq Z(T-2) + a(PL(T-2) - D) + b, \\ &\;\;\vdots \\ Z(2) &\geq Z(1) + a(PL(1) - D) + b. \end{aligned} \tag{20}$$

Summing all the inequalities, we obtain:

$$Z(T+1) \geq Z(1) + a\sum_{t=1}^{T} PL(t) - aTD + Tb. \tag{21}$$

Dividing both sides by $aT$ and taking the limit ($a > 0$; $b,\, Z(1) = Z_{init} \geq 0$):

$$\begin{aligned} \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} PL(t) &\leq \lim_{T\to\infty} \frac{Z(T+1)}{aT} - \lim_{T\to\infty} \frac{Z(1)}{aT} + \lim_{T\to\infty} \left(D - \frac{b}{a}\right) \\ &= \lim_{T\to\infty} \frac{Z(T+1)}{aT} + D - \frac{b}{a} \\ &\leq \lim_{T\to\infty} \frac{Z(T+1)}{aT} + D. \end{aligned} \tag{22}$$

When $\lim_{T\to\infty} \frac{Z(T+1)}{aT} = 0$, that is, $\lim_{T\to\infty} \frac{Z(T)}{T} = 0$ (by the proof in Section B.2), we have $\lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} PL(t) \leq D$. This can be equivalently written as the constraint condition in Problem 6:

$$\limsup_{T\to\infty}\; \frac{1}{T}\sum_{t=1}^{T} PL(t) \leq D. \tag{23}$$

B.2 Proof of Equivalence

Let $S = T+1$. Then, as $T \to \infty$, we have $S \to \infty$. Therefore:

$$\lim_{T\to\infty} \frac{Z(T+1)}{aT} = \lim_{S\to\infty} \frac{Z(S)}{a(S-1)} = \lim_{S\to\infty} \frac{Z(S)}{S} \cdot \frac{S}{a(S-1)} = 0. \tag{24}$$

As $S \to \infty$, $\frac{S}{a(S-1)} \to \frac{1}{a} > 0$, so we obtain:

$$\lim_{S\to\infty} \frac{Z(S)}{S} = 0. \tag{25}$$

That is:

$$\lim_{T\to\infty} \frac{Z(T)}{T} = 0. \tag{26}$$

B.3 Upper Bound Derivation

It is known that the following inequality holds for all $a, b, c, Z_{max} \geq 0$ (the proof is given in Section B.4):

$$\left(\max[a+b-c,\, Z_{max}]\right)^2 \leq a^2 + b^2 + c^2 + 2a(b-c) + Z_{max}^2. \tag{27}$$

From the virtual queue update formula (Equation 7), we can obtain:

$$\begin{aligned} Z(t+1)^2 &= \left(\max[Z(t) + a(PL(t)-D) + b,\, Z_{max}]\right)^2 \\ &= \left(\max[Z(t) + (aPL(t)+b) - aD,\, Z_{max}]\right)^2 \\ &\leq Z(t)^2 + (aPL(t)+b)^2 + (aD)^2 + 2Z(t)\left(aPL(t)+b-aD\right) + Z_{max}^2. \end{aligned} \tag{28}$$

Multiplying both sides by $\frac{1}{2}$, we obtain:

$$\frac{1}{2}Z(t+1)^2 - \frac{1}{2}Z(t)^2 \leq \frac{1}{2}(aPL(t)+b)^2 + \frac{1}{2}(aD)^2 + \frac{1}{2}Z_{max}^2 + Z(t)\left(aPL(t)+b-aD\right). \tag{29}$$

From the one-step conditional Lyapunov drift (Equation 9), we have:

$$\Delta(Z(t)) \leq \frac{1}{2}(aPL(t)+b)^2 + \frac{1}{2}(aD)^2 + \frac{1}{2}Z_{max}^2 + \mathbb{E}\left\{Z(t)\left(aPL(t)+b-aD\right) \,\middle|\, Z(t)\right\}. \tag{30}$$

Assuming there exists $D_{max} = \max_t \{PL(t)\}$ and defining $B \triangleq \frac{1}{2}\left((aD_{max}+b)^2 + (aD)^2 + Z_{max}^2\right)$, the above inequality simplifies to:

$$\Delta(Z(t)) \leq B + \mathbb{E}\left\{Z(t)\left(aPL(t)+b-aD\right) \,\middle|\, Z(t)\right\}. \tag{31}$$

Adding the editing loss to both sides, we get:

$$\Delta(Z(t)) + V \cdot EL(t) \leq \mathbb{E}\left\{Z(t)\left(aPL(t)+b-aD\right) + V \cdot EL(t) \,\middle|\, Z(t)\right\} + B. \tag{32}$$

Since $B$ is a constant, minimizing the upper bound is equivalent to minimizing the second term on the right-hand side of the inequality, that is:

$$\min_{\Delta(t)}\; Z(t)\left(aPL(t)+b-aD\right) + V \cdot EL(t). \tag{33}$$

Removing the constants that are irrelevant to the optimization variable $\Delta(t)$, we obtain:

$$\min_{\Delta(t)}\; aZ(t)\,PL(t) + V \cdot EL(t). \tag{34}$$

B.4 Proof of the Inequality

(1) When $a+b-c > Z_{max}$ ($\forall a, b, c, Z_{max} \geq 0$):

$$\begin{aligned} \left(\max[a+b-c,\, Z_{max}]\right)^2 &= (a+b-c)^2 \\ &= a^2+b^2+c^2+2ab-2ac-2bc \\ &\leq a^2+b^2+c^2+2a(b-c) \\ &\leq a^2+b^2+c^2+2a(b-c)+Z_{max}^2. \end{aligned} \tag{35}$$

(2) When $a+b-c \leq Z_{max}$ ($\forall a, b, c, Z_{max} \geq 0$), it is necessary to prove:

$$\left(\max[a+b-c,\, Z_{max}]\right)^2 = Z_{max}^2 \leq a^2+b^2+c^2+2a(b-c)+Z_{max}^2. \tag{36}$$

That is:

$$a^2+b^2+c^2+2a(b-c) \geq 0. \tag{37}$$

It is known that:

$$(a+b-c)^2 = a^2+b^2+c^2+2ab-2ac-2bc \geq 0. \tag{38}$$

Therefore, we have:

$$a^2+b^2+c^2+2a(b-c) \geq 2bc \geq 0. \tag{39}$$
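As a numerical illustration of the mechanism these proofs support, the toy loop below alternates the virtual-queue update (cf. Equation 19) with the per-step drift-plus-penalty objective of Equation 34, and checks that the time-average preservation loss settles below the threshold $D$. The action set and the editing-loss trade-off are entirely synthetic assumptions for demonstration; this is not the paper's editing experiment.

```python
import numpy as np

# Toy sketch of the Lyapunov drift-plus-penalty control loop. At each step we
# pick an action trading editing loss EL against preservation loss PL by
# minimizing a*Z(t)*PL + V*EL (Equation 34), then update the virtual queue.
a, b, V, D, Z_floor = 1.0, 0.1, 10.0, 1.0, 0.0
candidates = np.linspace(0.0, 2.0, 21)  # hypothetical per-step PL choices
edit_loss = lambda pl: 2.0 - pl         # assumed trade-off: more PL, less EL

Z, pl_history = Z_floor, []
for t in range(10_000):
    # Online subproblem: a large queue backlog Z makes low-PL actions cheaper.
    obj = a * Z * candidates + V * edit_loss(candidates)
    pl = candidates[np.argmin(obj)]
    pl_history.append(pl)
    # Virtual-queue update: Z grows whenever PL exceeds the threshold D.
    Z = max(Z + a * (pl - D) + b, Z_floor)

print(np.mean(pl_history))  # time-average PL settles below D = 1.0
```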
C Experimental Setup

C.1 Baseline Methods

Here, we introduce the baseline methods used in this paper:

FT. FT is a parameter-efficient model adjustment strategy that selectively updates parameters in specific layers of the model using a cross-entropy loss function. This achieves precise local optimization while keeping the rest of the model unchanged.

ROME. ROME employs causal tracing analysis to identify the key middle-layer MLP modules in the model where factual associations are stored. It then inserts new key-value pairs into these modules to update the model's memory of specific facts. Specifically, the key is determined by the hidden state of the subject's last token, while the value is obtained by optimizing the prediction probability of the target object.

MEMIT. MEMIT is a scalable multi-layer updating algorithm built on ROME. It efficiently integrates new memories into LLMs by explicitly computing parameter updates, achieving large-scale memory editing while maintaining the integrity of the model.

PRUNE. PRUNE is a framework designed to restrict the perturbations to LLMs during sequential editing, addressing the significant degradation of general abilities caused by existing editing methods after multiple edits. Its theoretical analysis, based on matrix perturbation theory, reveals that the condition number of the edited matrix is a crucial factor affecting general abilities. This condition number increases with the number of edits, exacerbating the perturbation of original knowledge associations. PRUNE mitigates this issue by restraining the large singular values of the edit update matrix, thereby reducing the condition number and preserving the general abilities of the edited models.

RECT. RECT is a regularization method that prevents overfitting by limiting the complexity of the edit update weights. Specifically, RECT identifies the most important editing information (the top-k% of elements) based on the relative change in weights, retains their original values, and sets the remaining elements to zero. This effectively mitigates the negative impact of sequential edits on general abilities.

AlphaEdit. The core of AlphaEdit lies in projecting the parameter perturbation onto the null space of the preserved knowledge, thereby ensuring that the model's output on the original knowledge remains unchanged during editing. Specifically, AlphaEdit first computes the null space of the preserved knowledge matrix using Singular Value Decomposition (SVD) and defines a projection matrix. During editing, it projects the perturbation into this null space and then applies the projected perturbation to the model parameters. This not only avoids interference with the preserved knowledge but also simplifies the editing objective by removing the error term related to the preserved knowledge, allowing the model to focus on updating the knowledge.
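To illustrate the null-space mechanism just described, the sketch below builds a projector from an SVD of $K_0K_0^\top$ and verifies that a projected perturbation leaves the preserved keys untouched. It uses random stand-in matrices and is only a minimal sketch of the idea, not AlphaEdit's released code.

```python
import numpy as np

# Sketch of null-space projection for knowledge preservation (illustrative).
# We project a raw perturbation Delta onto the null space of the preserved
# keys K0, so that (Delta_proj @ K0) is (numerically) zero.
rng = np.random.default_rng(0)
d_in, d_out, n_preserved = 64, 32, 20  # n_preserved < d_in, so a null space exists

K0 = rng.normal(size=(d_in, n_preserved))
Delta = rng.normal(size=(d_out, d_in))  # some raw perturbation

# Singular vectors of K0 @ K0.T with (near-)zero singular values span the
# null space of K0.T, i.e., directions orthogonal to every preserved key.
U, S, _ = np.linalg.svd(K0 @ K0.T)
null_basis = U[:, S < 1e-10 * S.max()]  # columns spanning the null space
P = null_basis @ null_basis.T           # projection matrix onto that space

Delta_proj = Delta @ P
print(np.linalg.norm(Delta @ K0))       # nonzero before projection
print(np.linalg.norm(Delta_proj @ K0))  # ~0: preserved keys are unaffected
```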
C.2 Datasets

ZsRE Dataset. ZsRE is a high-quality question-answering dataset specifically designed to evaluate the model editing and zero-shot relation extraction capabilities of natural language processing (NLP) models; it contains 193,196 training samples and 19,086 test samples. It employs back-translation to generate paraphrased versions of questions, thereby constructing equivalent neighborhood samples. Each sample includes a subject term $s$ and a target object $o$ to be modified, as well as semantically similar and dissimilar sentences. These features enable effective assessment of a model's generalization ability and specificity. As a result, the ZsRE dataset is widely used to test model editing methods and has become an important benchmark in the field of natural language processing.

CounterFact Dataset. CounterFact focuses on evaluating the knowledge editing and factual knowledge understanding capabilities of NLP models; it is likewise a high-quality dataset, containing 20,877 samples. It constructs counterfactual knowledge by replacing the subject entity with an approximate subject entity that shares the same predicate, making it more challenging than ZsRE. In addition to covering similar evaluation metrics, the CounterFact dataset introduces indicators for the fluency and consistency of generated text, further enriching the dimensions for assessing model performance.

C.3 Metrics

Given a language model $f_\theta$ and an edit instance comprising a factual prompt $(s_i, r_i)$, a target output $o_i$, and the model's original prediction $o_i^c$, we now detail the calculation of the evaluation metrics.

C.3.1 Metrics of ZsRE

Following previous works (Meng et al., 2022, 2023; Fang et al., 2025), this section formalizes the evaluation criteria for the ZsRE metrics along three dimensions:

• Efficacy: Quantified by averaging the top-1 prediction accuracy across edited samples, this metric verifies successful knowledge integration:

$$\mathbb{E}_i\left[\, o_i = \arg\max_o P_{f_\theta}\left(o \mid (s_i, r_i)\right) \,\right]. \tag{40}$$

• Generalization: Assesses the model's capability to maintain accuracy when presented with semantically equivalent variations $N((s_i, r_i))$:

$$\mathbb{E}_i\left[\, o_i = \arg\max_o P_{f_\theta}\left(o \mid N((s_i, r_i))\right) \,\right]. \tag{41}$$

• Specificity: Evaluates preservation of original behavior on unrelated samples $O((s_i, r_i))$ by measuring consistency with pre-edit predictions:

$$\mathbb{E}_i\left[\, o_i^c = \arg\max_o P_{f_\theta}\left(o \mid O((s_i, r_i))\right) \,\right]. \tag{42}$$

C.3.2 Metrics of CounterFact

Following previous works (Meng et al., 2022, 2023; Fang et al., 2025), this subsection formalizes the evaluation framework for the CounterFact metrics along five dimensions:

• Efficacy (Editing Success): Measures the success rate of integrating new knowledge by comparing the probability of the target output $o_i$ against the original prediction $o_i^c$ under the factual prompt:

$$\mathbb{E}_i\left[ P_{f_\theta}\left(o_i \mid (s_i, r_i)\right) > P_{f_\theta}\left(o_i^c \mid (s_i, r_i)\right) \right]. \tag{43}$$

• Generalization (Paraphrase Robustness): Evaluates robustness to paraphrased variants $N((s_i, r_i))$ by comparing output probabilities across rephrased prompts:

$$\mathbb{E}_i\left[ P_{f_\theta}\left(o_i \mid N((s_i, r_i))\right) > P_{f_\theta}\left(o_i^c \mid N((s_i, r_i))\right) \right]. \tag{44}$$

• Specificity (Neighborhood Preservation): Assesses minimal interference on related but distinct subject prompts $O((s_i, r_i))$, ensuring the original predictions remain dominant:

$$\mathbb{E}_i\left[ P_{f_\theta}\left(o_i^c \mid O((s_i, r_i))\right) > P_{f_\theta}\left(o_i \mid O((s_i, r_i))\right) \right]. \tag{45}$$

• Fluency (Repetition Control): Quantifies output repetitiveness via the entropy of the bi-gram ($g_2$) and tri-gram ($g_3$) distributions:

$$-\frac{2}{3}\sum_k g_2(k)\log_2 g_2(k) + \frac{4}{3}\sum_k g_3(k)\log_2 g_3(k), \tag{46}$$

where $g_n(\cdot)$ denotes the normalized frequency of $n$-grams.

• Consistency (Reference Alignment): Evaluates the semantic alignment between model-generated text and reference content by computing the cosine similarity of their TF-IDF vectors for subject $s$ and object $o$:

$$\mathrm{sim}_{\text{TF-IDF}}\left(f_\theta(s),\, \mathrm{Ref}(o)\right). \tag{47}$$
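For concreteness, here is a minimal sketch of the Fluency metric, reading Equation 46 as a weighted combination of bi-gram and tri-gram entropies. The token examples are invented, and the exact weighting convention is taken from the equation above rather than from any released evaluation code.

```python
from collections import Counter
import math

def ngram_entropy(tokens, n):
    # H_n = -sum_k g_n(k) * log2 g_n(k), with g_n the normalized n-gram frequency.
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * math.log2(c / total) for c in grams.values())

def fluency(tokens):
    # Weighted bi-/tri-gram entropy per Equation 46, read as
    # (2/3) * H_2 + (4/3) * H_3. Repetitive text yields a lower score.
    return (2 / 3) * ngram_entropy(tokens, 2) + (4 / 3) * ngram_entropy(tokens, 3)

print(fluency("the cat sat on the mat while the dog slept".split()))
print(fluency(("the final answer is 46 " * 8).split()))  # repetition -> lower
```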
C.4 Implementation Details

In this work, all experiments are conducted on a single A100 (80GB) GPU. The hyperparameter configurations for LLaMA3 are based on AlphaEdit, while those for GPT2-XL and GPT-J are adapted from MEMIT. Specifically, for LLaMA3 the editing layers are set to [4, 5, 6, 7, 8]; for GPT2-XL, [13, 14, 15, 16, 17]; and for GPT-J, [3, 4, 5, 6, 7, 8]. For all models, the hyperparameter $\alpha$ is uniformly set to 60, meaning that the threshold $D$ is configured to be 60 times the baseline value $D_{base}$. Our method takes an average of 2.06 seconds to edit a sample on LLaMA3, 1.78 seconds on GPT-J, and 0.50 seconds on GPT2-XL.
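For reference, these settings can be collected into a small configuration mapping. The dictionary layout below is purely illustrative; only the values come from the text above.

```python
# Per-model editing settings as reported in C.4 (layout is our own sketch).
EDIT_CONFIG = {
    "LLaMA3":  {"edit_layers": [4, 5, 6, 7, 8],      "alpha": 60},
    "GPT2-XL": {"edit_layers": [13, 14, 15, 16, 17], "alpha": 60},
    "GPT-J":   {"edit_layers": [3, 4, 5, 6, 7, 8],   "alpha": 60},
}

def threshold_D(D_base: float, alpha: int = 60) -> float:
    # The preservation-loss threshold D is alpha times the baseline D_base.
    return alpha * D_base
```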
C.5 Details of GLUE

GLUE is a comprehensive benchmark; this paper selects the following six subtasks:

CoLA (Warstadt et al., 2019) evaluates grammatical acceptability through binary classification of single-sentence judgments.

MMLU (Hendrycks et al., 2021) measures multi-task accuracy across diverse domains, specifically targeting zero-shot and few-shot learning scenarios in text models.

NLI (Williams et al., 2018) assesses natural language understanding by requiring models to identify logical relationships (entailment, contradiction, neutral) between sentence pairs.

MRPC (Dolan and Brockett, 2005) serves as a benchmark for semantic equivalence detection, where models must determine whether sentence pairs convey identical meanings.

SST (Socher et al., 2013) focuses on sentiment classification of movie review sentences, assigning binary sentiment labels based on human annotations.

RTE (Bentivogli et al., 2009) examines textual entailment by determining whether a premise sentence logically supports a given hypothesis.

D More Experimental Results

D.1 Editing Performance for Other Numbers of Edits

Tables 2 and 3 show the editing performance of the various editing methods when sequentially editing 2,000 and 5,000 samples across different LLMs and datasets. The conclusions are essentially consistent with those in Section 4.2.

D.2 Parameter Sensitivity Analysis

To investigate how our method behaves under different hyperparameters, we adjusted the hyperparameter $\alpha$ to change the threshold $D$ in Problem 6 and analyzed its impact on editing performance. The experimental results are shown in Table 4. As $\alpha$ increases, that is, as the threshold $D$ becomes larger and the constraint is gradually relaxed, we observed the following trends: on the CounterFact dataset, the Efficacy and Generalization metrics, which evaluate knowledge updating, both improve, indicating an enhancement in the model's knowledge-updating performance. However, the Specificity metric, which evaluates knowledge preservation, decreases. This is likely because relaxing the constraint causes the model to focus more on the editing loss. On the ZsRE dataset, although the overall trend is similar, the related metrics fluctuate somewhat, which may be attributed to the characteristics of the dataset itself or the model's adaptability to different datasets. Therefore, the threshold $D$ admits a balance point between the model's editing performance and its general capabilities.

D.3 Case Study

We present output examples of the LLaMA3, GPT-J, and GPT2-XL models after being processed by different editing methods, as shown in Tables 5, 6, and 7. It is found that after sequential editing of 10,000 samples, the
content generated by the baseline methods often fails to include the target knowledge (Edit Target) and tends to produce a large number of meaningless characters or repeated words, which leads to poor text fluency. In contrast, our method not only achieves the desired editing effect but also ensures the fluency of the generated text.

Table 2: Performance results of the sequential editing task (2,000 samples). The abbreviations Eff. (Efficacy), Gen. (Generalization), Spe. (Specificity), Flu. (Fluency), and Consis. (Consistency) denote the respective evaluation metrics. Top-performing results are emphasized in bold in the original, with secondary results underlined.

Method | Eff.↑ | Gen.↑ | Spe.↑ | Flu.↑ | Consis.↑ | Eff.↑ | Gen.↑ | Spe.↑
(first five columns: Counterfact; last three columns: ZsRE)

LLaMA3
Pre-edited | 7.85±0.27 | 10.58±0.27 | 89.48±0.19 | 635.44±0.11 | 24.19±0.09 | 36.99±0.30 | 36.34±0.30 | 31.89±0.23
FT | 93.35±0.25 | 84.15±0.32 | 42.99±0.37 | 234.65±0.28 | 10.15±0.07 | 30.54±0.27 | 30.29±0.27 | 15.47±0.18
ROME | 81.90±0.39 | 71.12±0.37 | 46.98±0.29 | 606.67±0.17 | 7.43±0.10 | 3.29±0.11 | 3.24±0.11 | 0.51±0.03
MEMIT | 64.20±0.48 | 63.18±0.44 | 51.40±0.40 | 394.10±1.55 | 5.78±0.11 | 39.34±0.37 | 34.77±0.36 | 20.45±0.21
PRUNE | 66.80±0.47 | 64.70±0.41 | 50.14±0.37 | 366.09±1.28 | 5.47±0.10 | 0.65±0.04 | 0.58±0.04 | 1.98±0.06
RECT | 65.45±0.48 | 62.70±0.44 | 60.00±0.38 | 521.56±0.44 | 19.04±0.10 | 86.62±0.23 | 81.87±0.27 | 31.67±0.22
AlphaEdit | 99.15±0.09 | 93.15±0.21 | 69.27±0.29 | 621.77±0.17 | 31.93±0.12 | 94.58±0.14 | 91.07±0.19 | 32.40±0.22
LyapLock | 99.85±0.04 | 93.60±0.21 | 81.14±0.23 | 628.97±0.16 | 33.27±0.11 | 95.63±0.12 | 91.89±0.18 | 32.29±0.22

GPT-J
Pre-edited | 15.80±0.36 | 18.10±0.34 | 83.44±0.25 | 621.69±0.14 | 29.46±0.10 | 27.79±0.29 | 27.10±0.29 | 27.54±0.26
FT | 92.15±0.27 | 72.38±0.38 | 43.35±0.37 | 296.91±0.79 | 6.64±0.11 | 72.37±0.30 | 68.91±0.32 | 19.66±0.23
ROME | 54.35±0.50 | 53.92±0.40 | 51.35±0.30 | 565.03±0.08 | 1.43±0.01 | 49.97±0.44 | 48.07±0.43 | 10.13±0.16
MEMIT | 98.50±0.12 | 95.40±0.17 | 64.16±0.31 | 556.11±0.85 | 35.90±0.15 | 95.99±0.15 | 92.92±0.20 | 30.84±0.27
PRUNE | 87.10±0.34 | 87.72±0.28 | 52.98±0.35 | 422.31±0.47 | 15.42±0.13 | 33.43±0.33 | 31.59±0.32 | 21.49±0.24
RECT | 98.50±0.12 | 86.95±0.28 | 72.72±0.28 | 615.06±0.22 | 40.92±0.12 | 96.67±0.13 | 92.74±0.20 | 29.30±0.27
AlphaEdit | 99.75±0.05 | 96.10±0.15 | 76.02±0.26 | 617.70±0.21 | 41.69±0.13 | 99.67±0.04 | 97.16±0.13 | 28.57±0.26
LyapLock | 99.75±0.05 | 95.70±0.16 | 76.41±0.26 | 618.84±0.18 | 41.95±0.12 | 99.69±0.04 | 97.30±0.13 | 28.37±0.26

GPT2-XL
Pre-edited | 22.10±0.41 | 24.45±0.37 | 78.05±0.28 | 626.61±0.12 | 31.33±0.10 | 23.70±0.27 | 22.82±0.27 | 24.97±0.24
FT | 64.75±0.48 | 42.90±0.41 | 54.51±0.33 | 534.70±0.26 | 10.35±0.05 | 31.95±0.37 | 29.48±0.36 | 8.86±0.17
ROME | 51.25±0.50 | 48.58±0.40 | 51.79±0.32 | 424.30±0.40 | 0.71±0.01 | 44.38±0.43 | 39.86±0.42 | 11.54±0.17
MEMIT | 95.10±0.22 | 85.60±0.29 | 60.16±0.32 | 474.20±0.56 | 22.04±0.15 | 80.27±0.32 | 73.46±0.36 | 27.04±0.27
PRUNE | 80.85±0.39 | 77.98±0.35 | 51.06±0.36 | 536.10±0.42 | 13.87±0.10 | 21.37±0.31 | 19.80±0.30 | 13.10±0.19
RECT | 92.35±0.27 | 79.85±0.34 | 65.29±0.32 | 471.17±0.62 | 21.25±0.16 | 83.72±0.29 | 76.28±0.34 | 24.52±0.25
AlphaEdit | 99.50±0.07 | 93.62±0.20 | 66.03±0.29 | 594.10±0.47 | 39.11±0.13 | 91.79±0.21 | 83.19±0.30 | 25.91±0.26
LyapLock | 99.40±0.08 | 92.78±0.21 | 67.33±0.29 | 599.14±0.40 | 39.24±0.13 | 95.36±0.15 | 87.70±0.26 | 26.50±0.26

D.4 More Compatibility Experiment Results
We further demonstrate the performance improvement of our method in combination with other baseline methods across various models and datasets after editing, as shown in Figures 6, 7, 8, 9, and 10. Overall, our method generally enhances both editing performance and downstream task performance when combined with other baselines, although the specific degree of improvement varies with the model, editing dataset, and method used.

Table 3: Performance results of the sequential editing task (5,000 samples). The abbreviations Eff. (Efficacy), Gen. (Generalization), Spe. (Specificity), Flu. (Fluency), and Consis. (Consistency) denote the respective evaluation metrics. Top-performing results are emphasized in bold in the original, with secondary results underlined.

Method | Eff.↑ | Gen.↑ | Spe.↑ | Flu.↑ | Consis.↑ | Eff.↑ | Gen.↑ | Spe.↑
(first five columns: Counterfact; last three columns: ZsRE)

LLaMA3
Pre-edited | 7.02±0.26 | 9.61±0.25 | 89.63±0.19 | 635.25±0.11 | 24.19±0.09 | 36.35±0.30 | 35.73±0.30 | 31.84±0.23
FT | 95.38±0.21 | 85.79±0.30 | 39.60±0.36 | 270.31±0.56 | 16.80±0.11 | 21.04±0.24 | 20.81±0.24 | 9.64±0.14
ROME | 76.04±0.43 | 67.23±0.38 | 46.59±0.28 | 530.79±0.23 | 4.49±0.05 | 3.81±0.11 | 3.65±0.11 | 0.21±0.02
MEMIT | 62.90±0.48 | 51.92±0.44 | 51.28±0.37 | 575.08±0.15 | 2.02±0.03 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
PRUNE | 67.14±0.47 | 55.57±0.43 | 49.42±0.34 | 559.12±0.12 | 3.31±0.03 | 0.02±0.01 | 0.01±0.01 | 0.00±0.00
RECT | 61.82±0.49 | 56.03±0.46 | 50.10±0.41 | 457.37±0.57 | 2.38±0.03 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
AlphaEdit | 97.30±0.16 | 92.29±0.22 | 61.57±0.31 | 606.44±0.27 | 31.90±0.12 | 93.78±0.15 | 89.61±0.21 | 31.96±0.22
LyapLock | 99.16±0.09 | 90.56±0.25 | 73.82±0.27 | 621.58±0.21 | 32.68±0.12 | 95.20±0.13 | 91.58±0.19 | 32.05±0.22

GPT-J
Pre-edited | 14.78±0.35 | 17.17±0.33 | 83.45±0.25 | 621.80±0.14 | 29.48±0.10 | 27.04±0.29 | 26.25±0.28 | 27.00±0.26
FT | 95.28±0.21 | 77.45±0.36 | 42.22±0.37 | 351.58±0.93 | 10.53±0.13 | 68.14±0.32 | 65.12±0.34 | 16.27±0.21
ROME | 50.48±0.50 | 51.18±0.40 | 52.46±0.31 | 576.99±0.15 | 1.87±0.01 | 28.64±0.40 | 26.29±0.39 | 1.82±0.39
MEMIT | 89.44±0.31 | 82.47±0.32 | 56.91±0.34 | 315.78±0.86 | 12.12±0.14 | 72.26±0.36 | 69.33±0.37 | 25.80±0.26
PRUNE | 74.12±0.44 | 67.09±0.39 | 54.32±0.36 | 397.92±0.70 | 9.29±0.11 | 3.93±0.11 | 3.75±0.11 | 4.83±0.11
RECT | 95.36±0.21 | 81.14±0.33 | 65.50±0.31 | 539.06±0.65 | 31.39±0.14 | 87.71±0.25 | 83.76±0.29 | 26.19±0.26
AlphaEdit | 99.48±0.07 | 94.70±0.18 | 68.93±0.28 | 607.42±0.28 | 40.66±0.13 | 98.97±0.07 | 94.23±0.19 | 26.18±0.25
LyapLock | 99.64±0.06 | 94.72±0.18 | 70.66±0.28 | 617.83±0.18 | 42.08±0.12 | 99.56±0.04 | 95.58±0.17 | 26.76±0.25

GPT2-XL
Pre-edited | 21.50±0.41 | 23.88±0.37 | 78.24±0.28 | 626.51±0.12 | 31.27±0.10 | 22.80±0.27 | 21.87±0.26 | 24.32±0.24
FT | 67.62±0.47 | 56.37±0.43 | 50.40±0.37 | 582.25±0.52 | 10.61±0.07 | 22.79±0.35 | 19.95±0.33 | 4.40±0.11
ROME | 51.02±0.50 | 49.43±0.41 | 51.44±0.32 | 472.37±0.30 | 0.78±0.01 | 34.47±0.42 | 31.73±0.40 | 3.82±0.10
MEMIT | 69.32±0.46 | 63.88±0.41 | 56.96±0.35 | 575.07±0.56 | 16.82±0.12 | 23.96±0.35 | 20.74±0.32 | 11.97±0.18
PRUNE | 54.86±0.50 | 52.72±0.42 | 51.43±0.36 | 584.86±0.23 | 14.92±0.07 | 2.55±0.11 | 2.50±0.10 | 2.98±0.08
RECT | 90.68±0.29 | 75.22±0.36 | 59.27±0.33 | 494.77±0.71 | 14.80±0.15 | 68.58±0.37 | 62.54±0.38 | 20.58±0.23
AlphaEdit | 98.52±0.12 | 88.22±0.25 | 60.99±0.29 | 571.43±0.47 | 36.01±0.14 | 80.83±0.32 | 72.36±0.36 | 20.77±0.23
LyapLock | 98.40±0.13 | 88.14±0.26 | 63.07±0.29 | 584.82±0.45 | 36.93±0.13 | 92.89±0.20 | 84.06±0.29 | 24.97±0.25

Figure 6: The improvement in editing performance and downstream