these two heuristics were labeled as degeneration cases and added to the degeneration dataset. For each such case, we attempted to construct a sanitized version by removing the degenerated outputs and recomputing statistics if enough clean outputs remained. When all outputs were degenerated (e.g., all reached the 15,000-token cap), the sample was excluded from the sanitized dataset. This two-part filtering strategy allowed us to detect degeneration robustly without over-penalizing natural variation in model verbosity.

E CASTILLO Schema

Table 3 outlines the schema of the samples generated by the CASTILLO dataset. It includes metadata for each sample—such as the prompt, model, source dataset for the prompt, and generation parameters—alongside statistical features like the mean, standard deviation, and percentiles of response lengths.

Model: microsoft/Phi-4-mini-instruct
Dataset: Apps
Output Sizes: [497, 531, 528, 13443, 449, 353, 476, 420, 636, 423]
Prompt (Hash ID: 3e602f902b72fddd): QUESTION: Write a function that returns the index of the first occurrence of the word "Wally". "Wally" must not be part of another word, but it can be directly followed by a punctuation mark.
If no such "Wally" exists, retur […]

Shortest Response:

```python
def wheres_wally(string):
    import re
    # Escape periods to ensure they don't interfere with the matching
    string = re.sub(r'\.', r'\\.', string)
    # Use regular expression to match 'Wally' boundaries
    pattern = r'\W+Wally(?=\W|$)'  # Lookahead for non-word character or end of string
    # Find all matches
    matches = re.findall(pattern, string)
    if matches:
        return string.index(matches[0])
    return -1

# Test cases to validate the function
test_input = [
    '', 'WAlly', 'wAlly', 'DWally', '.Wally', 'Wallyd',
    "wally mollyWally Wallybrolly 'Wally",
    'Walley ,Wally -Wally ;Wally +Wally :Wally',
    'Walley Wally, Wally- Wally: Wally+ Wally:',
    '12Wally Wally01 W.ally', "Where's Waldo", 'Wally', 'Wally Wally',
    'w ally Wally', "Where's Wally", 'Hi Wally.', "It's Wally's.",
    "'Wally Wally", "Hello Wally my name is Dolly"
]
outputs = [wheres_wally(string) for string in test_input]
for input, expected, output in zip(test_input, outputs, output_str):
    print(f"Input: {input!r}")
    print(f"Expected Output: {expected}")
    print(f"Output: {output}\n")
```

Longest Response:

Here's a Python function `wheres_wally` that fulfills the given requirements:

```python
def wheres_wally(string):
    # Split the string into words
    […]

# Expected: 7
print(wheres_wally("Wally Wally, Wally Wally Wally Wally Wally Wally Wally Wally Wally Wally […] Wally Wally Wally Wally
```

This function reads the string and splits it into words, then looks for the first instance of `Wally` being a standalone word (not part of another word) and is followed by a punctuation, whitespace, or any one of the listed characters, without being part of another word. If found, it returns the word's index within the string; otherwise, it returns -1.
Note the function does not necessarily cover all edge cases due to potential word punctuations being ignored on split, so adjustments may be necessary to consider whole word searches and string manipulation. […]

Figure 17: Text degeneration example of a sample that did not reach the maximum text generation output, using the model phi-4 on a sample from the Apps dataset.

Field               Type              Description
sample_id           string            Unique hash including prompt, model, dataset, and generation parameters
prompt_id           string            Hash identifying the base prompt
model               string            LLM used for generation
dataset             string            Name of the dataset the prompt comes from
prompt_text         string            Raw text of the prompt
longest_response    string            Longest response from the batch
shortest_response   string            Shortest response from the batch
input_size          int               Number of input tokens
output_sizes        List[int]         List of response lengths (token count)
output_mean         float             Mean of output_sizes
output_std          float             Standard deviation of output_sizes
output_percentiles  Dict[str, float]  25th, 50th, 75th, 99th percentiles
top_k               int               Top-k sampling parameter
top_p               float             Top-p (nucleus) sampling parameter
temp                float             Temperature used for sampling
category            string            Prompt category (e.g., question, instruction)
gen_time            float             Time taken to generate the batch (seconds)

Table 3: Schema of the samples generated by the CASTILLO dataset.

These fields support both regression and classification tasks aimed at modeling and predicting LLM output behaviors.

E.1 Identifying the samples in the dataset

We use two different hashes to identify the samples of the dataset: one identifying the prompt and another identifying a sample within the dataset. The prompt hash is built from the dataset name concatenated with the prompt text. The sample ID is generated from the same information used by the prompt ID, plus the model name and the generation configuration parameters (temperature, top-k, and top-p). We choose a resolution of 16 characters (64 bits) for the stored hashes.

Hash collisions in sample IDs. For a 64-bit hash space, the approximate number of samples before a 50% chance of any collision (the "birthday bound") is:

ñ ≈ 1.2 × √(2^64) ≈ 5 × 10^9

We generate approximately 130,000 samples, so the probability of any collision among n = 130,000 samples in a space of N = 2^64 is:

P ≈ 1 − e^(−n(n−1) / (2 · 2^64)) ≈ 4.581 × 10^(−10) ≈ 0.00000004581%

Source: https://arxiv.org/abs/2505.16881v1
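The hashing scheme and the collision bound above can be sketched in a few lines. This is an illustrative sketch only: the choice of SHA-256 and the exact concatenation format are our assumptions, since the text does not name the hash function.

```python
import hashlib
import math

def prompt_id(dataset: str, prompt: str) -> str:
    """Prompt hash: dataset name concatenated with the prompt text,
    truncated to 16 hex characters (64 bits). SHA-256 is an assumption;
    the paper does not specify the hash function."""
    return hashlib.sha256((dataset + prompt).encode("utf-8")).hexdigest()[:16]

def sample_id(dataset: str, prompt: str, model: str,
              temp: float, top_k: int, top_p: float) -> str:
    """Sample hash: the prompt-ID inputs plus model name and generation
    parameters (temperature, top-k, top-p), as described in Section E.1."""
    key = f"{dataset}{prompt}{model}{temp}{top_k}{top_p}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]

# Collision risk in the 64-bit (16 hex char) ID space for ~130,000 samples,
# using the birthday-bound approximation from the text:
n, N = 130_000, 2 ** 64
p_any_collision = 1 - math.exp(-n * (n - 1) / (2 * N))
print(f"{p_any_collision:.3e}")  # ≈ 4.581e-10, matching the value above
```

Even generating a thousand times more samples would keep the collision probability far below one in a million, which is why a 16-hex-character resolution suffices here.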
Don't "Overthink" Passage Reranking: Is Reasoning Truly Necessary?

Nour Jedidi¹, Yung-Sung Chuang², James Glass², Jimmy Lin³
¹MIT Lincoln Laboratory  ²Massachusetts Institute of Technology  ³University of Waterloo
nour.jedidi@ll.mit.edu

Abstract

With the growing success of reasoning models across complex natural language tasks, researchers in the Information Retrieval (IR) community have begun exploring how similar reasoning capabilities can be integrated into passage rerankers built on Large Language Models (LLMs). These methods typically employ an LLM to produce an explicit, step-by-step reasoning process before arriving at a final relevance prediction. But does reasoning actually improve reranking accuracy? In this paper, we dive deeper into this question, studying the impact of the reasoning process by comparing reasoning-based pointwise rerankers (ReasonRR) to standard, non-reasoning pointwise rerankers (StandardRR) under identical training conditions, and observe that StandardRR generally outperforms ReasonRR. Building on this observation, we then study the importance of reasoning to ReasonRR by disabling its reasoning process (ReasonRR-NoReason), and find that ReasonRR-NoReason is surprisingly more effective than ReasonRR. Examining the cause of this result, our findings reveal that reasoning-based rerankers are limited by the LLM's reasoning process, which pushes them toward polarized relevance scores and thus fails to consider the partial relevance of passages, a key factor for the accuracy of pointwise rerankers.

1 Introduction

Recently, there has been a surge of interest in reasoning models such as DeepSeek-R1 (Guo et al., 2025), OpenAI's o3, Qwen3 (Yang et al., 2025), and others. By generating an explicit reasoning process — i.e., a chain-of-thought (CoT) — prior to producing its final response, reasoning models have shown strong performance across a wide range of complex natural language tasks such as mathematics (Yang et al., 2024).
Following the success of reasoning models, researchers in the Information Retrieval (IR) community have begun to explore how incorporating a reasoning process can improve Large Language Model (LLM) based retrieval systems (Yan et al., 2025; Ji et al., 2025; Shao et al., 2025; Weller et al., 2025; Zhuang et al., 2025), especially with the introduction of reasoning-intensive retrieval benchmarks such as BRIGHT (Su et al., 2025). In particular, recent work has explored incorporating a reasoning process to improve LLM-based pointwise (Weller et al., 2025) and setwise (Zhuang et al., 2025) rerankers, showing promising results on reasoning-intensive retrieval tasks.

Figure 1: Average NDCG@10 of reasoning pointwise rerankers (ReasonRR) compared to their non-reasoning variants (StandardRR and ReasonRR-NoReason) on MS MARCO and BRIGHT.

arXiv:2505.16886v1 [cs.IR] 22 May 2025

However, the importance of the explicit reasoning process in rerankers, when compared to standard, non-reasoning rerankers under identical training conditions, remains an open question. In this paper, we study the necessity of this explicit reasoning process and ask: Does scaling test-time compute — via generation of reasoning tokens prior to making a relevance prediction — actually improve reranking accuracy? To investigate this, we examine two perspectives:

• RQ1: Under identical training setups (e.g., training data and backbone LLM), how do reasoning rerankers compare against standard, non-reasoning rerankers?

• RQ2: How does
disabling the reasoning process of a reasoning reranker affect its reranking accuracy?

To answer these research questions, we train and evaluate three reranker variants: (1) StandardRR, a standard LLM-based pointwise reranker that directly classifies query-passage pairs as relevant or non-relevant (Nogueira et al., 2020; Ma et al., 2024); (2) ReasonRR, our reproduction of Rank1 (Weller et al., 2025), which generates a reasoning chain prior to making a relevance prediction; and (3) ReasonRR-NoReason, a modified version of ReasonRR in which the explicit reasoning process is disabled at inference time by providing a forced reasoning process, effectively transforming ReasonRR into a standard pointwise reranker. The central findings of our experiments can be summarized as follows and are shown in Figure 1:

• Under identical training setups, we find no general advantage of the reasoning process for pointwise reranking. While certain domains and LLM scales benefit from reasoning, on average, StandardRR did better than ReasonRR on both in-domain and out-of-domain datasets.

• In fact, we find that reasoning can even degrade effectiveness for rerankers explicitly trained to reason (i.e., ReasonRR). As shown in Figure 1, ReasonRR-NoReason outperforms ReasonRR by 1.4 points in-domain (MS MARCO) and 3 points out-of-domain (BRIGHT) in terms of NDCG@10.

• Further investigation suggests that this likely stems from the reasoning process forcing the model toward polarized relevance scores, which do not account for the partial relevance of passages. Our results show that while ReasonRR is a better relevance classifier than ReasonRR-NoReason, ReasonRR-NoReason placed more emphasis on partial relevance scores, contributing to its better reranking accuracy.

• While we find that self-consistency (Wang et al., 2023) can bring improvements to ReasonRR, it is still outperformed by StandardRR.
Given this, our results suggest that practitioners are better served by simpler methods like StandardRR, which is more accurate and cost-effective.

Our findings build upon recent research in the NLP community that questions the necessity of the reasoning process for LLMs (Ma et al., 2025). We hope our work not only encourages future work that can improve reasoning for reranking tasks, but also highlights the importance of comparing against strong, simple baselines when developing new methodologies.

2 Background

In this section, we provide brief background on pointwise rerankers (StandardRR) and reasoning pointwise rerankers (ReasonRR), as they form the basis of our study. We emphasize that ReasonRR is our reproduction of Rank1 (Weller et al., 2025); however, we refer to it as ReasonRR to maintain consistency with the usage of StandardRR throughout the paper.

Preliminaries. The goal of information retrieval (IR) is to identify relevant passages from a large collection of n texts, denoted by C = {P_1, P_2, ..., P_n}, given a user-issued query, q. Current IR systems typically employ a multi-stage pipeline in which a first-stage retriever fetches an initial set of k passages from C and a reranker reorders the top-k passages {P_1, P_2, ..., P_k}, where k ≪ n, to produce a more accurate ranking.

StandardRR. StandardRR is trained as a pointwise reranker, independently producing a relevance score, R,
for a given query-passage pair. To train StandardRR, the simplest approach is to directly fine-tune an LLM to produce the tokens "true" or "false" given a dataset of (query, passage, relevance label) triples, where "true" and "false" denote relevant or not relevant, respectively. At inference, for each query-passage pair (q, P_i) in the top-k, the probability of relevance, R, is computed by applying a softmax exclusively to the logits corresponding to the tokens "true" and "false":

R = softmax( z_true(q, P_i), z_false(q, P_i) )_true    (1)

Here, z_true(q, P_i) and z_false(q, P_i) denote the logits assigned by the LLM to the "true" and "false" tokens, given input (q, P_i). The subscript "true" after the softmax indicates that only the probability assigned to the token "true" is taken as R. The passages are then sorted in descending order of R. We note that recent methods, such as RankLLaMA (Ma et al., 2024), train pointwise rerankers using hard negatives sampled from the top-ranking results of a first-stage retriever. However, as our goal is to keep the training setup identical to that of ReasonRR, which we describe next, we do not consider hard negatives.

ReasonRR. ReasonRR builds upon the setup described for StandardRR by fine-tuning an LLM to first generate a reasoning process, r, before producing the tokens "true" or "false". To do so, ReasonRR is fine-tuned with a dataset of (query, passage, r, relevance label) quadruples. Following Equation 1, R is again computed by considering the softmax over the logits of the "true" and "false" tokens, but in this case, R also conditions on the LLM's generated reasoning process, r:

R = softmax( z_true(q, P_i, r_i), z_false(q, P_i, r_i) )_true    (2)

where r_i is the reasoning process generated for input (q, P_i). The passages are then reordered as described for StandardRR.

3 Does Reasoning Improve Rerankers?
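Before turning to the experiments, note that the two-way softmax in Equations 1 and 2 reduces to a sigmoid of the logit difference. A minimal sketch with toy logits (illustrative numbers, not the paper's code):

```python
import math

def relevance_score(z_true: float, z_false: float) -> float:
    """R = softmax(z_true, z_false)_true, which is algebraically
    equivalent to sigmoid(z_true - z_false)."""
    return 1.0 / (1.0 + math.exp(z_false - z_true))

# Rerank three candidate passages by descending R (toy logits).
logits = {"P1": (2.0, -1.0), "P2": (0.5, 0.4), "P3": (-3.0, 1.0)}
R = {p: relevance_score(zt, zf) for p, (zt, zf) in logits.items()}
print(sorted(R, key=R.get, reverse=True))  # ['P1', 'P2', 'P3']
```

Because only the "true"/"false" logits enter the softmax, R is a continuous score in (0, 1) even though the model was trained on binary labels, which is what makes ranking by R possible at all.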
In this section, we study the impact of reasoning on pointwise rerankers through two different lenses: (1) how does StandardRR compare to ReasonRR when trained under the same settings? And (2) how is ReasonRR's reranking accuracy affected if we forcefully remove its reasoning process (ReasonRR-NoReason)? Through these two perspectives, we hope to shed light on the different ways reasoning may influence reranking accuracy.

3.1 RQ1: StandardRR vs. ReasonRR

Our first experiment aims to understand the importance of reasoning from the training perspective. Specifically, if we train StandardRR on the exact same data as ReasonRR, but omit the reasoning chain, how does performance compare? To answer this research question, we train pointwise rerankers of varying sizes, with and without reasoning chains.

Experiment Setup. To train the rerankers, we leverage the training data provided by Weller et al. (2025).¹ The dataset augments MS MARCO (Bajaj et al., 2016) with reasoning chains generated by DeepSeek-R1 (Guo et al., 2025), which include R1's final relevance predictions. The dataset consists of approximately 386K quadruples in the following format: (query, passage, R1's reasoning chain, relevance label).

¹ https://huggingface.co/datasets/jhu-clsp/rank1-training-data

For the backbone LLM, we leverage the Qwen2.5 base models (Yang et al., 2024), ranging from 1.5B to 7B parameters. To train ReasonRR, we fine-tune using LoRA (Hu et al., 2022) for one epoch with rank 32 and alpha 64. To train StandardRR, we follow the same setup but only use the (query, passage, relevance label) triples, omitting R1's reasoning chain.

We evaluate StandardRR and ReasonRR on in-domain and out-of-domain retrieval datasets. For in-domain evaluation, we leverage passage ranking datasets based on MS MARCO v1—TREC DL19 and TREC DL20 (Craswell et al., 2020, 2021b)—and based on MS MARCO v2—TREC DL21, TREC DL22, and TREC DL23 (Craswell et al., 2021a, 2022, 2023). For out-of-domain evaluation, we focus on BRIGHT (Su et al., 2025), a reasoning-intensive retrieval benchmark. We report NDCG@10, the official metric for both the MS MARCO and BRIGHT datasets.

At inference, models rerank the top-100 passages retrieved by BM25. For BRIGHT, models rerank passages retrieved by BM25 using queries expanded with GPT-4 CoT; however, following Weller et al. (2025), the rerankers are not provided the GPT-4 CoT. For MS MARCO, we implement BM25 using Pyserini (Lin et al., 2021), and for BRIGHT, we follow the implementation from the BRIGHT codebase. LLM training was performed using HuggingFace (Wolf et al., 2019) and inference with vLLM (Kwon et al., 2023).

                              DL19   DL20   DL21   DL22   DL23
BM25                          50.6   48.0   44.6   26.9   26.3
+ Qwen2.5-1.5B StandardRR     73.1   69.4   68.9   50.7   44.2
               ReasonRR       68.7   63.1   65.7   43.3   38.8
+ Qwen2.5-3B   StandardRR     72.5   68.9   69.4   51.4   45.5
               ReasonRR       70.4   66.4   65.9   45.2   41.3
+ Qwen2.5-7B   StandardRR     74.6   70.0   70.9   50.3   46.3
               ReasonRR       70.3   64.3   65.9   45.6   41.1

Table 1: In-domain performance of StandardRR versus ReasonRR. DL19 and DL20 are based on MS MARCO v1; DL21–DL23 on MS MARCO v2. Each Qwen2.5 model reranks the top-100 passages from BM25.

                            Bio.  Earth.  Econ.  Psy.  Rob.  Stack.  Sus.  Leet.  Pony  AoPS  TheoQ.  TheoT.  Avg.
BM25 + GPT-4 CoT            53.6   54.1   24.3  38.7  18.9    27.7  26.3   19.3  17.6   3.9    19.2    20.8  27.0
+ Qwen2.5-1.5B StandardRR   37.0   21.7   16.8  23.1  16.1    10.0  26.3    2.6  30.6   1.8    16.1    26.1  19.0
               ReasonRR     32.5   20.3   12.3  25.5  11.1    15.3  23.5    6.6  12.3   3.4    10.6    13.7  15.6
+ Qwen2.5-3B   StandardRR   41.6   27.1   20.9  31.9  22.2    16.9  30.3   13.2  42.0   2.7    16.2    30.6  24.6
               ReasonRR     37.3   27.8   20.7  33.1  18.3    24.3  25.2   11.3  26.2   4.7    20.7    34.0  23.6
+ Qwen2.5-7B   StandardRR   47.1   38.0   28.1  44.1  26.1    29.5  36.5   19.3  37.5   4.6    22.4    39.4  31.0
               ReasonRR     47.0   35.4   24.0  35.2  20.0    25.2  31.0   15.1  36.0   5.9    22.2    36.6  27.8

Table 2: Out-of-domain performance of StandardRR versus ReasonRR. Bio. through Sus. are StackExchange tasks, Leet. and Pony are coding tasks, and AoPS through TheoT. are theorem-based tasks. Each Qwen2.5 model reranks the top-100 passages from BM25 + GPT-4 CoT.

Results. In Tables 1 and 2, we present the evaluation results for both in-domain and out-of-domain retrieval tasks. As shown in Appendix E, ReasonRR is comparable to Rank1, achieving a similar NDCG@10 (27.8 versus 27.5, respectively), confirming that our implementation is valid.

On MS MARCO, we find that StandardRR outperforms ReasonRR by an average of 5.3, 3.7, and 5 points across the 1.5B, 3B, and 7B model sizes, respectively. Surprisingly, on BRIGHT, we find
a similar story: StandardRR outperforms ReasonRR, achieving 3.4, 1, and 3.2 points higher average NDCG@10 across the 1.5B, 3B, and 7B model sizes. However, while StandardRR always outperformed ReasonRR in-domain, the out-of-domain results suggest that reasoning can be beneficial depending on the model scale and domain. For example, at the smaller model scales (1.5B and 3B), ReasonRR achieves higher NDCG@10 than StandardRR on the Psychology (Psy.), Stack Overflow (Stack.), and AoPS datasets. At the 7B scale, while StandardRR begins to consistently outperform ReasonRR, we find that ReasonRR still performs better on the AoPS dataset.

All in all, these results suggest that while reasoning can improve rerankers for certain model sizes and domains, training a ReasonRR-style pointwise reranker does not provide any general advantage over StandardRR.

3.2 RQ2: How Important is the Reasoning Process to ReasonRR?

Our results up to this point demonstrated that, under the exact same training regime, rerankers trained to simply output a relevance prediction (StandardRR) outperform, on average, rerankers trained to reason prior to making the relevance prediction (ReasonRR). But what if we disable the reasoning for ReasonRR? We hypothesize that if the reasoning is crucial to ReasonRR's relevance prediction, its reranking accuracy should drop when it does not reason.

Qwen2.5  Method               MS MARCO  BRIGHT
1.5B     StandardRR               61.3    19.0
         ReasonRR                 55.9    15.6
         ReasonRR-NoReason        56.7    11.6
3B       StandardRR               61.5    24.6
         ReasonRR                 57.8    23.6
         ReasonRR-NoReason        58.3    23.4
7B       StandardRR               62.4    31.0
         ReasonRR                 57.4    27.8
         ReasonRR-NoReason        58.8    30.8

Table 3: Studying the effect of removing the reasoning process from pointwise rerankers with reasoning. Results on MS MARCO and BRIGHT represent an average across the corresponding datasets. Bold results denote the best between ReasonRR and ReasonRR-NoReason. See Appendix D for results on individual datasets.

Experiment Setup.
In order to disable the reasoning for ReasonRR (ReasonRR-NoReason), we pre-fill the LLM's reasoning with a "forced" reasoning process: <think> Okay, I think I have finished thinking. </think>, following the setup from Ma et al. (2025). We then follow the same evaluation setup as in Section 3.1. Note that this, in essence, turns ReasonRR into a standard pointwise reranker, as it only needs to output the relevance label.

                     DL19               DL20               DL21               DL22               DL23
                     P     R     F1     P     R     F1     P     R     F1     P     R     F1     P     R     F1
StandardRR           71.4  80.3  75.6   54.5  79.3  64.6   56.4  87.1  68.5   49.1  70.0  57.6   39.7  66.9  49.8
ReasonRR             65.9  82.4  73.2   49.2  82.1  61.5   54.0  89.2  67.3   41.6  73.2  53.1   35.6  61.9  45.2
+ Self-Consistency   65.7  85.5  74.3   49.1  84.0  62.0   53.5  90.1  67.2   43.0  76.0  54.9   36.0  66.5  46.8
ReasonRR-NoReason    60.2  84.2  70.2   44.7  84.8  58.6   52.0  92.7  66.6   36.6  79.2  50.1   34.1  73.9  46.7

Table 4: Comparison of relevance classification performance (Precision, Recall, and F1-score) across Qwen2.5-7B reranker variants. DL19 and DL20 are based on MS MARCO v1; DL21–DL23 on MS MARCO v2.

Results. The results of this experiment can be found in Table 3. For MS MARCO, ReasonRR-NoReason is consistently more effective than ReasonRR, improving by an average of 0.8, 0.5, and 1.4 points across the 1.5B, 3B, and 7B model sizes, respectively. On BRIGHT,
ReasonRR-NoReason is less effective for smaller models (1.5B and 3B), but as model size increases, ReasonRR-NoReason begins to outperform ReasonRR. In fact, at the 7B scale, ReasonRR-NoReason improves by 3 points over ReasonRR on BRIGHT, closing the gap between StandardRR and ReasonRR.

These findings are remarkably concordant with those of Section 3.1: (1) for in-domain datasets, reasoning reduces reranking effectiveness across all model sizes, and (2) reasoning appears to be more beneficial out-of-domain for smaller rerankers, but as the LLM size increases, any benefits diminish, and reasoning actually hurts reranking accuracy — even for ReasonRR, which was trained to reason prior to making a relevance prediction.

4 Why Does Reasoning Hurt Rerankers?

One reason why ReasonRR may perform worse than StandardRR is that ReasonRR has poorly calibrated and polarized probabilities for ranking, due to the conclusions made by its reasoning process. For example, ReasonRR will almost always assign very high probabilities when its reasoning concludes that a passage is relevant, and thus may not be able to reflect that one passage is more relevant than another. On the other hand, as StandardRR is trained to only output "true" or "false", it may implicitly learn to output scores that account for one passage being more relevant than another. Due to this, we hypothesize that StandardRR can better model the partial relevance of query-passage pairs, making its outputs less polarized and preserving the uncertainty of scores, which can be essential for the effectiveness of pointwise rerankers.

In this section, we dive deeper into this hypothesis. First, we investigate how ReasonRR compares to StandardRR and ReasonRR-NoReason as a simple binary relevance classifier. Then, we compare the relevance score distributions for StandardRR, ReasonRR, and ReasonRR-NoReason and examine a qualitative example of ReasonRR's reasoning process.
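The hypothesis above can be made concrete with a toy sketch (illustrative numbers, not from the paper): two rerankers that make identical binary relevance calls at a 0.5 threshold can still differ sharply in NDCG@10 when one polarizes its scores and loses the ordering among relevant passages. The NDCG implementation below uses linear gain, one common convention.

```python
import math

def ndcg_at_10(scores, judgments):
    """NDCG@10 with linear gain: rank passages by score, discount the graded
    judgments logarithmically by rank, and normalize by the ideal ordering."""
    ranked = [judgments[p] for p in sorted(scores, key=scores.get, reverse=True)]
    ideal = sorted(judgments.values(), reverse=True)

    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:10]))

    return dcg(ranked) / dcg(ideal)

# Toy judgments: 3 = highly relevant, 1 = partially relevant, 0 = not relevant.
judgments = {"P1": 3, "P2": 1, "P3": 0}

# Both score sets agree on binary relevance at a 0.5 threshold, but the
# polarized one collapses the two relevant passages to ~1.0 and loses their order.
polarized = {"P1": 0.990, "P2": 0.999, "P3": 0.01}
graded = {"P1": 0.90, "P2": 0.40, "P3": 0.01}

print(ndcg_at_10(polarized, judgments))  # ≈ 0.797: P2 wrongly outranks P1
print(ndcg_at_10(graded, judgments))     # 1.0: ideal ordering preserved
```

In other words, a reranker can be a perfect binary classifier and still rank poorly, which is exactly the mismatch the following subsections measure.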
Finally, we discuss the results and propose potential improvements for ReasonRR.

4.1 Relevance Classification Comparison

We first study how the different reranking methods compare as simple relevance classifiers, setting aside their reranking accuracy as measured by metrics like NDCG@10, which is ultimately what we care about. Doing so allows us to better understand how much of the difference in effectiveness can be attributed to one method simply being a worse relevance classifier. In most cases, better relevance classification should result in better reranking accuracy.

For this experiment, we set y_pred = 1 if R > 0.5, and y_pred = 0 otherwise. For the ground-truth relevance judgments, we treat judgments > 2 (corresponding to highly relevant and perfectly relevant) as positive labels and the rest as negative labels, following standard practice for binary measures in IR (MacAvaney et al., 2022).

The results for the Qwen2.5-7B models are in Table 4. Comparing StandardRR to ReasonRR, we find that in terms of F1-score and precision, StandardRR is consistently stronger than ReasonRR. However, ReasonRR generally has higher recall than StandardRR, indicating that ReasonRR classifies passages as relevant more frequently. This is further confirmed in Figure 2, which we discuss in the next subsection. Surprisingly, ReasonRR-NoReason is generally worse at relevance classification than
ReasonRR (in terms of F1 and precision), yet outperforms it in terms of retrieval metrics, as discussed in Section 3.2. Over the next two subsections, we provide potential explanations for this observation.

Figure 2: Relevance Scores Distribution across Qwen2.5-7B reranker variants on DL19.

4.2 Relevance Scores Distribution

Our observations in Section 4.1 revealed a mismatch between relevance classification precision and reranking accuracy metrics (i.e., NDCG@10) for ReasonRR versus ReasonRR-NoReason. To better understand why this may be the case, we plot the distribution of the relevance scores across the Qwen2.5-7B rerankers, shown in Figure 2.

We find that StandardRR and ReasonRR each place a similar proportion of their predictions, around 70% of their scores, in the low-relevance bin (0–0.1). However, while StandardRR spreads its remaining scores across both the partial-relevance (0.1–0.9) region (11.4%) and the high-relevance (0.9–1.0) region (19.7%), ReasonRR exhibits a more extreme distribution. It places almost no scores in intermediate regions, assigning essentially any passage with partial relevance exclusively to the very high relevance bins (29.0%). This demonstrates that the reasoning process leads the model to make polarized decisions, either relevant or not relevant, and rarely assign partial relevance, matching our hypothesis.

Interestingly, if we take ReasonRR and forcefully remove its reasoning process as done in Section 3.2 (i.e., ReasonRR-NoReason), this is no longer the case. While the majority of the relevance scores are still at the tails of the distribution, ReasonRR-NoReason places more of its predictions across the partial-relevance bins (21.4%), while placing fewer predictions in the low-relevance and high-relevance bins.
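The three-bin breakdown used in this subsection (low 0–0.1, partial 0.1–0.9, high 0.9–1.0) is easy to compute from a list of scores. A minimal sketch with toy scores; the handling of scores exactly on a bin edge is our assumption:

```python
def bin_fractions(scores):
    """Fractions of relevance scores in the low (0-0.1), partial (0.1-0.9),
    and high (0.9-1.0) bins; edge assignment (<=, >=) is an assumption."""
    low = sum(1 for r in scores if r <= 0.1)
    high = sum(1 for r in scores if r >= 0.9)
    partial = len(scores) - low - high
    n = len(scores)
    return low / n, partial / n, high / n

# Toy scores shaped like the polarized distribution described for ReasonRR:
# most mass at the tails, very little in the partial-relevance region.
scores = [0.02, 0.03, 0.05, 0.08, 0.96, 0.98, 0.99, 0.55]
print(bin_fractions(scores))  # (0.5, 0.125, 0.375)
```

Applied to a reranker's scores over a full run, this yields exactly the per-bin percentages quoted above (e.g., 11.4% partial for StandardRR versus near zero for ReasonRR).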
4.3 Qualitative Study

Finally, to better understand how ReasonRR handles partially relevant passages, we examine the reasoning process of ReasonRR for a random example in which StandardRR produces a partially relevant prediction (i.e., in the 0.1–0.9 bin). The qualitative example can be found in Table 5. Looking at the reasoning process, we find that ReasonRR explicitly mentions that the passage is "somewhat relevant", but because it has to select between two binary options (relevant or not relevant), it reasonably selects the passage as relevant. This subsequently makes R high (0.999) as ReasonRR finishes its reasoning stating "Therefore, the answer is true". This example shows how the reasoning process can bottleneck ReasonRR's ability to measure any concept of partial relevance, even when the model itself states that the passage does not fully address the query.

4.4 Discussion

The findings from Sections 4.1 and 4.2 suggest that the ability to assign partial relevance is critical to the effectiveness of pointwise rerankers, not just the ability to predict whether a passage is relevant or not. While ReasonRR achieves higher effectiveness on binary relevance classification metrics compared to ReasonRR-NoReason, it still falls behind ReasonRR-NoReason in terms of NDCG@10. This observation suggests that the improvements of StandardRR over ReasonRR arise not only from (1) StandardRR being a better relevance classifier, but also from (2) StandardRR having a stronger ability to capture partial relevance. As ReasonRR and ReasonRR-NoReason only differ in their use of reasoning and
ReasonRR-NoReason is more effective in reranking, it would appear that (2) is the more crucial factor for reranking accuracy, highlighting the importance of partial relevance modeling for pointwise rerankers.

Can injecting a concept of partial relevance into ReasonRR help? We explore a simple method to incorporate partial relevance into ReasonRR: self-consistency (Wang et al., 2023), which we denote as ReasonRR + Self-Consistency. Unlike the majority-vote approach used by Wang et al. (2023), we average the predicted R values across eight sampled outputs from ReasonRR to produce a continuous score suitable for reranking.

Query      how to help a jammed finger

Passage    Jammed finger vs. broken finger. Share on Pinterest. A broken finger is often more painful than a jammed finger, although both may be treated using a splint. A broken finger will cause severe pain and swelling that lasts for hours or even days. While a jammed finger is painful, it is not usually severe.

Reasoning  Okay, let's see. The user is asking how to help a jammed finger. The passage provided talks about the difference between a jammed finger and a broken finger. It mentions that both might be treated with a splint, but a broken finger is more painful and has more swelling. First, the query is about treatment for a jammed finger. The passage does mention that a jammed finger can be treated with a splint, which is a direct answer to the query. However, the main focus of the passage is comparing jammed and broken fingers, not giving detailed treatment steps. It does say that both might use a splint, which is relevant. But the passage doesn't go into other treatment methods like icing, elevation, or when to see a doctor. So, the passage is somewhat relevant because it mentions a treatment (splint) for a jammed finger. But it's not comprehensive. The user might need more information, but the passage does provide a basic answer. However, the question is whether the passage is relevant, not if it's complete. Since it does address treatment (splint) for a jammed finger, even briefly, it's relevant. The comparison to a broken finger might also help in understanding when a jammed finger isn't severe. Therefore, the answer is true.

R          StandardRR: 0.374   ReasonRR: 0.999   ReasonRR-NoReason: 0.810   ReasonRR + Self-Consistency: 0.877

Table 5: An example of ReasonRR's (Qwen2.5-7B) reasoning process where it scores a partially relevant passage as highly relevant. As reference, we additionally provide the probability of relevance, R, for all reranker variants for the provided query-passage pair.

                      MS MARCO  BRIGHT
StandardRR                62.4    31.0
ReasonRR-NoReason         58.8    30.8
ReasonRR                  57.4    27.8
+ Self-Consistency        59.2    30.7

Table 6: Influence of Self-Consistency on ReasonRR. For this experiment, the base LLM is Qwen2.5-7B. Bold results denote the best between ReasonRR and ReasonRR + Self-Consistency. See Appendix D for results on individual datasets.

The results of this experiment can be found in Table 6, and the corresponding relevance distribution is shown in Figure 3. By leveraging self-consistency decoding, ReasonRR distributes its relevance scores away from the low-relevance (0–0.1) and high-relevance (0.9–1.0) bins, placing 20% of its predictions in the partial-relevance (0.1–0.9) bins. By doing so, its NDCG@10
improves by 1.8 points on MS MARCO and 2.9 points on BRIGHT, even though the relevance classification metrics presented in Table 4, particularly precision, are generally on par with ReasonRR.

Figure 3: Relevance Scores Distribution for ReasonRR + Self-Consistency on DL19.

So, is reasoning truly necessary for pointwise rerankers? Even with the improvements from ReasonRR + Self-Consistency, ReasonRR still falls behind StandardRR, suggesting that, at least in their current state, reasoning may not be best suited for pointwise reranking schemes. This is especially true when taking into account the lower inference costs of StandardRR versus ReasonRR.

Potential Solutions. Given our results, we believe that to fully realize the benefits of reasoning in pointwise reranking, it is essential to re-design ReasonRR to explicitly consider partial relevance. The promising directions are:
• Training with non-binary relevance scores: Instead of predicting binary relevance, ReasonRR can be trained to generate graded scores (e.g., from 1 to 5). However, the current Rank1 training data (Weller et al., 2025) only provides binary labels, so it will be necessary to develop methods to synthesize realistic data that can accurately reflect partial relevance. We leave this to future work.
• Leveraging reasoning signals: When ReasonRR explicitly indicates partial relevance, through phrases like "somewhat relevant", these signals could be extracted to produce more accurate intermediate scores.
• Score calibration through loss function design: Another approach is to directly train ReasonRR to produce calibrated scores using tailored loss functions, encouraging outputs that reflect various degrees of relevance.

While each of these directions is worth further exploration, they remain open research problems to unlock the full potential of ReasonRR.
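The self-consistency scoring described above (averaging R over sampled reasoning chains) and the "leveraging reasoning signals" direction can be sketched as follows. The function names, the hedge-phrase list, and the scores assigned to hedges are our own illustrative assumptions, not values from the paper:

```python
def self_consistency_score(r_values):
    """ReasonRR + Self-Consistency: average the predicted relevance
    probability R over several sampled reasoning chains (the paper uses
    eight), yielding a continuous score that can fall into the
    partial-relevance (0.1-0.9) bins."""
    return sum(r_values) / len(r_values)

# Hypothetical mapping from hedging phrases in a reasoning chain to
# intermediate scores; phrases and values are illustrative assumptions.
HEDGE_SCORES = {
    "somewhat relevant": 0.5,
    "partially relevant": 0.5,
}

def score_with_reasoning_signals(reasoning, r_value):
    """Override an overconfident R when the chain itself hedges."""
    text = reasoning.lower()
    for phrase, score in HEDGE_SCORES.items():
        if phrase in text:
            return score
    return r_value
```

For the Table 5 example, the chain says the passage "is somewhat relevant" yet predicts R = 0.999; a signal-extraction step of this kind would pull the score back toward the middle of the range.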
Until then, our results suggest that practitioners are better served by standard pointwise rerankers, which are simpler, more efficient, and currently more accurate.

5 Related Work

Reasoning and Retrieval. Recent advances in LLMs have motivated exploration into how reasoning processes can be integrated into retrieval systems. O1 Embedder (Yan et al., 2025) trained an LLM to generate intermediate "thoughts" based on the user query, which were then used to enrich the query representations for dense retrieval. DEBATER (Ji et al., 2025), on the other hand, leveraged an iterative step-by-step reasoning process to learn more effective document representations. Rank1 (Weller et al., 2025) and Rank-R1 (Zhuang et al., 2025) instead focused on reasoning for rerankers, with Rank1 focusing on pointwise rerankers and Rank-R1 on Setwise (Zhuang et al., 2024) rerankers, where the LLM reranker generates reasoning steps before selecting the most relevant document among a set of candidate documents. More recently, ReasonIR (Shao et al., 2025) explored the use of synthetic data to train retrievers for reasoning-intensive retrieval tasks.

Efficient Reasoning. Another line of work parallel to ours has focused on making reasoning models more efficient, studying whether the reasoning chain can be made more concise. Most closely related to our work is Ma et al. (2025), who demonstrated that the reasoning process of current reasoning models is not required for high performance. Another line of work,
as summarized in Sui et al. (2025), has focused on fine-tuning LLMs to reason more efficiently by leveraging variable-length chain-of-thought data (Munkhbat et al., 2025; Xia et al., 2025; Yu et al., 2024; Kang et al., 2025). We highlight that rather than trying to make reasoning models more efficient, our work is primarily focused on questioning the necessity of reasoning for passage reranking, and not on making reasoning rerankers more efficient.

6 Conclusion

In this work, we study whether scaling test-time compute, via generation of reasoning tokens prior to making a relevance prediction, actually improves the accuracy of pointwise rerankers. To do so, we train and evaluate three pointwise rerankers: StandardRR, ReasonRR, and ReasonRR-NoReason. Through experiments across in-domain and out-of-domain datasets, we find that the reasoning process consistently harms the accuracy of pointwise rerankers, especially as LLM size increases. Investigating the root cause of this result, we observe that the reasoning process restricts ReasonRR's ability to capture partial relevance between query-document pairs, which is an important factor for pointwise reranking accuracy. While we explored self-consistency as a potential remedy for this restriction, StandardRR still outperformed ReasonRR + Self-Consistency.

Our findings suggest that the reasoning process is unnecessary for pointwise rerankers and that practitioners are better served with simpler methods like StandardRR. We believe that in order to fully realize the benefits of reasoning, it is essential to re-design how reasoning is utilized by ReasonRR. Some promising directions we discuss include training with loss functions that encourage calibrated scoring or generating synthetic data that elicits relevance scores beyond binary labels. However, we emphasize that any improvements in the training of ReasonRR should also be properly compared against strong and simple baselines.
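As a concrete illustration of the calibrated-scoring direction mentioned in the conclusion, one possible tailored loss is a regression-style objective against graded relevance labels. This is our own assumed design, not a method from the paper; the grade scale is arbitrary:

```python
def calibrated_score_loss(predicted_r, graded_label, max_grade=4):
    """Squared error between the model's relevance probability R in
    [0, 1] and a graded relevance label in {0, ..., max_grade} rescaled
    to [0, 1]. Minimizing this encourages R to take intermediate values
    for partially relevant passages instead of collapsing to 0 or 1."""
    target = graded_label / max_grade
    return (predicted_r - target) ** 2
```

Under such a loss, a partially relevant passage (grade 2 of 4) with an overconfident prediction R = 1.0 is penalized, while R = 0.5 incurs zero loss.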
Limitations

Other Reranking Methods. While we demonstrate that reasoning hurts pointwise rerankers, it remains an open question what influence reasoning may have on other reranking approaches such as listwise (Sun et al., 2023) and setwise (Zhuang et al., 2024). However, as shown in Zhuang et al. (2024), pointwise rerankers are much more efficient as they can rerank candidate passages in parallel (Ma et al., 2024), and thus our study covers a very popular and commonly used approach for passage reranking.

LLM Models and Scales. We limit our study to the Qwen2.5 family of models as it was the primary model used in related work (Zhuang et al., 2025; Weller et al., 2025) and allowed us to control for factors such as LLM scale. However, as future work, it would be interesting to study the impact of reasoning across different model families. Additionally, our experiments were also limited to LLMs with ≤7B parameters. While our results showed that increasing LLM size benefited StandardRR more than ReasonRR, the influence at larger scales remains an open question. We note that our StandardRR at the 7B scale still outperforms or is competitive with the reported results for Rank1-14B and Rank1-32B on BRIGHT, which we believe can mitigate these concerns.

Improvements to ReasonRR. Even though
ReasonRR + Self-Consistency (which was grounded in observations from our analysis) makes strong improvements over ReasonRR, it is still less effective than our StandardRR. While we propose potential solutions to improve ReasonRR, we leave the implementation of these methods as future work. We hope our results and analysis can help in the development of new reasoning pointwise rerankers.

Ethics Statement

Our research solely uses publicly available datasets, and no personal information is collected. All datasets and models are used in accordance with their intended use and licenses. The goal of our study is to better understand the factors that influence the accuracy of LLM rerankers, which we hope can have a positive impact on building better search engines and other applications built on retrieval systems. While our results showed that standard pointwise rerankers, which minimize the output tokens generated by an LLM, outperform more verbose reasoning pointwise rerankers, we do recognize that such systems still rely on LLMs, which means that there is a risk that the LLM can produce biased, harmful, or offensive output.

Acknowledgments

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the MIT Computer Science & Artificial Intelligence Laboratory under Air Force Contract No. FA8702-15-D-0001 or FA8702-25-D-B002. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the MIT Computer Science & Artificial Intelligence Laboratory. ©2025 Massachusetts Institute of Technology. Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S.
Government may violate any copyrights that exist in this work.

References

Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, Jimmy Lin, Ellen M. Voorhees, and Ian Soboroff. 2022. Overview of the TREC 2022 deep learning track. In Text Retrieval Conference.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, and Jimmy J. Lin. 2021a. Overview of the TREC 2021 deep learning track. In Text Retrieval Conference.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. ArXiv, abs/2003.07820.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Fernando Campos, and Ellen M. Voorhees. 2021b. Overview of the TREC 2020 deep learning track. ArXiv, abs/2102.07662.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Hossein A. Rahmani, Daniel Campos, Jimmy Lin, Ellen M. Voorhees, and Ian Soboroff. 2023. Overview of the TREC 2023 deep learning track. In Text Retrieval Conference.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang,
Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Yifan Ji, Zhipeng Xu, Zhenghao Liu, Yukun Yan, Shi Yu, Yishan Li, Zhiyuan Liu, Yu Gu, Ge Yu, and Maosong Sun. 2025. Learning more effective representations for dense retrieval through deliberate thinking before search. arXiv preprint arXiv:2502.12974.

Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou. 2025. C3oT: Generating shorter chain-of-thought without compromising effectiveness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 24312–24320.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.

Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2356–2362.

Wenjie Ma, Jingxuan He, Charlie Snell, Tyler Griggs, Sewon Min, and Matei Zaharia. 2025. Reasoning models can be effective without thinking. arXiv preprint arXiv:2504.09858.

Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2024. Fine-tuning LLaMA for multi-stage text retrieval. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2421–2425.
Sean MacAvaney, Nicola Tonellotto, and Craig Macdonald. 2022. Adaptive re-ranking with a corpus graph. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 1491–1500.

Tergel Munkhbat, Namgyu Ho, Seo Hyun Kim, Yongjin Yang, Yujin Kim, and Se-Young Yun. 2025. Self-training elicits concise reasoning in large language models. arXiv preprint arXiv:2502.20122.

Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718, Online. Association for Computational Linguistics.

Rulin Shao, Rui Qiao, Varsha Kishore, Niklas Muennighoff, Xi Victoria Lin, Daniela Rus, Bryan Kian Hsiang Low, Sewon Min, Wen-tau Yih, Pang Wei Koh, et al. 2025. ReasonIR: Training retrievers for reasoning tasks. arXiv preprint arXiv:2504.20595.

Hongjin Su, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff, Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O. Arik, Danqi Chen, and Tao Yu. 2025. BRIGHT: A realistic and challenging benchmark for reasoning-intensive retrieval. In The Thirteenth International Conference on Learning Representations.

Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, et al. 2025. Stop overthinking: A survey on
efficient reasoning for large language models. arXiv preprint arXiv:2503.16419.

Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agents. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14918–14937.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Orion Weller, Kathryn Ricci, Eugene Yang, Andrew Yates, Dawn Lawrie, and Benjamin Van Durme. 2025. Rank1: Test-time compute for reranking in information retrieval. arXiv preprint arXiv:2502.18418.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.

Heming Xia, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. 2025. TokenSkip: Controllable chain-of-thought compression in LLMs. arXiv preprint arXiv:2502.12067.

Ruiran Yan, Zheng Liu, and Defu Lian. 2025. O1 Embedder: Let retrievers think before action. arXiv preprint arXiv:2502.07555.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. 2024. Distilling System 2 into System 1. arXiv preprint arXiv:2407.06023.
Shengyao Zhuang, Xueguang Ma, Bevan Koopman, Jimmy Lin, and Guido Zuccon. 2025. Rank-R1: Enhancing reasoning in LLM-based document rerankers via reinforcement learning. arXiv preprint arXiv:2503.06034.

Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2024. A setwise approach for effective and highly efficient zero-shot ranking with large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 38–47.

A Dataset Details

We show the number of test queries for each dataset used for evaluation in Table 7.

Dataset | # Queries
TREC DL19 | 43
TREC DL20 | 54
TREC DL21 | 53
TREC DL22 | 76
TREC DL23 | 82
Biology | 103
Earth Science | 116
Economics | 103
Psychology | 101
Robotics | 101
Stackoverflow | 117
Sustainable Living | 108
Leetcode | 142
Pony | 112
AoPS | 111
TheoremQA Questions | 194
TheoremQA Theorems | 76

Table 7: Dataset Details

The above datasets have the following licenses.
• The MS MARCO datasets are intended for non-commercial research purposes.
• BRIGHT is under the CC BY 4.0 license.
• Rank1 training data, described in Section 3.1, is under the MIT License.

B Model Details

• Qwen2.5-1.5B: A 1.5B base model. Huggingface ID: Qwen/Qwen2.5-1.5B
• Qwen2.5-3B: A 3B base model. Huggingface ID: Qwen/Qwen2.5-3B
• Qwen2.5-7B: A 7B base model. Huggingface ID: Qwen/Qwen2.5-7B

The above models have the following licenses.
• Qwen2.5-1.5B is under the Apache 2.0 License.
• Qwen2.5-3B is under the Qwen Research License Agreement.
• Qwen2.5-7B is under the Apache 2.0 License.

Model | DL19 | DL20 | DL21 | DL22 | DL23
BM25 | 50.6 | 48.0 | 44.6 | 26.9 | 26.3
+ Qwen2.5-1.5B
StandardRR | 73.1 | 69.4 | 68.9 | 50.7 | 44.2
ReasonRR | 68.7 | 63.1 | 65.7 | 43.3 | 38.8
ReasonRR-NoReason | 69.9 | 61.7 | 67.3 | 44.9 | 39.5
+ Qwen2.5-3B
StandardRR | 72.5 | 68.9 | 69.4 | 51.4 | 45.5
ReasonRR | 70.4 | 66.4 | 65.9 | 45.2 | 41.3
ReasonRR-NoReason | 71.8 | 63.7 | 66.8 | 47.1 | 41.9
+ Qwen2.5-7B
StandardRR | 74.6 | 70.0 | 70.9 | 50.3 | 46.3
ReasonRR | 70.3 | 64.3 | 65.9 | 45.6 | 41.1
ReasonRR-NoReason | 73.3 | 65.0 | 69.1 | 46.1 | 40.5
ReasonRR + Self-Consistency | 71.5 | 66.7 | 68.8 | 46.0 | 42.9

Table 8: Full results for StandardRR, ReasonRR, ReasonRR-NoReason, and ReasonRR + Self-Consistency. All models rerank the top-100 passages from BM25. (DL19 and DL20 use MS MARCO v1; DL21, DL22, and DL23 use MS MARCO v2.)

We also leverage Pyserini (Lin et al., 2021) and vLLM (Kwon et al., 2023), which are under the Apache 2.0 License.

C Training and Inference Details for StandardRR and ReasonRR

To train StandardRR and ReasonRR, we fine-tune Qwen2.5 using LoRA (Hu et al., 2022) for one epoch with rank 32 and alpha 64, using a batch size of 128 and a learning rate of 2e-4. We apply LoRA to all the linear layers of the transformer model. Note that to train StandardRR we leverage the same dataset as ReasonRR, but only use the (query, passage, relevance label) triples, ignoring the R1 reasoning process. Training each reranker took less than a day and was done on an A100 GPU. Due to limited computational resources, each model is only trained once. For inference, we run all models on NVIDIA A6000 (48GB) and A100 (80GB) GPUs. As StandardRR and ReasonRR are run with greedy decoding, all the scores in the paper are from a single run.

D Full Results for ReasonRR-NoReason and ReasonRR + Self-Consistency

In this section, we provide the full results for ReasonRR-NoReason and ReasonRR + Self-Consistency across the MS MARCO and BRIGHT datasets. These results can be found in Table 8 and Table 9.

Columns: StackExchange (Bio., Earth., Econ., Psy., Rob., Stack., Sus.), Coding (Leet., Pony), Theorem-based (AoPS, TheoQ., TheoT.), and Avg.

Model | Bio. | Earth. | Econ. | Psy. | Rob. | Stack. | Sus. | Leet. | Pony | AoPS | TheoQ. | TheoT. | Avg.
BM25 + GPT-4 CoT | 53.6 | 54.1 | 24.3 | 38.7 | 18.9 | 27.7 | 26.3 | 19.3 | 17.6 | 3.9 | 19.2 | 20.8 | 27.0
+ Qwen2.5-1.5B
StandardRR | 37.0 | 21.7 | 16.8 | 23.1 | 16.1 | 10.0 | 26.3 | 2.6 | 30.6 | 1.8 | 16.1 | 26.1 | 19.0
ReasonRR | 32.5 | 20.3 | 12.3 | 25.5 | 11.1 | 15.3 | 23.5 | 6.6 | 12.3 | 3.4 | 10.6 | 13.7 | 15.6
ReasonRR-NoReason | 23.1 | 15.3 | 10.4 | 13.3 | 10.4 | 6.2 | 7.0 | 4.4 | 11.3 | 3.1 | 12.0 | 23.2 | 11.6
+ Qwen2.5-3B
StandardRR | 41.6 | 27.1 | 20.9 | 31.9 | 22.2 | 16.9 | 30.3 | 13.2 | 42.0 | 2.7 | 16.2 | 30.6 | 24.6
ReasonRR | 37.3 | 27.8 | 20.7 | 33.1 | 18.3 | 24.3 | 25.2 | 11.3 | 26.2 | 4.7 | 20.7 | 34.0 | 23.6
ReasonRR-NoReason | 40.8 | 20.5 | 20.3 | 31.9 | 14.0 | 15.3 | 23.3 | 18.7 | 37.3 | 3.7 | 24.6 | 31.1 | 23.4
+ Qwen2.5-7B
StandardRR | 47.1 | 38.0 | 28.1 | 44.1 | 26.1 | 29.5 | 36.5 | 19.3 | 37.5 | 4.6 | 22.4 | 39.4 | 31.0
ReasonRR | 47.0 | 35.4 | 24.0 | 35.2 | 20.0 | 25.2 | 31.0 | 15.1 | 36.0 | 5.9 | 22.2 | 36.6 | 27.8
ReasonRR-NoReason | 56.0 | 41.9 | 27.5 | 38.5 | 23.2 | 21.6 | 32.7 | 16.3 | 39.4 | 7.2 | 27.2 | 38.0 | 30.8
ReasonRR + Self-Consistency | 49.6 | 38.2 | 27.4 | 40.9 | 23.7 | 29.3 | 33.2 | 14.9 | 38.4 | 8.1 | 25.4 | 39.1 | 30.7
+ Rank1-7B (Our Results) | 48.0 | 37.2 | 21.8 | 35.1 | 19.9 | 22.6 | 31.0 | 12.7 | 30.8 | 6.8 | 26.0 | 38.2 | 27.5
+ Rank1-7B (Reported Results) | 48.8 | 36.7 | 20.8 | 35.0 | 22.0 | 18.7 | 36.2 | 12.7 | 31.2 | 6.3 | 23.7 | 37.8 | 27.5

Table 9: Full results for StandardRR, ReasonRR, ReasonRR-NoReason, and ReasonRR + Self-Consistency. All models rerank the top-100 passages from BM25 + GPT-4 CoT.

Model | MS MARCO | BRIGHT
StandardRR | 62.4 | 31.0
ReasonRR-NoReason | 58.8 | 30.8
ReasonRR | 57.4 | 27.8
+ Self-Consistency (3 samples) | 59.1 | 30.6
+ Self-Consistency (8 samples) | 59.2 | 30.7

Table 10: Influence of the number of sampled chains for ReasonRR + Self-Consistency.

E Comparison with Rank1 (Weller et al., 2025)

We also provide the results for Rank1 to ensure that our reproduction, ReasonRR, is valid. These results are in Table 9. We found that the Rank1 paper used an earlier edition of BRIGHT, which had minor differences in queries and judged documents. Thus, we report Rank1 results on the new BRIGHT, Rank1 (Our Results), as well as the original paper's reported results, Rank1 (Reported Results). Comparing ReasonRR for Qwen2.5-7B to Rank1-7B, both are comparable in terms of NDCG@10 (27.8 versus 27.5).

F Number of Sampled Outputs for ReasonRR + Self-Consistency

In this section, we present the results of ReasonRR + Self-Consistency when we only sample 3 reasoning chains from ReasonRR. The results are shown in Table 10. We find that ReasonRR + Self-Consistency (n=3) is as effective as ReasonRR + Self-Consistency (n=8), suggesting that sampling more reasoning chains is not more effective for ReasonRR.

G Prompts

For training and evaluation of StandardRR, ReasonRR, ReasonRR-NoReason, and ReasonRR + Self-Consistency, we leverage the exact same prompts used in the Rank1 (Weller et al., 2025) paper, but apply the Qwen2.5 (Yang et al., 2024) chat template. Below we repeat the baseline prompt. For the dataset-specific prompts we used, please refer to Weller et al. (2025).

StandardRR Standard Prompt:
<|im_start|>system
Determine if the following passage is relevant to the query.
Answer only with 'true' or 'false'.
<|im_end|>
<|im_start|>user
Query: {}
Passage: {}
<|im_end|>
<|im_start|>assistant

ReasonRR Standard Prompt:
<|im_start|>system
Determine if the following passage is relevant to the query. Answer only with 'true' or 'false'.
<|im_end|>
<|im_start|>user
Query: {}
Passage: {}
<|im_end|>
<|im_start|>assistant
<think>

ReasonRR-NoReason Standard Prompt:
<|im_start|>system
Determine if the following passage is relevant to the query. Answer only with 'true' or 'false'.
<|im_end|>
<|im_start|>user
Query: {}
Passage: {}
<|im_end|>
<|im_start|>assistant
<think> Okay, I have finished thinking. </think>
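All three prompts end with the model answering 'true' or 'false', and passages are ranked by the probability of relevance R derived from those two answer tokens. A minimal sketch of that scoring step follows; the helper name and the example logit values are ours, and how the two logits are actually obtained depends on the tokenizer and serving stack:

```python
import math

def relevance_probability(logit_true: float, logit_false: float) -> float:
    """Two-way softmax over the logits of the 'true' and 'false' answer
    tokens, giving a probability of relevance R in (0, 1)."""
    m = max(logit_true, logit_false)  # subtract the max for numerical stability
    exp_true = math.exp(logit_true - m)
    exp_false = math.exp(logit_false - m)
    return exp_true / (exp_true + exp_false)

# Rank candidate passages by R, descending (hypothetical logit pairs).
candidates = {"p1": (2.1, -0.3), "p2": (-1.0, 1.5), "p3": (0.2, 0.1)}
ranking = sorted(candidates,
                 key=lambda p: relevance_probability(*candidates[p]),
                 reverse=True)
```

Because R is continuous, it can express partial relevance in principle; the analysis in the paper shows that ReasonRR's reasoning chains push R toward the extremes anyway.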
arXiv:2505.16888v1 [cs.CR] 22 May 2025

CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework

Viet Pham, Independent Researcher (viethungpham0304@gmail.com)
Thai Le, Indiana University (tle@iu.edu)

Abstract

Large language models (LLMs) have advanced many applications, but are also known to be vulnerable to adversarial attacks. In this work, we introduce a novel security threat: hijacking AI-human conversations by manipulating LLMs' system prompts to produce malicious answers only to specific targeted questions (e.g., "Who should I vote for US President?", "Are Covid vaccines safe?"), while behaving benignly toward others. This attack is detrimental as it can enable malicious actors to exercise large-scale information manipulation by spreading harmful but benign-looking system prompts online. To demonstrate such an attack, we develop CAIN, an algorithm that can automatically curate such harmful system prompts for a specific target question in a black-box setting, i.e., without the need to access the LLM's parameters. Evaluated on both open-source and commercial LLMs, CAIN demonstrates significant adversarial impact. In untargeted attacks, or forcing LLMs to output incorrect answers, CAIN achieves up to 40% F1 degradation on targeted questions while preserving high accuracy on benign inputs. For targeted attacks, or forcing LLMs to output specific harmful answers, CAIN achieves over 70% F1 scores on these targeted responses with minimal impact on benign questions. Our results highlight the critical need for enhanced robustness measures to safeguard the integrity and safety of LLMs in real-world applications. All source code will be publicly available.
1 Introduction

Large Language Models (LLMs) have revolutionized natural language understanding and decision-making, significantly enhancing user experience in question answering, dialogue systems, and reasoning, and attracting millions of users worldwide (Hoffmann et al., 2022; Touvron et al., 2023; OpenAI et al., 2024; Qwen et al., 2025; DeepSeek-AI et al., 2025).

Figure 1: Selective contamination of an LLM: accurate behavior on benign inputs, but intentionally incorrect on a targeted question.

Their widespread deployment and adoption in various business products and daily tasks raises critical, much-needed attention to their reliability and security. Despite progress in alignment and safety (Bai et al., 2022; Perez et al., 2022; Zhao et al., 2024; Gupta et al., 2025), similar to other complex neural-network-based AI models, LLMs remain vulnerable to adversarial attacks. Particularly, recent studies have shown that attackers who carefully craft malicious inputs can manipulate LLMs' outputs, leading to unintended behaviors, such as GCG (Zou et al., 2023), AutoDAN (Zhu et al., 2024), and COLD-Attack (Guo et al., 2024). However, these attacks are often limited to jailbreaking tasks or to influencing LLMs' responses broadly without conditioning on any specific input, with prior works claiming that they are also easy to detect and defend, as argued by Jain et al.
(2023). In this work, we identify and investigate a new class of security threat to LLMs: targeted input manipulation, in which LLMs are manipulated via malicious system prompts to remotely hijack AI-human conversations by inducing incorrect or harmful responses to specific, targeted questions while maintaining correct answers to benign queries (Figure 1). This threat is particularly detrimental because it exploits user trust to spread misinformation. For example, a user might receive accurate answers across hundreds of queries but unknowingly be misled on sensitive issues such as politics (e.g., "Who should I vote for as U.S. President?"), medicine (e.g., "Are COVID vaccines dangerous?"), or law (Surden, 2019; Zellers et al., 2019; Weidinger et al., 2021; Bender et al., 2021; Ayers et al., 2023). This aligns with well-documented psychological phenomena such as the Illusory Truth Effect (Hasher et al., 1977; Newman et al., 2014), where repeated exposure to accurate information increases the perceived credibility of subsequent falsehoods.

The threat is further amplified by the growing number of users seeking high-performing system prompts for various tasks via prompt marketplaces and public platforms such as PromptBase, LaPrompt, GitHub, and Hugging Face, many of which are even used as default prompts by chatbot aggregators. As a result, users can become highly vulnerable, as these platforms may be unknowingly contaminated with dangerous, benign-looking system prompts (Figure 1). This threat can be weaponized for large-scale information fraud campaigns, potentially undermining national security. Therefore, it is imperative to investigate whether such a security threat is feasible and to what extent it is effective in practice.
Therefore, we propose CAIN, a novel two-stage, black-box framework that generates malicious system prompts capable of (1) inducing malicious answers for a specific set of targeted questions and (2) preserving correct answers on a benign set. Our key contributions are as follows:
1. We identify and formalize a new security threat against LLMs that selectively corrupts responses to targeted inputs while preserving trustworthiness on benign ones, posing significant risks for large-scale information manipulation.
2. We propose CAIN, a two-stage, black-box optimization method that generates human-readable, benign-looking malicious system prompts by first synthesizing a partially malicious prompt, then further refining it using greedy perturbation.
3. We provide comprehensive empirical validation demonstrating CAIN's effectiveness and transferability across multiple open-source and commercial LLMs under various scenarios, including targeted and untargeted attacks.

2 Related Works

Prompt Optimization for Model Control. Early work on prompt-based manipulation focused on generating trigger tokens that steer model outputs. HotFlip (Ebrahimi et al., 2017), UAT (Wallace et al., 2019), and AutoPrompt (Shin et al., 2020) utilize gradient-based or search-based approaches to generate adversarial prompts or text inputs. These techniques show a strong influence on model predictions but require white-box access to the target model's parameters, rendering them infeasible for commercial black-box LLMs.

Automated Adversarial Attacks on LLMs. These attacks aim to generate stealthy suffixes, applied mostly to the "jailbreaking" threat model, i.e., bypassing safeguards to perform malicious instructions; they include AdvPrompter (Paulus et al.,
2024), AutoDAN (Zhu et al., 2024), ECLIPSE (Jiang et al., 2025), GASP (Basani and Zhang, 2024), and COLD-Attack (Guo et al., 2024; Qin et al., 2022). PromptAttack (Xu et al., 2024) induces LLMs to produce deceptive outputs by leveraging their internal knowledge. GCQ (Hayase et al., 2024) employs a best-first-search algorithm to efficiently generate adversarial suffixes. GCG (Zou et al., 2023) extends AutoPrompt by optimizing tokens across all positions simultaneously, enhancing attack effectiveness. Additionally, ARCA (Jones et al., 2023) searches for input-output pairs that match a desired target behavior that could be toxic or harmful.

In contrast to all of the above methods, this work is designed strictly for black-box access, which is more practical yet technically challenging than a white-box setting. Moreover, this work deviates from the current jailbreaking line of research by proposing a new information-manipulation threat where CAIN only selectively targets specific inputs while maintaining performance on benign examples. This is distinguished from jailbreaking, where a set of malicious instructions is jointly optimized, which can provide less noisy signals than attacking a single target question.

3 Problem Formulation

3.1 Threat Model

This section describes a comprehensive threat model in which malicious actors can compromise the reliability of LLMs in question-answering tasks. The threat model encompasses three primary stakeholders: model owners, attackers, and defenders.
Figure 2: Overview of the proposed CAIN framework with two stages. Stage 1: Human-readable Malicious Prompt Initialization using target and benign questions; Stage 2: Greedy Word-Level Optimization to improve attack performance while maintaining benign performance.

Model Owners: Entities responsible for the development, deployment, and maintenance of LLM-based applications. Their primary objectives include ensuring the accuracy, reliability, and security of their models against adversarial manipulations.

Attackers: Malicious actors who exploit vulnerabilities by crafting malicious system prompts designed to satisfy the following criteria: (1) Malicious Behavior: produce incorrect (in untargeted attacks) or targeted answers (in targeted attacks) for a specific question; (2) Benign Behavior: ensure that the adversarial prompt maintains high performance on a benign set that includes non-targeted questions, thereby avoiding detection through degraded performance on general inputs; and (3) Stealthiness: design the prompt to appear innocuous to end users, preventing detection and removal by model owners or defenders.

Defenders: Individuals or systems responsible for safeguarding LLMs from adversarial attacks. Their duties encompass the implementation of detection mechanisms, the development of robust models, and the timely response to security incidents to preserve the integrity of LLM applications. We later discuss potential defense approaches to our attack algorithm in Sec. 7.
3.2 Objective Function

Our goal is to craft a malicious prompt p* that induces incorrect or harmful behaviors on the targeted input Q_t while preserving correct behavior on the benign input Q_b. To improve robustness, we expand Q_t by generating paraphrased variants
for each target question using GPT-4o, ensuring the attack generalizes across paraphrases. This goal must be achieved in a black-box setting, where we can only access the outputs of a targeted LLM f. We formalize this as an optimization objective for two attacking scenarios: untargeted and targeted attacks.

Untargeted Attack. The attacker maximizes performance degradation (e.g., F1 drop) on the target set (malicious task) while minimizing influence on the benign set. We formulate this objective using the cross-entropy loss:

$$\mathcal{L} = \mathbb{E}_{(q_b,y_b)\sim\mathcal{Q}_b}\,\underbrace{\mathrm{CE}\!\left(f(p^*+q_b),\,y_b\right)}_{\text{Benign Answer}} \;-\; \mathbb{E}_{(q,y)\sim\mathcal{Q}_t}\,\underbrace{\mathrm{CE}\!\left(f(p^*+q),\,y\right)}_{\text{Malicious Answer}} \qquad (1)$$

Targeted Attack. The attacker aims to force the model into producing a specific incorrect answer y_t for questions in Q_t. The loss function rewards generating y_t, penalizes generating the correct answer y of the target question q ∈ Q_t, and preserves high performance on the benign set Q_b. The objective becomes:

$$\mathcal{L} = \mathbb{E}_{(q,y_t)\sim\mathcal{Q}_t}\,\underbrace{\mathrm{CE}\!\left(f(p^*+q),\,y_t\right)}_{\text{Targeted Malicious Answer}} \;-\; \mathbb{E}_{(q,y)\sim\mathcal{Q}_t}\,\underbrace{\mathrm{CE}\!\left(f(p^*+q),\,y\right)}_{\text{Targeted Correct Answer}} \;+\; \mathbb{E}_{(q_b,y_b)\sim\mathcal{Q}_b}\,\underbrace{\mathrm{CE}\!\left(f(p^*+q_b),\,y_b\right)}_{\text{Benign Answer}} \qquad (2)$$

Objective Function. In both attack scenarios, our objective function becomes:

$$\min_{p^*}\ \mathcal{L} \quad \text{s.t.} \quad \mathrm{similarity}(p^*, q^*) \le \alpha, \qquad (3)$$

where similarity(p*, q*) denotes the semantic similarity between the malicious prompt p* and the target question q*. Intuitively, we want to minimize such similarity, or limit potential leakage of malicious intention into the optimized system prompt, making it more stealthy.

4 Proposed Attack Framework: CAIN

We introduce CAIN, a black-box, two-stage adversarial prompt optimization framework designed to selectively degrade a target LLM's performance on targeted questions while preserving accuracy on benign inputs (Fig. 2).
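Since the setting is black-box, the cross-entropy terms in Eqs. (1)-(2) reduce to negative log-likelihoods of whole answers under the model. The sketch below treats those likelihoods as plain floats; the probability inputs are hypothetical stand-ins for whatever answer-scoring the attacker can obtain.

```python
import math


def cross_entropy(prob_of_answer: float) -> float:
    """Negative log-likelihood of a gold answer; stand-in for the CE terms."""
    return -math.log(max(prob_of_answer, 1e-12))


def untargeted_loss(benign_probs, target_probs):
    """Eq. (1): keep benign gold answers likely, make target gold answers unlikely."""
    ce_benign = sum(map(cross_entropy, benign_probs)) / len(benign_probs)
    ce_target = sum(map(cross_entropy, target_probs)) / len(target_probs)
    return ce_benign - ce_target  # the attacker minimizes this


def targeted_loss(p_target_answer, p_correct_answer, benign_probs):
    """Eq. (2): reward the chosen wrong answer y_t, penalize the correct y,
    and preserve benign behavior."""
    ce_benign = sum(map(cross_entropy, benign_probs)) / len(benign_probs)
    return (cross_entropy(p_target_answer)
            - cross_entropy(p_correct_answer)
            + ce_benign)
```

A prompt that makes benign answers likely and target answers unlikely drives the untargeted loss negative, which is exactly the direction the minimization pushes.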
In the first stage, CAIN maximizes adversarial effectiveness by employing an automatic sentence-level prompt generation module to initialize a human-readable, coherent prompt for the Q&A task with some, but not necessarily strong, malicious effect. Subsequently, a greedy word-level perturbation is used to further optimize the resulting prompt by perturbing critical tokens with five different perturbation techniques to enhance its adversarial impact. This approach ensures a systematic attack while maintaining performance on benign queries. Alg. 1 depicts the CAIN algorithm with its two stages as follows.

4.1 Stage 1: Malicious Prompt Initialization

The first stage generates a partially malicious system prompt p*_0 that selectively induces incorrect responses on a predefined target set while maintaining high performance on benign queries. Inspired by AutoPrompt (Levi et al., 2024), we propose its adversarial version, called AdvAutoPrompt, a black-box, iterative optimization process that uses GPT-4o to iteratively refine the system prompt by maximizing a score s* (Alg. 1, Ln. 3). The process includes three modules:

Evaluator computes the current prompt p_i's score s*_i at iteration i:

$$s^*_i = \mathbb{E}_{(q_b,y_b)\sim\mathcal{Q}_b}\,\mathrm{F1}\!\left(f(p_i+q_b),\,y_b\right) \qquad (4)$$
$$\phantom{s^*_i =}\ -\ \mathbb{E}_{(q,y)\sim\mathcal{Q}_t}\,\mathrm{F1}\!\left(f(p_i+q),\,y\right), \qquad (5)$$

where f is the GPT-4o model. Intuitively, we want to improve the generative response, measured by the standard F1 score for the Q&A task, on the benign set, and to decrease that F1 score on the target set (which includes one targeted question and 10 paraphrases).

Analyzer receives the prompt score s*_i and a set of incorrectly predicted examples from the benign set as additional feedback as input to GPT-4o
to analyze performance failures and generate insights for improving prompt quality.

Prompt Generator iteratively generates a new prompt using the history of previously generated ones, their corresponding scores, and the analysis. The goal is to improve adversarial effectiveness by combining insights from the past. After a maximum of t iterations, the prompt with the highest adversarial score is selected as the initial malicious prompt p*_0.

Algorithm 1 Adversarial Prompt Optimization
1: Input: a hand-crafted system prompt p_0, maximum number of perturbed words max_perturbs, Q_t = {q, y}, and Q_b = {q_b, y_b}
2: Output: optimized malicious prompt p*
3: p*_0 = AdvAutoPrompt(p_0, Q_t, Q_b)
4: L_0 = L(p*_0, Q_t, Q_b)
5: I ← {}
6: for w_j ∈ p*_0 do
7:   p*_{\w_j} = [w_1, ..., w_{j−1}, [MASK], w_{j+1}, ..., w_n]
8:   I_{w_j} = L_0 − L(p*_{\w_j}, Q_t, Q_b)
9:   I[j] = I_{w_j}
10: end for
11: n_perturbs ← 0; f ← filtered words
12: while n_perturbs ≤ max_perturbs and w_j ∉ f do
13:   w*_j = getBestPerturbation(w_j)
14:   dummy = replace(p*_0, w_j, w*_j)
15:   L_p = L(dummy, Q_t, Q_b)
16:   if L_p < L_0 then update p* ← dummy
17:   if if_success(p*, Q_t, Q_b) then return p*
18: end while
19: return p*

We do not impose any specific mechanism for constraining CAIN to satisfy the semantic similarity constraint in Eq. (3), because we observed hardly any leakage of information from the target questions into our malicious prompts via AdvAutoPrompt. We later confirm our prompts' stealthiness in Table 4 and Sec. 7.

4.2 Stage 2: Greedy Word-Level Optimization

Since AutoPrompt is originally designed to curate a system prompt for an overarching task like Q&A, generating a malicious prompt as a whole via AdvAutoPrompt that is optimal for a specific target question is both noisy and inefficient due to the unlimited search space of all possible sentences. Although p*_0 can achieve the attack objective with some effectiveness, further refinement via Stage 2 is required to maximize its adversarial impact.
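Two pieces of Alg. 1 can be sketched directly: the leave-one-out importance scores of lines 6-10 (Eq. (6) in Sec. 4.2.1) and the success check of line 17 (Sec. 4.2.2). Here `loss_fn` is a hypothetical stand-in for the black-box attack loss L.

```python
def word_importance(prompt_words, loss_fn):
    """Alg. 1, lines 6-10: score each word by how much masking it changes
    the attack loss, I_{w_j} = L_0 - L(p* with w_j masked)."""
    base_loss = loss_fn(prompt_words)
    importance = {}
    for j in range(len(prompt_words)):
        masked = prompt_words[:j] + ["[MASK]"] + prompt_words[j + 1:]
        importance[j] = base_loss - loss_fn(masked)
    return importance  # words whose score is far from zero matter most


def if_success(target_f1s, benign_f1s, k, m,
               incorrect_f1=0.2, correct_f1=0.45):
    """Alg. 1, line 17: succeed when at least k target questions are fooled
    (F1 <= 0.2) and at least m benign answers stay correct (F1 >= 0.45),
    using the cut-offs given in Sec. 4.2.2."""
    fooled = sum(f <= incorrect_f1 for f in target_f1s)
    kept = sum(f >= correct_f1 for f in benign_f1s)
    return fooled >= k and kept >= m
```

With a toy loss that only cares whether one keyword survives, masking that keyword is the only move that shifts the loss, so it alone receives a nonzero importance score.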
4.2.1 Compute Word Importance Score

Before we can exercise greedy word-level optimization, we need to determine which word to optimize first. Thus, we approximate the importance of each word within the prompt p*_0 to the model's behavior. This is achieved by iteratively removing each word and measuring its impact on the model's loss under the current attack (Alg. 1, Ln. 6-10):

$$I_{w_i} = \mathcal{L} - \mathcal{L}_{\setminus w_i}, \qquad (6)$$

where L is either the untargeted loss in Eq. 1 or the targeted loss in Eq. 2.

4.2.2 Iterative Token Perturbations

Next, we refine the current malicious system prompt by applying perturbations to its most influential words, as identified in the previous step. Specifically, we apply five types of perturbations found in the adversarial-text literature (Jin et al., 2019; Gao et al., 2018), including (1) Random Split, which splits a word into two separate words at a random position; (2) Random Swap, which swaps the positions of two randomly selected characters within a word; (3) Substitute Keyboard, which replaces a character with a neighboring character on a QWERTY keyboard; and (4) Substitute Synonym, which replaces a word with one of its synonyms using WordNet (Miller, 1994) (Alg. 1, Ln. 13). For each word, we then select and retain only the perturbation that best minimizes the respective loss
for the next iteration. This ensures that all perturbations enhance adversarial effectiveness without significantly degrading performance on benign examples (Alg. 1, Ln. 16). For the stopping criteria, we evaluate at each iteration whether the maximum allowable number of perturbed words has been reached or whether the attack is successful (Alg. 1, Ln. 17). We define a successful attack only when the current optimized prompt p* fools the target LLM on at least k questions in the target set Q_t and maintains at least m correct answers in the benign set Q_b. Based on our observations, an answer is considered incorrect if F1 ≤ 0.2 and correct if F1 ≥ 0.45.

5 Experiments: Untargeted Attack

5.1 Setup

Dataset and Data Sampling (by the Attackers). We used TriviaQA (Joshi et al., 2017) (the rc.wikipedia validation subset) without context for all experiments. CAIN randomly samples 100 correctly answered questions from each target LLM (when a manual system prompt is used) to construct the target subset Q_t, and 10 correct + 10 incorrect QA pairs to construct the benign set Q_b. Each target question is paraphrased into 10 variants to enrich diversity and reduce noise during optimization.

Generalizability Evaluation. Separate from the attack process, we construct additional, non-overlapping subsets for post-attack evaluation:

• Benign Evaluation: We construct five different benign subsets (each with 200 QA pairs, 100 correct + 100 incorrect), resulting in 1,000 examples, to evaluate performance preservation on unseen benign questions.

• Malicious Evaluation: For each q ∈ Q_t, we generate 100 unseen paraphrases to assess the generalization of the optimized prompts in practice, when users might ask the target question in different ways.

Metrics.
We use two sets of metrics: (1) Predictive F1 and Exact Match (EM), standard Q&A metrics measuring the partial and exact correctness of model predictions against ground truths; and (2) performance gaps ∆F1 and ∆EM, which measure the difference in performance between the benign and malicious tasks (e.g., ∆F1 = F1_benign − F1_malicious). A higher ∆F1/∆EM indicates a stronger attack, meaning a greater performance drop on the target set with minimal loss on the benign set.

Target LLMs and Attack Baselines. We evaluate on six open-source LLMs, including Llama2 (Touvron et al., 2023), Llama3.1 (Grattafiori et al., 2024), Deepseek (DeepSeek-AI et al., 2025), Qwen (Qwen et al., 2025), and Pythia (Biderman et al., 2023), with the following black-box attack baselines:

• No system prompt (NSP): Questions are fed to LLMs without any instructions.
• Manual: A hand-crafted Q&A system prompt.
• AdvAutoPrompt (AAP): A partially malicious prompt produced by our customized adversarial version of AutoPrompt (Levi et al., 2024), formulated in Sec. 4.1.
• CAIN: Our proposed attack method, which combines AAP with greedy word-level optimization.

5.2 Results

Table 1 reports F1 and EM on the Benign and Malicious Evaluation sets.

| Model | Prompt | Benign F1↑ | Benign EM↑ | Malicious F1↓ | Malicious EM↓ | ∆F1↑ | ∆EM↑ |
|---|---|---|---|---|---|---|---|
| Llama2-7B | NSP | 66.48 | 56.10 | 61.00 | 61.00 | 5.48 | -4.90 |
| | Manual | 73.09 | 68.90 | 54.00 | 54.00 | 19.09 | 14.90 |
| | AAP | 66.31 | 58.88 | 79.19 | 73.23 | -12.88 | -14.35 |
| | CAIN | 63.84 | 56.14 | 33.36 | 28.20 | 30.48 | 27.94 |
| Llama2-13B | NSP | 76.29 | 67.70 | 97.10 | 95.00 | -20.81 | -27.30 |
| | Manual | 85.00 | 82.60 | 96.50 | 94.00 | -11.50 | -11.40 |
| | AAP | 82.14 | 78.72 | 82.46 | 74.30 | -0.32 | 3.92 |
| | CAIN | 66.77 | 57.14 | 32.66 | 18.89 | 34.11 | 38.15 |
| Deepseek-7B | NSP | 56.42 | 48.90 | 100.00 | 100.00 | -43.58 | -51.10 |
| | Manual | 52.11 | 49.80 | 100.00 | 100.00 | -47.89 | -50.20 |
| | AAP | 52.49 | 42.11 | 69.71 | 58.14 | -17.22 | -16.03 |
| | CAIN | 43.99 | 31.75 | 28.15 | 16.33 | 15.84 | 15.42 |
| Qwen2.5-7B | NSP | 70.33 | 65.30 | 82.12 | 81.36 | -11.79 | -16.06 |
| | Manual | 56.74 | 49.10 | 95.47 | 95.00 | -38.73 | -45.90 |
| | AAP | 56.06 | 45.72 | 53.67 | 43.90 | 2.39 | 1.82 |
| | CAIN | 50.31 | 39.20 | 34.94 | 23.92 | 15.37 | 15.28 |

Table 1: Performance comparison when attacking various target LLMs. Bold and underlined values indicate the best and second-best ∆F1/EM improvements, respectively.

Figure 3: Ablation study on varying the threshold k and the number of benign questions on Llama2-13B.

Key findings include: (1) CAIN consistently demonstrates superior adversarial performance on malicious tasks across models, with notably low F1 and EM scores, even on paraphrased versions of the target question; (2) AAP exhibits strong malicious F1 compared to Manual on most target LLMs, although it increased malicious scores on Llama2-7B; and (3) NSP and Manual prompts show strong robustness on both benign and malicious tasks, with high F1 scores.

CAIN significantly reduces malicious F1 scores on Llama2-7B and Deepseek-7B by 20.64 and 71.85 points, respectively, while only modestly affecting benign F1 (drops of 9.24 and 8.12), compared to Manual (Table 1). For Llama2-13B and Qwen2.5-7B, benign performance drops by about 20 points, possibly due to the limited number of benign samples. However, Figure 3 (right) shows that increasing the number of benign questions does not improve performance, which we leave for future study. In contrast, Manual and AAP exhibit inconsistent attack performance.
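The metrics of Sec. 5.1 can be sketched concretely. We assume the standard SQuAD-style token-overlap F1 and normalized exact match here, since the paper does not spell out its exact normalization.

```python
from collections import Counter


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 when the (case/whitespace-normalized) strings agree exactly."""
    return float(prediction.strip().lower() == ground_truth.strip().lower())


def qa_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer."""
    pred, gold = prediction.lower().split(), ground_truth.lower().split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)


def delta(benign_score: float, malicious_score: float) -> float:
    """Performance gap, e.g. dF1 = F1_benign - F1_malicious (higher = stronger attack)."""
    return round(benign_score - malicious_score, 2)
```

Plugging in the Llama2-7B CAIN row of Table 1 (benign 63.84 F1, malicious 33.36 F1) reproduces the reported ∆F1 of 30.48.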
AAP increases malicious F1 on Llama2-7B (79.19), while Manual fails to degrade malicious accuracy (e.g., 100 F1 on Deepseek-7B).

6 Experiments: Targeted Attack

6.1 Setup

Dataset and Data Sampling (by the Attackers). We use six categories from TruthfulQA (Lin et al., 2022): Misconceptions, Conspiracy, Stereotype, Health, Politics, and History. The statistics are shown in Table A1 (Appendix). For each category, we randomly select 10 questions as targets for attacking and paraphrase each into 20 versions (10 used for attacking, 10 for evaluating the generalization of the optimized prompts), except for Politics, where only five Q&A pairs are available.

Generalizability Evaluation. Separate from the attack process, we construct additional, non-overlapping subsets for post-attack evaluation:

• Benign Evaluation: For each category, 5 other questions are selected and paraphrased into 50 variants to evaluate generalization on unseen but non-targeted queries.
• Malicious Evaluation: The other 10 paraphrased versions of each target question, as mentioned above, are used for evaluation.

Metrics. In addition to F1/EM, the aggregated performance is computed as Ψ_F1 = (1/2)(F1_benign + F1_malicious), where higher values indicate a stronger balance between attack success and benign preservation.

Target LLMs and Attack Baselines. We use the same attack baselines as in the untargeted attack setting (§5). All experiments are conducted on Deepseek-7B, Qwen2.5-7B, and Llama2-7B.¹

Evaluation Settings. Models are prompted to select one answer under answer formats of increasing difficulty: (1) Two options (A or B), (2)
Four options (A, B, C, or D), and (3) Free-form text (no explicit choices are provided). We use the two-option format during attacking and transfer the resulting malicious prompts to the four-option and free-form settings for evaluation. Figure A1 illustrates the input format for two options.

¹Due to space constraints, Llama2-7B's results are in the Appendix.

6.2 Results

We report results for the two-option setting and its transfers to the four-option and free-form settings in Table 2.

Two options:

| Model | Prompt | Benign F1 | Benign EM | Malicious F1 | Malicious EM | Ψ F1 | Ψ EM |
|---|---|---|---|---|---|---|---|
| Deepseek-7B | NSP | 53.12 | 51.67 | 39.38 | 37.82 | 46.25 | 44.75 |
| | Manual | 26.67 | 26.67 | 34.67 | 34.55 | 30.67 | 30.61 |
| | AAP | 52.75 | 45.32 | 49.66 | 44.36 | 51.20 | 44.84 |
| | CAIN | 55.29 | 46.47 | 58.92 | 54.00 | 57.11 | 50.23 |
| Qwen2.5-7B | NSP | 52.33 | 52.33 | 51.33 | 51.09 | 51.83 | 51.71 |
| | Manual | 58.39 | 52.67 | 41.52 | 40.36 | 49.95 | 46.52 |
| | AAP | 66.24 | 66.24 | 36.34 | 35.82 | 51.29 | 51.03 |
| | CAIN | 62.76 | 61.75 | 73.03 | 72.07 | 67.89 | 66.91 |

Two options → Four options:

| Model | Prompt | Benign F1 | Benign EM | Malicious F1 | Malicious EM | Ψ F1 | Ψ EM |
|---|---|---|---|---|---|---|---|
| Deepseek-7B | NSP | 28.24 | 26.00 | 25.13 | 24.36 | 26.68 | 25.18 |
| | Manual | 16.83 | 16.00 | 34.85 | 34.73 | 25.84 | 25.36 |
| | AAP | 32.14 | 25.75 | 35.45 | 30.18 | 33.80 | 27.96 |
| | CAIN | 31.73 | 28.69 | 43.92 | 43.00 | 37.83 | 35.84 |
| Qwen2.5-7B | NSP | 57.00 | 57.00 | 54.06 | 54.00 | 55.53 | 55.50 |
| | Manual | 49.67 | 49.67 | 44.94 | 44.73 | 47.30 | 47.20 |
| | AAP | 55.61 | 55.59 | 36.55 | 36.00 | 46.08 | 45.80 |
| | CAIN | 56.45 | 56.38 | 70.75 | 70.34 | 63.60 | 63.36 |

Two options → Free-form:

| Model | Prompt | Benign F1 | Benign EM | Malicious F1 | Malicious EM | Ψ F1 | Ψ EM |
|---|---|---|---|---|---|---|---|
| Deepseek-7B | NSP | 43.67 | 43.67 | 47.55 | 47.55 | 45.61 | 45.61 |
| | Manual | 1.00 | 1.00 | 0.18 | 0.18 | 0.59 | 0.59 |
| | AAP | 42.35 | 41.94 | 51.36 | 50.73 | 46.86 | 46.34 |
| | CAIN | 45.31 | 45.05 | 56.25 | 56.25 | 55.28 | 50.65 |
| Qwen2.5-7B | NSP | 52.89 | 52.67 | 48.15 | 47.27 | 50.52 | 49.97 |
| | Manual | 37.33 | 37.33 | 58.55 | 58.36 | 47.94 | 47.85 |
| | AAP | 46.68 | 46.68 | 55.94 | 55.09 | 51.31 | 50.89 |
| | CAIN | 41.55 | 41.55 | 62.18 | 60.69 | 51.81 | 51.07 |

Table 2: Performance on the targeted attack across various target LLMs. Prompts optimized under the two-option format are evaluated directly and transferred to the multiple-choice and free-form settings. Bold indicates best, underline indicates second-best per block.

Key findings include: (1) combining AdvAutoPrompt with greedy word-level optimization consistently achieves superior overall performance
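The aggregated score of Sec. 6.1, which produces the per-block Ψ columns in Table 2, is a simple mean of the benign and malicious F1:

```python
def aggregated_score(f1_benign: float, f1_malicious: float) -> float:
    """Psi_F1 = (F1_benign + F1_malicious) / 2; higher values indicate a
    stronger balance between attack success and benign preservation."""
    return (f1_benign + f1_malicious) / 2
```

For example, Qwen2.5-7B's CAIN row in the two-option setting averages a benign F1 of 62.76 and a malicious F1 of 73.03 into the reported 67.89.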
in benign and malicious responses, highlighting its strong adversarial power; and (2) strong resilience and adaptability to different levels of difficulty, with the greedy word-level optimization consistently achieving high performance across evaluation scenarios, emphasizing the crucial role of this optimization strategy.

Figure 4: Trade-off between benign and malicious F1 scores as the threshold k increases for CAIN.

In the two-option scenario, CAIN achieves strong malicious performance (73.03 F1) while preserving a benign F1 of 62.76, outperforming baselines such as the NSP and Manual prompts.

In the four-option scenario, predictive performance drops overall when more options are added, and yet CAIN remains the most effective on the malicious set, reaching 43.92 F1 (Deepseek-7B) and 70.75 F1 (Qwen2.5-7B), indicating good prompt transferability to harder multiple-choice settings.

In the free-form scenario, when no explicit choices are provided, CAIN continues to outperform the baselines (e.g., 56.25 malicious F1 and 45.31 benign F1 on Deepseek-7B). In contrast, Manual and AAP degrade significantly due to their reliance on multiple-choice formatting. Overall, CAIN offers a stronger trade-off in F1 score between the malicious (62.18) and benign (41.55) sets. These results confirm that CAIN achieves a superior balance between attack
success and benign robustness in targeted attacks across different prompting formats.

7 Discussion

Effects of the Optimization Threshold k. We investigate the trade-off between benign performance and the synthetic target set when adjusting the incorrect-target threshold k ∈ [1..11] (Alg. 1, Ln. 17). As shown in Figure 4, increasing k consistently improves attack effectiveness while benign performance remains stable. This highlights a tunable trade-off between stealth and potency, allowing attackers to adjust aggressiveness depending on the security scenario. The full analysis is in Appendix A.3.

Effects of Model Size. We evaluate how model size impacts CAIN's effectiveness using Qwen2.5 with the number of parameters increasing from 3B to 32B. As shown in Figures 5 and 3 (left), CAIN consistently achieves stronger adversarial performance than AAP across both targeted and untargeted attacks, with benign performance improving as model size increases. These findings highlight CAIN's consistent malicious impact across varying model complexities.

Figure 5: Performance of each attacking scenario across model sizes.

Effects of Prompt Initialization Methods. Across both untargeted and targeted settings, A+Greedy consistently outperforms M+Greedy in attack effectiveness and robustness. In untargeted attacks (Table A3), A+Greedy yields stronger performance
gaps (e.g., +30.48 ∆F1 on Llama2-7B, +15.84 on Deepseek-7B), indicating better degradation of malicious responses while preserving benign ones. In targeted attacks (Table A5), it achieves higher aggregated scores (Ψ), particularly on Qwen2.5-7B and Deepseek-7B, showing better generalization across formats. These results confirm A+Greedy as the superior initialization strategy for attack strength and transferability.

7.1 Case Study: Attacking Commercial LLMs

We evaluate CAIN under untargeted attacks on popular commercial LLM-based chatbot APIs, including GPT-3.5-Turbo, GPT-4.1-nano, and GPT-4o-mini. Due to budget constraints, we randomly selected 10 target questions to attack.

| Model | Prompt | Benign F1↑ | Benign EM↑ | Malicious F1↓ | Malicious EM↓ | ∆F1↑ | ∆EM↑ |
|---|---|---|---|---|---|---|---|
| 4o-mini | Manual | 68.22 | 51.56 | 99.28 | 99.09 | -31.06 | -47.53 |
| | AAP | 76.66 | 65.41 | 94.48 | 91.21 | -17.82 | -25.80 |
| | CAIN | 71.44 | 59.16 | 52.44 | 48.64 | 19.00 | 10.52 |
| 4.1-nano | Manual | 62.47 | 51.03 | 95.00 | 95.00 | -32.53 | -43.97 |
| | AAP | 64.43 | 54.25 | 97.53 | 93.94 | -33.10 | -39.69 |
| | CAIN | 56.79 | 42.43 | 81.76 | 65.00 | -24.97 | -22.57 |
| 3.5-Turbo | Manual | 69.15 | 51.52 | 99.55 | 99.55 | -30.40 | -48.03 |
| | AAP | 66.93 | 49.58 | 96.57 | 96.36 | -29.64 | -46.78 |
| | CAIN | 61.00 | 40.09 | 69.47 | 64.55 | -8.47 | -24.46 |

Table 3: Untargeted performance across different attack methods on various OpenAI APIs.

Table 3 shows that CAIN consistently reduces the F1 score on malicious tasks, by 46.84 points for 4o-mini compared to the manual prompt, while better preserving benign performance, increasing it to 71.44 F1. Moreover, compared to AAP, CAIN consistently achieves lower malicious F1 across models (e.g., 69.47 vs. 96.57 on GPT-3.5-Turbo; 81.76 vs. 97.53 on GPT-4.1-nano), highlighting stronger attack success.

| Target LLM | Targeted | Untargeted |
|---|---|---|
| Deepseek-7B | 0.0217 | 0.0819 |
| Qwen2.5-7B | 0.0426 | 0.0417 |

Table 4: Averaged cosine similarity between successfully optimized prompts and the targeted questions.

While GPT-4.1-nano appears more robust (∆F1 of -24.97 for CAIN vs. -33.10 for
AAP), the results demonstrate CAIN's effectiveness even against well-aligned commercial models under black-box conditions, confirming the feasibility
of our security threat in practice.

Potential Defense. Our findings suggest that traditional defenses, such as detecting lexical similarity between prompts and target questions or perplexity-based filtering (Jain et al., 2023), are insufficient against CAIN. Table 4 shows that the optimized prompts exhibit very low cosine similarity to their respective targets (averages of 0.0518 for Deepseek-7B and 0.04215 for Qwen2.5-7B), indicating that they do not leak lexical overlap with the target questions. Figure A2 shows that CAIN's prompts have variable perplexity levels; PPL filtering might work to some extent, but it will not be a comprehensive solution. These results underscore CAIN's subtlety and highlight the urgent need for more robust, behavior-based detection mechanisms.

8 Conclusion

We introduce CAIN, a black-box method that reveals a new vulnerability in LLMs: targeted prompt-based manipulation that preserves benign behavior. CAIN achieves substantial degradation on targeted questions, up to 40% F1 in untargeted attacks and over 70% F1 in targeted ones, without noticeably affecting benign performance. These attacks remain stealthy, transfer across model architectures, and evade traditional defenses such as lexical-similarity or perplexity-based filtering.

Limitation

While CAIN demonstrates strong targeted manipulation in black-box settings, it faces several important limitations. First, achieving high adversarial effectiveness occasionally comes at the cost of benign performance. Second, although CAIN outperforms baselines on OpenAI APIs, the overall attack success remains limited due to alignment constraints in commercial systems. Finally, while CAIN evades common lexical and perplexity-based filters, this also underscores a broader limitation in the field: the lack of robust, behavior-aware defenses.
Addressing these challenges will be crucial for advancing both offensive and defensive research in LLM alignment.

Broader Impacts and Ethics Statement

This work reveals a previously underexplored vulnerability in large language models (LLMs): the ability to craft adversarial system prompts that selectively cause incorrect responses to specific questions while maintaining accurate outputs on benign inputs. Such selective manipulation poses a subtle but serious threat, particularly in domains involving misinformation, political influence, or public health. Unlike traditional jailbreaks or universal attacks, CAIN operates stealthily, evading detection by standard lexical-similarity and perplexity filters. We intend to raise awareness of this threat and prompt the development of more robust, behavior-based defenses. All experiments were conducted in controlled settings using open-source models, and evaluations on commercial APIs were performed to assess practical limitations, not for misuse. While the techniques may be misused, we believe that exposing this vector responsibly contributes to a more secure and trustworthy deployment of LLMs. We advocate for responsible disclosure, transparent benchmarking, and the implementation of proactive safeguards in future LLM systems.

References

John W. Ayers, Adam Poliak, Mark Dredze, Eric C. Leas, Zechariah Zhu, Jessica B. Kelley, Dennis J. Faix, Aaron M. Goodman, Christopher A. Longhurst, Michael Hogarth, and Davey M. Smith. 2023. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6):589–
596.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, and 12 others. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. Preprint, arXiv:2204.05862.

Advik Raj Basani and Xiao Zhang. 2024. GASP: Efficient black-box generation of adversarial suffixes for jailbreaking LLMs. ArXiv, abs/2411.14133.

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610–623, New York, NY, USA. Association for Computing Machinery.

Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Preprint, arXiv:2304.01373.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, and 181 others. 2025. DeepSeek-V3 technical report. Preprint, arXiv:2412.19437.

J. Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. HotFlip: White-box adversarial examples for text classification. In Annual Meeting of the Association for Computational Linguistics.

J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50–56.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Xingang Guo, Fangxu Yu, Huan Zhang, Lianhui Qin, and Bin Hu. 2024. COLD-Attack: Jailbreaking LLMs with stealthiness and controllability. arXiv preprint arXiv:2402.08679.

Raghav Gupta, Ryan Sullivan, Yunxuan Li, Samrat Phatale, and Abhinav Rastogi. 2025. Robust multi-objective preference alignment with online DPO. In AAAI Conference on Artificial Intelligence.

Lynn Hasher, David M. Goldstein, and Thomas C. Toppino. 1977. Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16:107–112.

Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, and Milad Nasr. 2024. Query-based adversarial prompt generation. ArXiv, abs/2402.12329.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, and 3 others. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556.

Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum,
Aniruddha Saha, Jonas Geiping, and Tom Goldstein. 2023. Baseline defenses for adversarial attacks against aligned language models. Preprint, arXiv:2309.00614.

Weipeng Jiang, Zhenting Wang, Juan Zhai, Shiqing Ma, Zhengyu Zhao, and Chao Shen. 2025. An optimizable suffix is worth a thousand templates: Efficient black-box jailbreaking without affirmative phrases via LLM as optimizer. Preprint, arXiv:2408.11313.

Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT really robust? Natural language attack on text classification and entailment. arXiv preprint arXiv:1907.11932.

Erik Jones, Anca D. Dragan, Aditi Raghunathan, and Jacob Steinhardt. 2023. Automatically auditing large language models via discrete optimization. ArXiv, abs/2303.04381.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Elad Levi, Eli Brosh, and Matan Friedmann. 2024. Intent-based prompt calibration: Enhancing prompt optimization with synthetic boundary cases.

Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics.

George A. Miller. 1994. WordNet: A lexical database for English. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994.

Eryn J. Newman, Mevagh Sanson, Emily K. Miller, Adele Quigley-McBride, Jeffrey L. Foster, Daniel M. Bernstein, and Maryanne Garry. 2014. People with easier to pronounce names promote truthiness of claims. PLoS ONE, 9.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, and 262 others. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, and Yuandong Tian. 2024. AdvPrompter: Fast adaptive adversarial prompting for LLMs. ArXiv, abs/2404.16873.

Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red teaming language models with language models. In Conference on Empirical Methods in Natural Language Processing.

Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. COLD decoding: Energy-based constrained text generation with Langevin dynamics. Advances in Neural Information Processing Systems, 35:9538–9551.

Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, and 25 others. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. CoRR, abs/2010.15980.

Harry Surden. 2019. Artificial intelligence and law: An overview. Georgia State University Law Review,
35:15109.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.

Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Conference on Empirical Methods in Natural Language Processing.

Laura Weidinger, John F. J. Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zachary Kenton, Sande Minnich Brown, William T. Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, and 4 others. 2021. Ethical and social risks of harm from language models. ArXiv, abs/2112.04359.

Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, and Mohan Kankanhalli. 2024. An LLM can fool itself: A prompt-based adversarial attack. In The Twelfth International Conference on Learning Representations.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. ArXiv, abs/1905.12616.

Yujie Zhao, Jose Efraim Aguilar Escamill, Weyl Lu, and Huazheng Wang. 2024. RA-PbRL: Provably efficient risk-aware preference-based reinforcement learning. ArXiv, abs/2410.23569.

Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, and Tong Sun. 2024. AutoDAN: Interpretable gradient-based adversarial attacks on large language models. In First Conference on Language Modeling.

Andy Zou, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models.
Category        Num. Samples
Misconceptions  100
Conspiracy       26
Stereotype       24
Health           55
Politics         10
History          24

Table A1: The number of data examples used for targeted attack in each category.

A Appendix

A.1 Dataset and Prompt template

Table A1 shows the statistical information for each category in TruthfulQA (Lin et al., 2022), which is used for the targeted attack. Figure A1 illustrates an example of the input for the targeted attack during training and evaluation.

A.2 Implementation Details

For each targeted question, we used 20 benign examples (10 from the correct subset, 10 from the incorrect subset) and the correct proportion m = 7. For AdvAutoPrompt, we set a maximum length of p* of up to 150 words and the number of iterations t = 3 to save cost.

A.3 Analysis of optimization threshold k

For the untargeted attack (top row), increasing the incorrect targeted threshold (k) generally leads to a more substantial degradation of performance on the synthetic target (malicious) dataset for both models. For DeepSeek-7B, the F1 score on malicious inputs drops significantly from approximately 40% to just above 10% as k increases, while benign performance shows a modest decline from around 40% to 35%. Qwen2.5-7B exhibits a similar pattern: its malicious performance falls from roughly 55% to about 15%, and benign performance decreases from approximately 60% to 45% across
the same range of k. This trend indicates that while the attack more effectively degrades performance on malicious queries at higher k values, it also brings a slight decrease in performance on benign questions compared to their corresponding initial attacked state and a more noticeable reduction compared to the NSP benign baselines.

Conversely, for targeted attacks (bottom row), a higher threshold (k) significantly improves the success rate of forcing the specific malicious output. On DeepSeek-7B, the malicious F1 score, representing attack success, climbs from about 58% to nearly 80% with increasing k, while performance on benign inputs remains relatively stable around 50%, comparable to its NSP benign baseline. A more pronounced trend is observed for Qwen2.5-7B, where its malicious attack success rate rises from approximately 50% at k = 1 to over 80% for k ≥ 8; its benign performance also remains stable at around 62%. Notably, this increased targeted efficacy is generally achieved without a substantial negative impact on the models' performance on benign inputs.

  [Instruction]: The following is a multiple-choice question; you are given two possible answers. Please only select one of them without any explanations:
  Q: In the Gregorian calendar, what does "AD" mean?
  Choices: A: Anno Domini  B: After Death
  [Target LLM]: B

Figure A1: The prompt template for two options of targeted attack.

Model    | Prompt | Benign F1↑ | Benign EM↑ | Malicious F1↓ | Malicious EM↓ | ∆F1↑   | ∆EM↑
Llama3.1 | NSP    | 58.59      | 47.60      | 88.61         | 83.00         | -30.02 | -35.40
Llama3.1 | Manual | 64.25      | 56.60      | 99.75         | 99.50         | -35.70 | -42.90
Llama3.1 | AAP    | 44.84      | 31.70      | 52.00         | 42.00         | -7.16  | -10.30
Llama3.1 | CAIN   | 45.15      | 32.04      | 27.46         | 16.40         | 17.69  | 15.64
Pythia   | NSP    | 40.98      | 28.50      | 97.40         | 97.00         | -56.42 | -68.50
Pythia   | Manual | 54.82      | 49.00      | 100.00        | 100.00        | -45.18 | -51.00
Pythia   | AAP    | 49.13      | 40.06      | 58.20         | 51.27         | -9.07  | -18.14
Pythia   | CAIN   | 49.08      | 40.70      | 32.32         | 25.28         | 16.76  | 15.42

Table A2: Performance comparison when attacking Pythia-12B and Llama3.1-7B.
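The Difference columns in Table A2 are consistent with a plain benign-minus-malicious subtraction, so a positive ∆ means the prompt hurts malicious queries more than benign ones. A minimal sketch of that bookkeeping (the function name is ours, not from the paper):

```python
# Sketch: recomputing the Difference columns of Table A2.
# Assumption (ours, verified against the table): Delta-F1 = benign F1 - malicious F1,
# and likewise for EM.

def delta_scores(benign_f1, benign_em, malicious_f1, malicious_em):
    return (round(benign_f1 - malicious_f1, 2), round(benign_em - malicious_em, 2))

# CAIN row for Llama3.1 from Table A2:
d_f1, d_em = delta_scores(45.15, 32.04, 27.46, 16.40)
print(d_f1, d_em)  # 17.69 15.64, matching the reported +17.69 / +15.64
```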
A.4 Additional results on untargeted attack

Table A2 presents the performance of untargeted attacks on Llama3.1-7B and Pythia-12B. Across both models, CAIN significantly outperforms all baselines, including Manual and AAP, in balancing attack strength and benign performance. While Manual prompts achieve high benign F1/EM, they fail to reduce malicious performance (e.g., 99.75 F1 on Llama3.1 and 100.00 F1 on Pythia). In contrast, CAIN reduces malicious F1 to 27.46 and 32.32, respectively, while maintaining reasonable benign scores. This results in the highest ∆F1 and ∆EM margins (e.g., +17.69 F1 on Llama3.1 and +16.76 on Pythia), demonstrating CAIN's superior ability to selectively degrade targeted outputs without broadly compromising accuracy.

Model   | Prompt | Benign F1↑ | Benign EM↑ | Malicious F1↓ | Malicious EM↓ | ∆F1↑   | ∆EM↑
L2-7B   | M+G    | 68.33      | 62.59      | 38.25         | 31.46         | 30.08  | 31.13
L2-7B   | A+G    | 63.84      | 56.14      | 33.36         | 28.20         | 30.48  | 27.94
L2-13B  | M+G    | 81.92      | 78.62      | 41.44         | 38.36         | 40.48  | 40.26
L2-13B  | A+G    | 66.77      | 57.14      | 32.66         | 18.89         | 34.11  | 38.15
L3.1-8B | M+G    | 62.61      | 52.12      | 50.05         | 41.69         | 12.56  | 10.43
L3.1-8B | A+G    | 45.15      | 32.04      | 27.46         | 16.40         | 17.69  | 15.64
D-7B    | M+G    | 53.59      | 48.41      | 37.73         | 33.28         | 15.66  | 15.13
D-7B    | A+G    | 43.99      | 31.75      | 28.15         | 16.33         | 15.84  | 15.42
Q2.5    | M+G    | 46.97      | 36.13      | 61.39         | 50.68         | -14.42 | -14.55
Q2.5    | A+G    | 50.31      | 39.20      | 34.94         | 23.92         | 15.37  | 15.28
P-12B   | M+G    | 50.25      | 42.90      | 40.46         | 34.41         | 9.79   | 8.49
P-12B   | A+G    | 49.08      | 40.70      | 32.32         | 25.28         | 16.76  | 15.42

Table A3: Results of attacking performance with manual initialization and AdvAutoPrompt. "A" denotes AAP, "G" stands for Greedy, and "A+G" is our proposed method. "L, D, Q, P" denote Llama, Deepseek, Qwen, and Pythia models, respectively.

A.5 Additional results for targeted attacks

Table A4 shows that AAP achieves the best performance in the two-option setting (ΨF1 = 50.94, ΨEM = 42.58), but its effectiveness drops when transferred to the four-option format. In contrast, CAIN maintains more stable performance across both settings, achieving strong targeted attack success (highest malicious F1) with better transferability (ΨF1 = 31.01 vs. 32.80). This suggests CAIN is more robust and generalizable under realistic conditions where question formats vary.

Figure A2: Perplexity distribution of successfully optimized prompts (Manual, AAP, CAIN) under both untargeted and targeted attack on DeepSeek-7B and Qwen2.5-7B (plot data omitted).

Cells are F1/EM; the left block is the two-option setting, the right block its transfer to four options.

Prompt   | 2-opt Benign | 2-opt Malicious | 2-opt Ψ    | 4-opt Benign | 4-opt Malicious | 4-opt Ψ
NSP      | 10.03/1.67   | 14.09/3.27      | 12.06/2.47  | 9.72/1.33    | 19.60/1.82      | 14.66/1.58
Manual   | 19.67/19.00  | 44.03/43.45     | 31.85/31.23 | 1.00/1.00    | 3.50/3.27       | 2.25/2.13
M+Greedy | 28.20/21.57  | 60.71/58.00     | 44.45/39.78 | 13.73/10.60  | 29.81/28.80     | 21.77/19.70
AAP      | 42.47/34.16  | 59.41/51.00     | 50.94/42.58 | 27.30/19.88  | 38.29/33.27     | 32.80/26.58
CAIN     | 35.10/19.61  | 61.86/48.71     | 48.48/34.16 | 20.44/11.83  | 41.58/34.84     | 31.01/23.34

Table A4: Performance of the targeted attack on Llama2-7B.
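The Ψ ("Sum") columns in Tables A4 and A5 are consistent with the arithmetic mean of the benign and malicious scores; that reading is ours, not stated explicitly in the text, but it reproduces every reported value. A minimal sketch:

```python
# Sketch: the Psi (Sum) columns of Tables A4/A5, under our assumption that
# Psi is the arithmetic mean of the benign and malicious scores.

def psi(benign, malicious):
    return round((benign + malicious) / 2, 2)

# AAP row, two-option setting (Table A4): benign F1 42.47, malicious F1 59.41
print(psi(42.47, 59.41))  # 50.94, as reported
```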
Cells are F1/EM; the left block is the two-option setting, the right block its transfer to four options.

Model       | Prompt   | 2-opt Benign | 2-opt Malicious | 2-opt Ψ     | 4-opt Benign | 4-opt Malicious | 4-opt Ψ
Deepseek-7B | M+Greedy | 47.94/46.11  | 42.90/42.26     | 45.42/44.19 | 27.38/25.65  | 30.32/29.35     | 28.85/27.50
Deepseek-7B | A+Greedy | 55.29/46.47  | 58.92/54.00     | 57.11/50.23 | 31.73/28.69  | 43.92/43.00     | 37.83/35.84
Qwen2.5-7B  | M+Greedy | 60.41/60.33  | 62.73/62.73     | 61.57/61.53 | 50.88/50.88  | 69.70/69.70     | 60.29/60.29
Qwen2.5-7B  | A+Greedy | 62.76/61.75  | 73.03/72.07     | 67.89/66.91 | 56.45/56.38  | 70.75/70.34     | 63.60/63.36

Table A5: Performance with different initialization methods on targeted attacks.
Shadows in the Attention: Contextual Perturbation and Representation Drift in the Dynamics of Hallucination in LLMs

Zeyu Wei 1,2 [0009-0001-3335-2366], Shuo Wang 1,2 [0009-0008-4232-4245], Xiaohui Rong 1 [0009-0001-0536-4538], Xuemin Liu 1 [0009-0007-8759-1219] (B), and He Li 1,2 [0009-0004-3862-9936]
1 Computer Network Information Center, CAS, Beijing, China
2 University of Chinese Academy of Sciences, Beijing, China
{zywei, swang, xhrong, lxm}@cnic.cn, lihe241@mails.ucas.ac.cn

Abstract. Hallucinations (plausible yet erroneous outputs) remain a critical barrier to reliable deployment of large language models (LLMs). We present the first systematic study linking hallucination incidence to internal-state drift induced by incremental context injection. Using TruthfulQA, we construct two 16-round "titration" tracks per question: one appends relevant but partially flawed snippets, the other injects deliberately misleading content. Across six open-source LLMs, we track overt hallucination rates with a tri-perspective detector and covert dynamics via cosine, entropy, JS, and Spearman drifts of hidden states and attention maps. Results reveal (1) monotonic growth of hallucination frequency and representation drift that plateaus after 5-7 rounds; (2) relevant context drives deeper semantic assimilation, producing high-confidence "self-consistent" hallucinations, whereas irrelevant context induces topic-drift errors anchored by attention re-routing; and (3) convergence of JS-Drift (~0.69) and Spearman-Drift (~0) marks an "attention-locking" threshold beyond which hallucinations solidify and become resistant to correction. Correlation analyses expose a seesaw between assimilation capacity and attention diffusion, clarifying size-dependent error modes. These findings supply empirical foundations for intrinsic hallucination prediction and context-aware mitigation mechanisms.

Keywords: Hallucinations · LLMs · Internal-State · Attention-Locking.
1 Introduction

LLMs, such as GPT-3 [3], PaLM [7], LLaMA [28], and GPT-4 [4], have approached human-level performance in question-answering tasks. However, they still generate factually incorrect or self-contradictory "hallucinations" [12,22], undermining user trust and creating risks in high-stakes domains like medicine and law; thus hallucinations have become a major obstacle to LLM deployment [11,18,26].

arXiv:2505.16894v1 [cs.CL] 22 May 2025

Existing mitigation approaches divide into data/knowledge-layer and retrieval-augmentation strategies: TruthfulQA [20] evaluates misconception repetition using leading questions; Chen et al. [6] identify hallucinations through retrieval-verification; Pan et al.'s [23] self-critique and self-correction mechanisms and retrieval-augmented generation [14] also significantly reduce error rates.

Researchers have also looked "inward" for hidden signals. Azaria and Mitchell [1] discovered that GPT-3's encoder states shift before false outputs; Duan et al. [9] similarly observed trajectory differences between correct and hallucinated answers, indicating that latent representations partially encode "truthfulness." Wang et al. [29] and Chen et al. [5] demonstrated that mid-layer attention and feed-forward activations can predict hallucinations, while Huang et al. [17] showed that high entropy and large sampling variance correlate with increased hallucination risk. Hidden activations and probability distributions have thus become crucial indicators for online detection and error correction.

Despite research progress, systematic understanding of hallucination's internal mechanisms remains lacking. Existing studies often analyze model behavior under fixed prompts or design specific metrics for particular scenarios, without comprehensively comparing how model internal representations drift under context perturbations and how such drift triggers hallucinations.
This research focuses on how context perturbations trigger systematic drift in LLM hidden representations, leading to hallucinations. We examine six mainstream models (Llama3-8B, Llama3.2-1B, Falcon3-1B, Falcon3-7B, Qwen2.5-1.5B, Qwen2.5-7B) [15,24,30] performing QA tasks under relevant, seemingly relevant, and misleading contexts, tracking hidden-state evolution and its association with hallucinated outputs.
Results show that injected contexts form an "attention shadow," gradually shifting model focus, causing representation drift, and ultimately generating incorrect answers. This study does not propose new models but identifies and characterizes representation-drift patterns related to hallucinations, providing empirical evidence for understanding the internal mechanisms of LLM hallucination and establishing theoretical foundations for future detection and mitigation strategies.

We contribute three advances: (1) a controlled-context framework that traces hidden-state shifts, revealing LLM internal dynamics; (2) cross-model proof that attention and representation patterns contain universal hallucination precursors; and (3) causal evidence that context-induced representation drift heightens hallucination risk, anchoring future detection and mitigation.

2 Related Works

Extensive research aims to detect and mitigate LLM "hallucinations" [16]. Existing methods can be categorized into four types: (1) Retrieval augmentation and post-correction: incorporating external knowledge retrieval or automatic error correction before or after generation to reduce hallucinations [14]. Pan et al. reviewed self-feedback correction strategies [23]; Peng proposed LLM-Augmenter, which iteratively integrates retrieval and feedback [25]; Su utilized real-time detection to dynamically adjust retrieved content for entity error correction [27]. (2) Uncertainty assessment: leveraging output confidence to identify hallucinations, using statistical measures like entropy to detect fabricated content [11,10]; SelfCheckGPT employs generation diversity to self-check inconsistent answers [21]; other work fine-tunes models to predict their own accuracy [19].
(3) Internal representation analysis: Azaria and Mitchell discovered hidden layers carrying error signals [1]; Duan compared correct versus hallucinated trajectories and used the differences to mitigate hallucinations [9]; Wang decomposed reasoning into understanding-querying-generation phases to extract state features, significantly improving detection without external knowledge [29]. (4) Context perturbation and attention manipulation: attention "backtracking" alone can identify and mitigate conflict-type hallucinations [8]; Bazarova compared answer-prompt attention topology for unsupervised hallucination detection in retrieval settings [2]; hallucination risk significantly increases when context exceeds model knowledge or sequence order is perturbed [13]. However, in-depth research on how LLM internal representation drift leads to hallucinations remains lacking; this paper aims to fill this gap.

3 Methodology

This study systematically investigates when and why LLMs hallucinate and how internal-state dynamics precipitate those errors. We pursue two questions: (1) How does hallucination probability vary non-linearly with the strength and depth of incremental context injection? (2) Do shifts in hidden representations and attention weights explain these overt hallucination fluctuations?

Our integrated "overt-covert" pipeline couples a tri-path detector that flags hallucinations with continuous monitoring of hidden vectors and attention matrices. By measuring each stage's representational drift from a baseline, the framework links observable errors to the underlying dynamics, offering complementary evidence for the generative mechanism of hallucinations.

3.1 Context-Manipulation Paradigm

We employ the TruthfulQA-Generation validation set [20] and generate 2×16 prompt variants per question.
Round 0 serves as the baseline, followed by 15 rounds of incremental context injection along two trajectories: Relevant, composed primarily of accurate information interspersed with a few superficially credible but subtly incorrect statements; and Irrelevant, consisting of misleading or distracting content designed to elicit hallucinations. This titration-style design quantifies how graded contextual interference alters hallucination rates and hidden-state drift.

Guided by this rationale, we implement a dual-track experimental framework. For the relevant track, each question is accompanied by a set of semantically congruent snippets,

    C_rel = {c_1^rel, c_2^rel, ..., c_n^rel}    (1)

where
c_i^rel denotes the i-th injected relevant fragment. Symmetrically, the irrelevant track furnishes potentially misleading snippets for the same question,

    C_irr = {c_1^irr, c_2^irr, ..., c_n^irr}    (2)

with c_i^irr representing the i-th injected irrelevant fragment. Under both conditions, each question undergoes 15 rounds of progressive context augmentation to capture the temporal evolution of model responses; the cumulative context visible at round t is:

    C_t = {c_1, c_2, ..., c_t}    (3)

This finely controlled manipulation paradigm thus allows a systematic assessment of how the depth and nature of external interference jointly influence hallucination incidence and the concomitant drift of internal model states.

3.2 Hallucination Detection Framework and Text Quality Metrics

We introduce a tri-perspective consensus framework, augmented with multidimensional quality metrics, for rapid hallucination detection. By cross-validating outputs via three complementary views (semantic deviation, factual extension, and logical inference), the framework reduces the false positives and negatives that plague single-metric methods in complex generation tasks.

Semantic deviation detection measures the semantic distance between a model-generated answer y and a reference answer r in embedding space. The hallucination indicator is defined as:

    H_sem(y, r) = I[BERTScore_F(y, r) < θ_sem]    (4)

where BERTScore_F denotes the F1 score based on contextualized embeddings, θ_sem = 0.7 is an empirically chosen threshold, and I[·] is the indicator function. A low semantic similarity suggests that the model has semantically deviated from the correct answer, signaling a potential hallucination. The theoretical foundation lies in the assumption that greater deviation in semantic space correlates with a higher likelihood of fabricating content. Compared to surface-level string matching, BERTScore leverages deep contextual embeddings to capture subtle semantic discrepancies.
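The thresholding in Eq. 4 is a simple indicator; a minimal sketch in Python, where the BERTScore-F value is assumed to come from a contextual-embedding scorer such as the `bert-score` package and is passed in precomputed so the logic stays self-contained:

```python
# Sketch of the semantic-deviation indicator H_sem (Eq. 4).
# THETA_SEM mirrors the paper's empirically chosen threshold of 0.7;
# the BERTScore-F input is assumed precomputed.

THETA_SEM = 0.7

def h_sem(bertscore_f, theta=THETA_SEM):
    # I[BERTScore_F(y, r) < theta]: 1 flags a potential hallucination
    return int(bertscore_f < theta)

print(h_sem(0.82))  # 0: answer stays semantically close to the reference
print(h_sem(0.55))  # 1: large semantic deviation, flagged
```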
Factual extension detection focuses on whether the model hallucinates concrete factual elements that are absent from the reference set:

    H_ext(y, R) = I[∃ e ∈ E(y) : e ∉ E(R)]    (5)

where E(·) denotes the function that extracts salient factual entities from text, including numbers, named entities, and domain-specific terms, and R is the set of acceptable references. This mechanism effectively captures high-risk hallucinations where the model introduces non-existent facts, instances that may appear credible to users.

Logical inference detection uses an NLI model to assess the logical relationship between the generated answer and the reference set. If the output cannot be logically entailed by any reference, it is likely to contain hallucinatory content:

    H_nli(y, R) = I[∀ r ∈ R : NLI(y, r) ∈ {neutral, contradiction}]    (6)

where NLI(y, r) is the output of a pretrained NLI model that categorizes the relation as one of: entailment, neutral, or contradiction. A lack of entailment across all references is taken as evidence of a logical-level hallucination. This inference-based strategy complements the semantic and factual components by capturing reasoning failures that might otherwise go undetected.

The overall hallucination label is derived from a logical OR over the three detectors:

    H(y, R) = H_sem(y, r_best) ∨ H_ext(y, R) ∨ H_nli(y, R)    (7)

where r_best is the most appropriate reference answer. Based on this unified criterion, we define two hallucination-rate metrics. The QA-level hallucination rate measures the proportion of generated answers containing hallucinations:

    QA-HallucRate = (1/N) Σ_{i=1}^{N} H(y_i, R_i)    (8)

where N is the total number of question-answer pairs. To further assess hallucination at a finer granularity, we introduce the intra-sentence hallucination rate:

    Intra-HallucRate = (1/N) Σ_{i=1}^{N} (1/|S(y_i)|) Σ_{s ∈ S(y_i)} H(s, R_i)    (9)

where S(y_i) denotes the set of individual sentences comprising the answer y_i. This metric quantifies the average hallucination likelihood at the sentence level and helps determine whether hallucinations are localized to specific sentences or dispersed throughout the output.

To link hallucination to output quality, we measure ROUGE-L, METEOR, and BERTScore. ROUGE-L assesses sequence overlap, METEOR accommodates flexible alignments, and BERTScore-F quantifies deep semantic similarity. Although not used for labeling hallucinations, these metrics reveal whether quality drops co-occur with hallucinations or whether fluent text can mask falsehoods, enriching our evaluation of semantic integrity.

3.3 Internal State Analysis

If overt hallucinations in LLMs are indeed triggered by anomalous perturbations in the activation space, then such disruptions should manifest as systematic and measurable changes in internal representations, specifically in hidden states and attention distributions. To verify this hypothesis, we examine the relationship between hallucination generation and the internal dynamics of LLMs from three complementary analytical dimensions: representation drift, attention entropy, and distributional shift.

Hidden-state vectors encode high-dimensional semantic understanding of the input. Their variations reflect the degree to which contextual information alters the model's cognitive state. A large representational drift indicates a substantial shift in the model's internal "thought trajectory," potentially steering its responses in undesired directions.
This drift is quantified using cosine distance:

    D_cos(h_t, h_0) = 1 - (h_t · h_0) / (||h_t|| · ||h_0||)    (10)

where h_0 denotes the final hidden state (typically of the last token in the final layer) under the zero-context baseline, and h_t denotes the corresponding hidden state after t rounds of contextual injection. The cosine distance D_cos ∈ [0, 2], with larger values indicating greater contextual influence on internal representations.

Attention entropy analysis characterizes the model's information-allocation strategy. An increase in entropy suggests that the model's attention has become more diffuse, spreading its focus over more tokens, which may indicate uncertainty and a lack of confident reasoning anchors. Conversely, decreased entropy reflects a more concentrated attention distribution, potentially signaling reliance on specific tokens. To quantify this, we compute attention entropy as:

    H_attn(A) = -Σ_i A_i log A_i    (11)

where A_i denotes the attention weights. The entropy shift after contextual injection is given by:

    D_ent(A_t, A_0) = H_attn(A_t) - H_attn(A_0)    (12)

with A_0 and A_t representing the attention distributions before and after t rounds of context injection, respectively. Positive values indicate attention diffusion; negative values indicate attention focusing.

To more comprehensively capture shifts in attention patterns, we further employ the Jensen-Shannon (JS) divergence, a symmetric and bounded measure of distributional change:

    D_JS(P, Q) = (1/2) D_KL(P || M) + (1/2) D_KL(Q || M),   M = (1/2)(P + Q)    (13)

where D_KL(P || Q) = Σ_i P(i) log(P(i)/Q(i)) is the Kullback-Leibler divergence. To enable comparison between distributions of different lengths, we apply zero-padding:

    P_pad(i) = P(i) if i < |P|, else ϵ    (14)

where ϵ is a small positive constant (typically 10^-12), followed by renormalization. JS divergence is advantageous due to its symmetry and boundedness within [0, 1], offering a robust estimate of how context reshapes the model's attention landscape.
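Eqs. 10-14 can be sketched in dependency-free Python; helper names are ours. Note that with the natural log used below the JS bound is ln 2 ≈ 0.693 (base-2 logs give the [0, 1] bound quoted in the text):

```python
import math

# Sketch of the covert drift metrics (Eqs. 10-14). Inputs: hidden-state
# vectors (lists of floats) and attention distributions (lists summing to 1).

def cos_drift(h_t, h_0):
    # Eq. 10: cosine distance, in [0, 2]
    dot = sum(a * b for a, b in zip(h_t, h_0))
    norm = math.sqrt(sum(a * a for a in h_t)) * math.sqrt(sum(b * b for b in h_0))
    return 1.0 - dot / norm

def attn_entropy(A):
    # Eq. 11: Shannon entropy of an attention distribution (natural log)
    return -sum(a * math.log(a) for a in A if a > 0)

def ent_drift(A_t, A_0):
    # Eq. 12: positive = attention diffusing, negative = focusing
    return attn_entropy(A_t) - attn_entropy(A_0)

def _pad(P, n, eps=1e-12):
    # Eq. 14: extend with a small epsilon, then renormalize
    P = list(P) + [eps] * (n - len(P))
    s = sum(P)
    return [p / s for p in P]

def js_drift(P, Q):
    # Eq. 13: symmetric Jensen-Shannon divergence over padded distributions
    n = max(len(P), len(Q))
    P, Q = _pad(P, n), _pad(Q, n)
    M = [(p + q) / 2 for p, q in zip(P, Q)]
    kl = lambda X, Y: sum(x * math.log(x / y) for x, y in zip(X, Y) if x > 0)
    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

print(cos_drift([1.0, 0.0], [0.0, 1.0]))            # 1.0: orthogonal states
print(round(ent_drift([0.25] * 4, [1.0, 0.0]), 3))  # 1.386: attention diffused
print(js_drift([0.5, 0.5], [0.5, 0.5]))             # 0.0: identical distributions
```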
A higher JS divergence implies a more profound shift in attention allocation, potentially signaling a change in the model's reasoning path. In parallel, we calculate the Spearman rank correlation coefficient to assess the stability of attention rankings:

    ρ = 1 - (6 Σ_i d_i²) / (n(n² - 1))    (15)

where d_i is the difference in rank for the i-th
element between two distributions and n is the distribution length. Zero-padding is again applied to maintain consistent dimensionality. The Spearman coefficient ρ ∈ [-1, 1], where negative values indicate rank inversion and positive values denote alignment. Unlike divergence measures, Spearman correlation focuses exclusively on ordinal relationships, abstracting away from absolute probability magnitudes and highlighting whether the model re-prioritizes its focal elements under different contextual conditions.

This internal-state analysis framework establishes a tight coupling between observable model behaviors and quantifiable shifts in internal representations and attention dynamics. It offers a mechanistic lens through which hallucinations can be interpreted and provides empirical foundations for developing targeted mitigation strategies in LLMs.

4 Experiments

To systematically investigate the relationship between model internal-state evolution and hallucination generation, we designed a two-phase experiment. In the first phase, we selected various architectures including Llama3, Falcon3, and Qwen2.5 (1B-8B parameters) [15,24,30], injected context incrementally, and recorded hallucination changes across rounds. External performance was evaluated using QA-HallucRate to assess hallucination frequency and Intra-HallucRate to measure whether hallucinations were concentrated or dispersed within responses. The second phase tracked hidden states and attention distributions to analyze the underlying mechanisms of hallucination generation. The two phases complemented each other: first observing behavior, then dissecting mechanisms, comprehensively revealing how internal-state evolution influences hallucinations.

4.1 Dataset and Context Generation

This research utilizes the TruthfulQA dataset [20] as an evaluation benchmark, which was specifically designed to detect hallucination phenomena in LLMs.
It comprises 817 questions covering 38 topics, including domains such as health, law, finance, and politics. The distinctive characteristic of TruthfulQA lies in its question design, which tends to guide models toward producing common misconceptions or popular fallacies, thereby effectively assessing models' ability to resist false information and generate truthful responses. To construct the experimental environment, we developed a context-generation framework based on the GPT-4o model [4], capable of generating two carefully designed context types for each question: (1) relevant context, containing factual information directly related to the question along with seemingly accurate information, without providing complete answers directly; (2) irrelevant context, containing factual information unrelated to the question. All generated contexts underwent secondary human evaluation and modification. This dual-context design enables comprehensive assessment of LLMs' truthfulness under different information environments, particularly for analyzing the influence mechanisms behind model hallucination generation.

4.2 Changes in Hallucination Metrics During Context Injection

Across 15 context-injection rounds, QA-HallucRate rose markedly. Irrelevant snippets drove rates toward 1.0 within a few iterations; relevant snippets, though less potent, also produced a steady climb. Llama3.2-1B began at ≈0.78 (relevant) and reached ≈0.90 by round 15, whereas Llama3-8B rose from ≈0.90 to ≈0.94. Falcon and Qwen models mirrored this trend: relevant tracks converged near 90% and irrelevant tracks saturated early, shrinking the gap between trajectories. Table 1 reports odd-numbered rounds; the complete table is available upon request from the authors.

As hallucination frequency increased, the distribution of hallucinations showed an opposite trend.
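The QA-HallucRate and Intra-HallucRate trajectories reported here are aggregations of the per-answer labels from Section 3.2 (Eqs. 7-9); a minimal bookkeeping sketch, with the three detector outputs assumed precomputed as 0/1 flags (function names are ours):

```python
# Sketch of the consensus label (Eq. 7) and the two hallucination rates
# (Eqs. 8-9). Detector outputs are assumed precomputed as 0/1 flags.

def h_label(h_sem, h_ext, h_nli):
    # Logical OR over the three detector views (Eq. 7)
    return int(h_sem or h_ext or h_nli)

def qa_halluc_rate(labels):
    # Eq. 8: fraction of answers flagged by any detector
    return sum(labels) / len(labels)

def intra_halluc_rate(sentence_flags_per_answer):
    # Eq. 9: mean over answers of the per-sentence flag rate
    return sum(sum(s) / len(s) for s in sentence_flags_per_answer) / len(sentence_flags_per_answer)

labels = [h_label(1, 0, 0), h_label(0, 0, 0), h_label(0, 1, 1), h_label(0, 0, 0)]
print(qa_halluc_rate(labels))                          # 0.5: two of four answers flagged
print(intra_halluc_rate([[1, 0], [0, 0, 0], [1, 1]]))  # 0.5: mean of 0.5, 0.0, 1.0
```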
In relevant scenarios, Intra-HallucRate remained stable at approximately 0.60, indicating that errors were sporadically distributed; in irrelevant scenarios, it decreased from above 0.60 to about 0.48, showing that models repeatedly adhered to specific false content, resulting in highly concentrated hallucinations. Relevant information provided some factual support, with hallucinations appearing only as local deviations; accumulated irrelevant information caused models to immerse themselves in self-constructed erroneous contexts, producing concentrated misinformation (see Fig. 1).

Models displayed size-dependent behavior. Larger variants were more context-sensitive, with higher initial and cumulative hallucination rates. For example, Qwen2.5-7B scored 0.94 in the first irrelevant round versus 0.90 for the 1.5B; by round 15 (relevant), rates were 0.92 and 0.90, respectively. Falcon3-7B similarly exceeded its 1B counterpart. Size also shaped dispersion: Llama3-8B's first-round relevant spread was 0.64, surpassing the 1B's 0.60, implying more scattered minor errors; after repeated irrelevant injections, the 8B dropped to 0.615 (errors concentrating), whereas the 1B rose to 0.61 (still diffuse). Thus, larger models initially integrate context broadly, yielding diverse deviations, then converge on specific false narratives under sustained irrelevant input. In sum, incremental context sharply elevates hallucination risk: irrelevant snippets rapidly enforce near-certain, clustered errors, while relevant snippets lessen severity yet still foster frequent, dispersed minor faults over time.

Table 1.
Hallucination Metrics Across Different Models and Rounds (cells are Rel/Irr)

Model        | Round | ROUGE-L       | METEOR        | BERT-F1       | QA-HallucRate | Intra-HallucRate
Qwen2.5-7B   | 1     | 0.1037/0.0672 | 0.2657/0.1779 | 0.8500/0.8296 | 0.8600/0.9400 | 0.5945/0.6270
Qwen2.5-7B   | 3     | 0.0585/0.0362 | 0.1652/0.1162 | 0.8325/0.8194 | 0.9000/0.9800 | 0.6141/0.5436
Qwen2.5-7B   | 5     | 0.0396/0.0250 | 0.1177/0.0844 | 0.8225/0.8119 | 0.9400/1.0000 | 0.6092/0.5239
Qwen2.5-7B   | 7     | 0.0303/0.0197 | 0.0923/0.0665 | 0.8228/0.8119 | 0.9200/1.0000 | 0.6172/0.5040
Qwen2.5-7B   | 9     | 0.0241/0.0153 | 0.0761/0.0542 | 0.8228/0.8119 | 0.9200/1.0000 | 0.6073/0.4986
Qwen2.5-7B   | 11    | 0.0204/0.0133 | 0.0642/0.0479 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6096/0.4944
Qwen2.5-7B   | 15    | 0.0156/0.0100 | 0.0497/0.0354 | 0.8228/0.8119 | 0.9200/1.0000 | 0.6115/0.4838
Qwen2.5-1.5B | 1     | 0.1084/0.0648 | 0.2645/0.1787 | 0.8525/0.8265 | 0.8600/0.9000 | 0.6039/0.6106
Qwen2.5-1.5B | 3     | 0.0567/0.0350 | 0.1621/0.1056 | 0.8331/0.8172 | 0.9000/1.0000 | 0.5998/0.5352
Qwen2.5-1.5B | 5     | 0.0412/0.0242 | 0.1238/0.0808 | 0.8234/0.8119 | 0.9000/1.0000 | 0.6154/0.5142
Qwen2.5-1.5B | 7     | 0.0318/0.0186 | 0.0996/0.0666 | 0.8228/0.8119 | 0.8800/1.0000 | 0.6174/0.4945
Qwen2.5-1.5B | 9     | 0.0253/0.0145 | 0.0816/0.0517 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6095/0.4980
Qwen2.5-1.5B | 11    | 0.0211/0.0127 | 0.0712/0.0467 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6164/0.4935
Qwen2.5-1.5B | 15    | 0.0158/0.0092 | 0.0539/0.0339 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6076/0.4791
Falcon3-7B   | 1     | 0.1197/0.0766 | 0.3051/0.2053 | 0.8566/0.8349 | 0.8200/0.9200 | 0.5889/0.5981
Falcon3-7B   | 3     | 0.0615/0.0378 | 0.1843/0.1225 | 0.8352/0.8212 | 0.8800/0.9800 | 0.6163/0.5341
Falcon3-7B   | 5     | 0.0418/0.0258 | 0.1308/0.0855 | 0.8230/0.8119 | 0.9000/1.0000 | 0.6113/0.5124
Falcon3-7B   | 7     | 0.0320/0.0186 | 0.1073/0.0664 | 0.8228/0.8119 | 0.8800/1.0000 | 0.6130/0.4932
Falcon3-7B   | 9     | 0.0256/0.0158 | 0.0837/0.0582 | 0.8228/0.8119 | 0.9200/1.0000 | 0.6220/0.5026
Falcon3-7B   | 11    | 0.0221/0.0144 | 0.0710/0.0502 | 0.8228/0.8119 | 0.9800/1.0000 | 0.6225/0.4940
Falcon3-7B   | 15    | 0.0176/0.0104 | 0.0538/0.0331 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6113/0.4779
Falcon3-1B   | 1     | 0.1210/0.0725 | 0.2952/0.1960 | 0.8554/0.8322 | 0.8000/0.9000 | 0.5946/0.5779
Falcon3-1B   | 3     | 0.0597/0.0368 | 0.1667/0.1156 | 0.8335/0.8188 | 0.9000/0.9800 | 0.6120/0.5273
Falcon3-1B   | 5     | 0.0408/0.0236 | 0.1191/0.0761 | 0.8229/0.8119 | 0.8600/1.0000 | 0.6074/0.4993
Falcon3-1B   | 7     | 0.0311/0.0176 | 0.0978/0.0604 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6129/0.4849
Falcon3-1B   | 9     | 0.0256/0.0144 | 0.0792/0.0508 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6118/0.4865
Falcon3-1B   | 11    | 0.0212/0.0119 | 0.0666/0.0413 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6116/0.4815
Falcon3-1B   | 15    | 0.0162/0.0087 | 0.0517/0.0312 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6112/0.4783
Llama3-8B    | 1     | 0.0869/0.0685 | 0.2176/0.1826 | 0.8387/0.8277 | 0.9000/0.9600 | 0.6398/0.6034
Llama3-8B    | 3     | 0.0533/0.0362 | 0.1515/0.1101 | 0.8234/0.8164 | 0.8600/1.0000 | 0.6368/0.5432
Llama3-8B    | 5     | 0.0379/0.0248 | 0.1176/0.0846 | 0.8212/0.8119 | 0.8800/1.0000 | 0.6206/0.5278
Llama3-8B    | 7     | 0.0318/0.0186 | 0.0996/0.0666 | 0.8228/0.8119 | 0.8800/1.0000 | 0.6174/0.4945
Llama3-8B    | 9     | 0.0256/0.0151 | 0.0816/0.0553 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6095/0.4980
Llama3-8B    | 11    | 0.0214/0.0127 | 0.0712/0.0467 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6164/0.4935
Llama3-8B    | 15    | 0.0158/0.0092 | 0.0539/0.0339 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6076/0.4791
Llama3.2-1B  | 1     | 0.1465/0.0689 | 0.3035/0.1774 | 0.8619/0.8251 | 0.7800/0.8800 | 0.6018/0.5943
Llama3.2-1B  | 3     | 0.0673/0.0364 | 0.1791/0.1109 | 0.8398/0.8166 | 0.9000/1.0000 | 0.6022/0.5195
Llama3.2-1B  | 5     | 0.0451/0.0237 | 0.1283/0.0792 | 0.8244/0.8119 | 0.8800/1.0000 | 0.6081/0.5031
Llama3.2-1B  | 7     | 0.0336/0.0176 | 0.0982/0.0610 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6110/0.4878
Llama3.2-1B  | 9     | 0.0265/0.0139 | 0.0807/0.0490 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6104/0.4855
Llama3.2-1B  | 11    | 0.0220/0.0117 | 0.0680/0.0411 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6140/0.4808
Llama3.2-1B  | 15    | 0.0165/0.0087 | 0.0524/0.0309 | 0.8228/0.8119 | 0.9000/1.0000 | 0.6101/0.4784

Fig. 1. Intra-Hallucination Rate Comparison Across Different Models (Intra-HallucRate per round, rounds 1-15, relevant and irrelevant tracks, for Qwen2.5-7B/1.5B, Falcon3-7B/1B, Llama3-8B, and Llama3.2-1B; plot data omitted).

4.3 Analysis of Model Internal Drift and Hallucination Dynamics Mechanisms

To analyze the deeper causes of hallucination phenomena, we examined model internal states, comparing the degree of drift in hidden-layer representations and attention distributions under different context conditions, and exploring hallucination evolution processes and their association with model scale. Table 2 presents the data used in this section; for brevity, only odd-numbered rounds are included (the complete table is available upon request from the authors).
We found that across all models, the four internal drift metrics introduced in Section 3.3 (Cos-Drift, Ent-Drift, JS-Drift, and Spearman-Drift) exhibited increasing trends over successive rounds before stabilizing after approximately 5–7 rounds. For example, in the Llama-3 8B model under relevant context, Cos-Drift increased from 0.1925 in round 1 to 0.2116 in round 15, while Ent-Drift grew from 0.6196 to 1.4643 (see Fig. 2).

[Fig. 2. Internal State Drift Metrics of the Llama3-8B Model — four panels plotting cosine drift, entropy drift, JS drift, and Spearman drift per round under relevant and irrelevant contexts.]

[Fig. 3. Internal State Drift Metrics of the Llama3.2-1B Model — the same four drift indicators per round under relevant and irrelevant contexts.]

Corollary 1. Internal state drift is a universal phenomenon in large language models under accumulated long-context conditions, and hallucination risk gradually increases with cumulative internal drift.

This indicates that long-sequence contexts induce systematic shifts in model hidden states and attention distributions. As dialogue rounds increase, the model's assimilation of context deepens, with its internal "beliefs" continuously deviating from initial semantic anchors, providing a dynamical foundation for hallucination generation.
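The four drift metrics are defined in Section 3.3, which is not reproduced in this excerpt. The sketch below shows one plausible operationalization, comparing a later-round snapshot of a hidden-state vector and an attention distribution against the round-1 baseline; the exact definitions, layers, and normalizations used in the paper are assumptions here.

```python
import numpy as np

def _entropy(p):
    """Shannon entropy of a (normalized) distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def _js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + 1e-12) / (b + 1e-12)))
    return float(0.5 * kl(p, m) + 0.5 * kl(q, m))

def _spearman(x, y):
    """Spearman rank correlation (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

def drift_metrics(h0, h_t, attn0, attn_t):
    """Compare a round-t internal snapshot against the round-1 baseline.

    h0, h_t       : hidden-state vectors (1-D arrays)
    attn0, attn_t : attention distributions over the same positions
    """
    h0, h_t = np.asarray(h0, float), np.asarray(h_t, float)
    cos_drift = 1.0 - float(np.dot(h0, h_t) /
                            (np.linalg.norm(h0) * np.linalg.norm(h_t)))
    return {
        "cos": cos_drift,                                   # Cos-Drift
        "ent": abs(_entropy(attn_t) - _entropy(attn0)),     # Ent-Drift
        "js": _js_divergence(attn0, attn_t),                # JS-Drift
        "spearman": _spearman(attn0, attn_t),               # rank stability
    }
```

Identical snapshots yield zero cosine, entropy, and JS drift and a rank correlation of 1; increasingly dissimilar snapshots push all four metrics away from those anchors, matching the trends tabulated in Table 2.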
Comparing relevant and irrelevant contexts reveals that under relevant contexts, the average values of Cos-Drift and Ent-Drift are generally higher than under
irrelevant contexts. For example, in round 15 of the Llama-3 8B model, Cos-Drift reached 0.2116 under relevant context versus 0.1977 under irrelevant context; Ent-Drift was 1.4643 under relevant context compared to 1.1347 under irrelevant context. Under irrelevant context conditions, JS-Drift and Spearman-Drift stabilized more rapidly, with smaller variance fluctuations.

Table 2. Internal State Drift Metrics Across Different Models and Rounds
(for each metric, the first value is under relevant (Rel) context and the second under irrelevant (Irr) context)

Model         Round  Cos-Drift (Rel/Irr)  Ent-Drift (Rel/Irr)  JS-Drift (Rel/Irr)  Spearman-Drift (Rel/Irr)
Qwen2.5-7B      1    0.1584 / 0.2864      0.8749 / 0.9425      0.6748 / 0.6786     -0.0934 / -0.0764
                3    0.2245 / 0.3022      1.3501 / 1.2301      0.6861 / 0.6876     -0.0368 / -0.0288
                5    0.2460 / 0.3027      1.5692 / 1.3522      0.6886 / 0.6895     -0.0228 / -0.0177
                7    0.2374 / 0.2698      1.7127 / 1.4337      0.6897 / 0.6903     -0.0167 / -0.0128
                9    0.2342 / 0.3067      1.8197 / 1.5059      0.6903 / 0.6908     -0.0130 / -0.0100
               11    0.2420 / 0.2633      1.9031 / 1.5580      0.6907 / 0.6912     -0.0107 / -0.0082
               15    0.2277 / 0.2726      2.0324 / 1.6449      0.6913 / 0.6916     -0.0078 / -0.0061
Qwen2.5-1.5B    1    0.1547 / 0.1390      0.8469 / 0.9020      0.6745 / 0.6785     -0.0934 / -0.0764
                3    0.1317 / 0.1212      1.3075 / 1.1810      0.6861 / 0.6876     -0.0368 / -0.0288
                5    0.1185 / 0.0863      1.5200 / 1.3042      0.6886 / 0.6895     -0.0228 / -0.0177
                7    0.1036 / 0.0709      1.6575 / 1.3851      0.6896 / 0.6903     -0.0167 / -0.0128
                9    0.0995 / 0.0631      1.7594 / 1.4560      0.6903 / 0.6908     -0.0130 / -0.0100
               11    0.1042 / 0.0649      1.8379 / 1.5054      0.6907 / 0.6912     -0.0107 / -0.0082
               15    0.1151 / 0.0666      1.9589 / 1.5874      0.6913 / 0.6916     -0.0078 / -0.0061
Falcon3-7B      1    0.3431 / 0.3614      1.1180 / 1.1887      0.6741 / 0.6774     -0.0928 / -0.0763
                3    0.3227 / 0.3214      1.6267 / 1.5176      0.6863 / 0.6875     -0.0371 / -0.0294
                5    0.3203 / 0.3147      1.8526 / 1.6563      0.6885 / 0.6894     -0.0230 / -0.0181
                7    0.3055 / 0.3080      1.9966 / 1.7477      0.6897 / 0.6903     -0.0168 / -0.0131
                9    0.3028 / 0.3253      2.1039 / 1.8252      0.6903 / 0.6908     -0.0131 / -0.0102
               11    0.3016 / 0.3251      2.1870 / 1.8809      0.6907 / 0.6912     -0.0107 / -0.0084
               15    0.3072 / 0.3431      2.3168 / 1.9728      0.6912 / 0.6916     -0.0079 / -0.0062
Falcon3-1B      1    0.3589 / 0.3629      0.9915 / 1.0776      0.6737 / 0.6772     -0.0927 / -0.0763
                3    0.3551 / 0.3891      1.5198 / 1.4330      0.6862 / 0.6875     -0.0371 / -0.0294
                5    0.4008 / 0.3425      1.7650 / 1.5913      0.6885 / 0.6894     -0.0230 / -0.0181
                7    0.4004 / 0.3706      1.9260 / 1.6976      0.6897 / 0.6903     -0.0168 / -0.0131
                9    0.4624 / 0.3289      2.0461 / 1.7863      0.6903 / 0.6908     -0.0131 / -0.0102
               11    0.4197 / 0.2880      2.1382 / 1.8523      0.6907 / 0.6912     -0.0107 / -0.0084
               15    0.3718 / 0.2410      2.2809 / 1.9616      0.6912 / 0.6916     -0.0079 / -0.0062
Llama3-8B       1    0.1925 / 0.1938      0.6196 / 0.6564      0.6754 / 0.6794     -0.0970 / -0.0793
                3    0.2394 / 0.1766      0.9793 / 0.8544      0.6863 / 0.6877     -0.0390 / -0.0305
                5    0.2347 / 0.1677      1.1455 / 0.9401      0.6886 / 0.6895     -0.0242 / -0.0187
                7    0.2232 / 0.1732      1.2499 / 0.9963      0.6897 / 0.6904     -0.0176 / -0.0136
                9    0.2218 / 0.1770      1.3244 / 1.0456      0.6903 / 0.6909     -0.0138 / -0.0106
               11    0.2185 / 0.1832      1.3800 / 1.0791      0.6907 / 0.6912     -0.0113 / -0.0087
               15    0.2116 / 0.1977      1.4643 / 1.1347      0.6913 / 0.6916     -0.0083 / -0.0064
Llama3.2-1B     1    0.2713 / 0.2935      0.6825 / 0.7169      0.6748 / 0.6789     -0.0970 / -0.0793
                3    0.3292 / 0.3213      1.0470 / 0.9446      0.6862 / 0.6877     -0.0390 / -0.0305
                5    0.3289 / 0.3232      1.2176 / 1.0426      0.6886 / 0.6895     -0.0242 / -0.0187
                7    0.3316 / 0.3313      1.3281 / 1.1084      0.6897 / 0.6904     -0.0176 / -0.0136
                9    0.3263 / 0.3435      1.4099 / 1.1646      0.6903 / 0.6909     -0.0138 / -0.0106
               11    0.3278 / 0.3623      1.4729 / 1.2087      0.6907 / 0.6912     -0.0113 / -0.0087
               15    0.3389 / 0.3546      1.5748 / 1.2801      0.6913 / 0.6916     -0.0083 / -0.0064

Corollary 2. Different types of contexts drive different forms of hallucination mechanisms: relevant contexts promote deep semantic assimilation, producing high-confidence, self-consistent hallucinations, while irrelevant contexts induce concatenation-type or topic-drift hallucinations through information-routing perturbations and anchor reordering.

Relevant contexts cause models to substantially adjust their internal representations within semantic space; if models absorb information containing errors, the resulting hallucinations demonstrate extreme coherence and self-consistency. In contrast, models under irrelevant stimuli tend to reorganize attention routing and semantic anchor sequences, forming "concatenation-style" outputs manifesting
as tangential responses or topic deviations, but with lower internal consistency. Observations indicate that under relevant context conditions, the variances of Cos-Drift and Ent-Drift are generally higher than under irrelevant contexts, while the variances of JS-Drift and Spearman-Drift remain relatively similar across both conditions.

Corollary 3. High volatility (high variance) in internal drift metrics often predicts the occurrence of high-confidence hallucinations, while steady-state low variance suggests the model's hallucinated content has become solidified.

Metric variance reflects model internal state stability. Relevant contexts trigger larger-scale, higher-fluctuation adjustments, potentially producing high-confidence, self-consistent hallucinations that resist correction. Irrelevant contexts cause models to quickly enter a low-variance "immune" steady state, in which erroneous information becomes a new attention anchor resistant to local corrections.

Across all models, JS-Drift typically reaches stable saturation values (approximately 0.690 ± 0.001) within 6–8 rounds, while Spearman-Drift simultaneously converges near zero, indicating rapid reshaping of internal attention structures (see Fig. 4). This "synchronous convergence" phenomenon shows a significant negative correlation with hallucination correctability (ρ ≈ −0.66).
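This synchronous-convergence criterion can be checked mechanically over per-round metric series. A minimal sketch follows; the plateau and near-zero thresholds are illustrative assumptions, not values taken from the paper:

```python
def locking_index(js_series, spearman_series, js_tol=1e-3, sp_tol=0.01):
    """Return the first index at which the attention structure looks 'locked':
    the round-to-round change in JS-Drift has fallen below js_tol while
    |Spearman-Drift| is already below sp_tol. Returns None if never reached.
    Both series hold per-round values for the same context condition."""
    for t in range(1, len(js_series)):
        js_plateaued = abs(js_series[t] - js_series[t - 1]) < js_tol
        rank_converged = abs(spearman_series[t]) < sp_tol
        if js_plateaued and rank_converged:
            return t
    return None
```

With these illustrative thresholds and the odd-round series for Llama-3 8B under relevant context from Table 2, the detector fires at the last tabulated entry; tighter or looser tolerances shift the detected locking point accordingly.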
[Fig. 4. Comparison of JS-Drift and Spearman-Drift under Relevant vs. Irrelevant Context Injection — four panels plotting JS-Drift and |Spearman-Drift| per round for all six models under relevant and irrelevant contexts.]

Corollary 4. Once a model's internal attention structure enters a "locking threshold" (JS-Drift saturation and Spearman-Drift approaching zero), generated hallucinations enter a solidified state, making conventional dialogue correction methods ineffective.

When attention-pattern reshaping reaches steady state, the model enters a new stable information-routing phase. Subsequently, the effectiveness of injected corrective information decreases significantly, making it difficult to change previously formed erroneous "beliefs." This explains the "hallucination solidification" phenomenon: once incorrect routing paths and factual anchors form, the model continues producing factually incorrect but structurally coherent content, and radical measures such as truncating the dialogue context or inserting explicit counterfactual instructions are required to break the incorrect outputs.

Comparing the influence of model scale reveals that larger models show more pronounced differences in Cos-Drift means between relevant and irrelevant contexts. Llama-3 8B maintains a ∆Cos of approximately 0.05–0.07 and Falcon-3 7B shows about 0.02, while smaller models (like Llama-3.2 1B) display differences approaching zero or even negative values.

Corollary 5.
Model capacity determines semantic selectivity: larger models better distinguish useful information from noise interference, while insufficient-capacity models tend to indiscriminately absorb all context and more readily produce hallucinations with random biases.

Large models can significantly assimilate highly relevant information fragments while suppressing the absorption of noise context; however, if the relevant fragments contain biases, the resulting hallucinations also demonstrate extremely high confidence. Smaller models with limited representation space treat relevant and irrelevant information equally, causing intermingled semantic anchors and producing low-confidence, structurally loose "patchwork" hallucinations.

Analyzing the relationship between Ent-Drift growth slopes and contemporaneous ∆Cos across the six models revealed a significant negative correlation (ρ ≈ −0.71, p < 0.01). In Qwen-2.5 1.5B and Llama-3.2 1B, when ∆Cos approached 0, Ent-Drift slopes exceeded 0.11/round.

Corollary 6. A "seesaw" compensatory mechanism
exists between semantic assimilation and attention divergence: when models cannot deeply assimilate context, they sacrifice attention focus to expand retrieval range, thereby increasing the probability of concatenation-type hallucinations.

Small models with insufficient assimilation capacity compensate for comprehension gaps by increasing Ent-Drift, but the resulting attention dispersion weakens constraints on key factual anchors, facilitating topic drift. Large models, having sufficiently absorbed context information through Cos-Drift, maintain focused attention and avoid topic drift, but may assimilate incorrect information more thoroughly, potentially producing highly self-consistent hallucinated content.

5 Conclusion

This work provides the first large-scale, cross-model evidence that hallucinations in LLMs arise from systematic, context-induced representation drift. By titrating relevant and irrelevant snippets into TruthfulQA prompts, we show that hallucination rates rise monotonically and saturate once attention topology "locks," while internal cosine, entropy, JS, and Spearman drifts converge. The resulting overt–covert correlations hold across six model families and scales, revealing size-dependent trade-offs between semantic assimilation and attention diffusion. These findings establish robust internal precursors to hallucination, furnishing empirical foundations for intrinsic detection, context-aware mitigation, and future architecture-level safeguards.

References

1. Azaria, A., Mitchell, T.: The internal state of an LLM knows when it's lying. In: Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 967–976. Singapore (Dec 2023)
2. Bazarova, A., Yugay, A., et al.: Hallucination detection in LLMs via topological divergence on attention graphs. arXiv preprint arXiv:2504.10063 (2025)
3. Brown, T., Mann, B., Ryder, N., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H.
(eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901. Curran Associates, Inc. (2020)
4. Bubeck, S., Chandrasekaran, V., Eldan, R., et al.: Sparks of artificial general intelligence: Early experiments with GPT-4 (2023), https://arxiv.org/abs/2303.12712
5. Chen, C., Liu, K., Chen, Z., et al.: INSIDE: LLMs' internal states retain the power of hallucination detection (2024), https://arxiv.org/abs/2402.03744
6. Chen, J., Kim, G., Sriram, A., et al.: Complex claim verification with evidence retrieved in the wild. In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 3569–3587. Mexico City, Mexico (Jun 2024)
7. Chowdhery, A., Narang, S., Devlin, J., et al.: PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research 24(240), 1–113 (2023)
8. Chuang, Y.S., Qiu, L., Hsieh, C.Y., et al.: Lookback Lens: Detecting and mitigating contextual hallucinations in large language models using only attention maps. In: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 1419–1436. Miami, Florida, USA (Nov 2024)
9. Duan, H., Yang, Y., Tam, K.Y.: Do LLMs know about hallucination? An empirical investigation of LLMs' hidden states (2024), https://arxiv.org/abs/2402.09733
10. Fadeeva, E., Rubashevskii, A., Shelmanov, A., et al.: Fact-checking the output of large language models via token-level uncertainty quantification. In: Findings of the Association for Computational Linguistics: ACL 2024, pp. 9367–9385 (Aug 2024)
11. Farquhar, S., Kossen, J., Kuhn, L., Gal, Y.: Detecting hallucinations in large language models using semantic entropy. Nature 630(8017), 625–630 (2024)
12.
Filippova, K.: Controlled hallucinations: Learning to generate faithfully from noisy data. In: Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 864–870. Online (Nov 2020)
13. Flemings, J., Zhang, W., Jiang, B., et al.: Characterizing context influence and hallucination in summarization. arXiv preprint arXiv:2410.03026 (2024)
14. Gao, Y., Xiong, Y., et al.: Retrieval-augmented generation for large language models: A survey (2024), https://arxiv.org/abs/2312.10997
15. Grattafiori, A., Dubey, A., Jauhri, A., et al.: The Llama 3 herd of models. arXiv preprint arXiv:2407.21783 (2024)
16. Huang, L., Yu, W., et al.: A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems 43(2), 1–55 (2025)
17. Huang, Y., Song, J., Wang, Z., et al.: Look before you leap: An exploratory study of uncertainty analysis for large language models. IEEE Transactions on Software Engineering 51(2), 413–429 (2025)
18. Ji, Z., Lee, N., Frieske, R., et al.: Survey of hallucination in natural language generation. ACM Computing Surveys 55(12), 1–38 (2023)
19. Kadavath, S., Conerly, T., Askell, A., et al.: Language models (mostly) know what they know (2022), https://arxiv.org/abs/2207.05221
20. Lin, S., Hilton, J., Evans, O.: TruthfulQA: Measuring how models mimic human falsehoods. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214–3252 (May 2022)
21. Manakul, P., Liusie, A., Gales, M.: SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9004–9017. Singapore (Dec 2023)
22. Maynez, J., Narayan, S., Bohnet, B., McDonald, R.: On faithfulness and factuality in abstractive summarization. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp.
1906–1919. Online (Jul 2020)
23. Pan, L., Saxon, M., Xu, W., et al.: Automatically correcting large language models: Surveying the landscape of diverse automated correction strategies. Transactions of the Association for Computational Linguistics 12, 484–506 (2024)
24. Penedo, G., et al.: The Falcon series of open language models. arXiv preprint arXiv:2311.16867 (2023)
25. Peng, B., Galley, M., He, P., et al.: Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813 (2023)
26. Rawte, V., Sheth, A., Das, A.: A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922 (2023)
27. Su, W., Tang, Y., Ai, Q., et al.: Mitigating entity-level hallucination in large language models. In: Proceedings of the 2024 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, pp. 23–31 (2024)
28. Touvron, H., Lavril, T., Izacard, G., et al.: LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023)
29. Wang, P., Liu, Y., Lu, Y., et al.: What are models thinking about? Understanding large language model hallucinations "psychology" through model inner state analysis (2025), https://arxiv.org/abs/2502.13490
30. Yang, A., Yang, B., Zhang, B., et al.: Qwen2.5 technical report. arXiv preprint arXiv:2412.15115 (2024)
UNCLE: Uncertainty Expressions in Long-Form Generation

Ruihan Yang1*, Caiqi Zhang2*, Zhisong Zhang3†, Xinting Huang3, Dong Yu3, Nigel Collier2†, Deqing Yang1†
1Fudan University  2University of Cambridge  3Tencent AI Lab
{rhyang17,deqingyang}@fudan.edu.cn, {cz391,nhc30}@cam.ac.uk, zhisonzhang@tencent.com

Abstract

Large Language Models (LLMs) are prone to hallucination, particularly in long-form generations. A promising direction to mitigate hallucination is to teach LLMs to express uncertainty explicitly when they lack sufficient knowledge. However, existing work lacks a direct and fair evaluation of LLMs' ability to express uncertainty effectively in long-form generation. To address this gap, we first introduce UNCLE, a benchmark designed to evaluate uncertainty expression in both long- and short-form question answering (QA). UNCLE spans five domains and comprises 4k long-form QA instances and over 20k short-form QA pairs. Our dataset is the first to directly bridge short- and long-form QA with paired questions and gold-standard answers. Along with the benchmark, we propose a suite of new metrics to assess models' capabilities to selectively express uncertainty. Using UNCLE, we then demonstrate that current models fail to convey uncertainty appropriately in long-form generation. We further explore both prompt-based and training-based methods to improve models' performance, with the training-based methods yielding greater gains. Further analysis of alignment gaps between short- and long-form uncertainty expression highlights promising directions for future research using UNCLE.1

1 Introduction

Large Language Models (LLMs) exhibit strong text generation abilities across diverse tasks and domains. However, they often hallucinate by generating incorrect or fabricated information (Zhang et al., 2023; Huang et al., 2023), especially when lacking sufficient knowledge (Gekhman et al., 2024a; Li et al., 2023).
*Equal contribution, listed in alphabetical order. Work done during a Tencent AI Lab internship. †Corresponding authors.
1 Project page: https://github.com/rhyang2021/UNCLE

[Figure 1: Evaluating uncertainty in long-form generation is challenging: different models may express uncertainty across varying aspects, often focusing on less important ones. Restricting the key aspects in long-form generation helps ensure more consistent evaluation. The figure contrasts open-ended long-form QA ("Introduce the movie The Godfather"), where one model focuses on the plot and another on the cast and uncertainty is expressed on unimportant aspects, with long-form QA restricted to key aspects (director, producer, cast members, release date, running time, language, and awards received), where uncertainty is expressed on the key aspects themselves.]

Enabling models to either refuse to answer or explicitly express uncertainty has emerged as a promising direction to reduce hallucinations and enhance trustworthiness
(Zhang et al., 2024a,b; Yang et al., 2024b). Current research on uncertainty expression in LLMs focuses primarily on short-form QA, where responses typically contain fewer than ten words (Kuhn et al., 2023; Lin et al., 2023; Fadeeva et al., 2023; Wang et al., 2024). However, real-world applications often require much longer outputs (exceeding 100 words) that may contain a mixture of correct and incorrect statements (Zhang et al., 2024a; Huang et al., 2024a). The challenge of estimating uncertainty in long-form generation remains under-explored.

Unlike previous post-hoc methods for long-form uncertainty estimation (Fadeeva et al., 2023; Zhang et al., 2024a; Huang et al., 2024a; Jiang et al., 2024), which provide numerical estimates of output uncertainty, we explore the use of linguistic uncertainty expressions (e.g., "it is unclear whether" or "I am not sure"). These expressions are generated along with the output responses in a single decoding pass to convey uncertainty or lack of knowledge (Zhou et al., 2023; Kim et al., 2024). We argue that such explicit and interpretable expressions not only align more closely with human communication but also offer efficiency advantages, as they are produced on-the-fly with minimal additional computational cost.

Regarding linguistic uncertainty expression in long-form generation, Yang et al. (2024b) propose a two-stage training approach to address uncertainty suppression and alignment issues. Band et al. (2024) introduce linguistic calibration, enabling models to express uncertainty at different levels (e.g., "I am 70% sure"). However, due to the open-ended nature of long-form QA, different models may focus on different aspects and express uncertainty from different angles, making direct comparison challenging (upper; Figure 1).
As a result, prior work does not answer a key research question: how can we fairly evaluate different models' ability to accurately express uncertainty in long-form generation?

In this work, we introduce UNCLE (Uncertainty in Long-form Expressions), the first benchmark designed to comprehensively evaluate a model's ability to accurately express uncertainty in both long-form and short-form generation (Contribution #1). Our dataset is the first to directly bridge short- and long-form QA with paired questions and gold answers. Each question contains one topic entity with multiple key aspects that models are expected to cover in their responses (Figure 1, bottom; more examples in Table 5). Each aspect is associated with a short-form question and a ground-truth answer. The dataset spans five domains (biographies, companies, movies, astronomical objects, and diseases), containing approximately 4k long-form QA instances together with more than 20k short-form QA pairs. We also propose a suite of novel metrics to provide a comprehensive evaluation of uncertainty expression (Section 3).

Using UNCLE as a unified testbed, we evaluate ten popular LLMs to assess their ability to accurately express uncertainty in long-form generation. We reveal three key findings (Contribution #2): (1) Although models can generally provide correct answers for known facts, current models show limited ability to accurately express uncertainty for unknown facts. (2) Closed-source models tend to use uncertainty expressions more frequently, while open-source models express uncertainty more accurately. (3)
Models are more likely to use uncertainty expressions in short-form QA than in long-form QA (Section 5).

Given that UNCLE provides a direct comparison between short- and long-form uncertainty expressions, we investigate strategies to enhance model performance in both formats (Contribution #3; Section 6). We consider both prompt-based and training-based approaches. We experiment with various training settings: exclusively short-form QA, exclusively long-form QA, and a mixture of both. Our results demonstrate that both prompt-based and training-based approaches improve over the base model, with training-based methods generally achieving greater gains. Meanwhile, training on long-form tasks benefits short-form tasks, but not vice versa. Furthermore, we analyze the alignment between short- and long-form uncertainty expressions and reveal a significant alignment gap (Section 7). We encourage future research to develop methods with UNCLE that perform robustly across both QA formats.

2 Related Work

Evaluating Long-form Factuality and Uncertainty. The evaluation of factuality in long-form generation has been extensively studied (Min et al., 2023a; Wei et al., 2024b; Zhao et al., 2024a; Song et al., 2024; Chiang and Lee, 2024), typically by decomposing the text into atomic claims and verifying each claim using external knowledge sources. Existing LLMs have demonstrated strong performance in generating and verifying atomic claims, achieving low error rates compared to human annotation (Min et al., 2023a; Zhang et al., 2024a). However, none of these studies specifically examine whether model-generated responses contain uncertainty expressions or whether those expressions are accurate.
On the other hand, existing studies on estimating uncertainty in long-form generation primarily focus on post-hoc methods (Zhang et al., 2024a,b; Huang et al., 2024b; Jiang et al., 2024), where a confidence score is assigned to each response and traditional metrics like Spearman correlation or AUROC are used for response-level evaluation. Limited work has been done to assess how accurately models express uncertainty in long-form generation for each claim.

Table 1: Comparison between UNCLE and other popular datasets in uncertainty estimation.

Dataset                                        Short-form  Long-form  Gold Ans.
TriviaQA (Joshi et al., 2017a)                     ✓                      ✓
Natural Questions (Kwiatkowski et al., 2019)       ✓                      ✓
SimpleQA (Wei et al., 2024a)                       ✓                      ✓
FactScore (Min et al., 2023b)                                  ✓
LongFact (Wei et al., 2024c)                                   ✓
WildHallu (Zhao et al., 2024b)                                 ✓
UNCLE (Ours)                                       ✓           ✓          ✓

Training LLMs to Express Uncertainty. Most existing approaches for training language models (LMs) to express uncertainty focus on short-form responses, where uncertainty is expressed about a single aspect. Several methods (Xu et al., 2024; Zhang et al., 2024c; Han et al., 2024; Lin et al., 2022; Madaan et al., 2023a) employ a two-stage strategy: first, the model answers the question, and then it is prompted again to provide a confidence label for the answer. Another line of work (Cheng et al., 2024; Chen et al., 2024; Li et al., 2024; Wang et al., 2025) encourages models to explicitly state "I don't know" when faced with unknown information, instead of generating incorrect answers with low-confidence labels. Teaching models to express uncertainty in long-form responses
remains challenging due to the complexity of handling mixed uncertainties across multiple aspects in open-ended questions. Recent work has explored this challenge. LoGU (Yang et al., 2024b) proposes a two-step training framework: first, supervised fine-tuning to mitigate uncertainty suppression in long-form responses, followed by preference learning to address uncertainty misalignment. Linguistic Calibration (Band et al., 2024) explores the feasibility of assigning a numerical confidence score to statements during generation. However, both approaches overlook a key issue: different models may produce different answers, and each answer may express uncertainty from different angles. This variability hinders direct comparison of models' uncertainty expression.

3 UNCLE Construction

3.1 Motivation

Evaluating uncertainty expression in long-form generation is challenging due to the open-ended nature of existing long-form QA datasets. Most existing datasets (Min et al., 2023a; Wei et al., 2024b; Zhao et al., 2024a) focus on questions regarding a single specific topic (e.g., a person or an event) and prompt models to generate information broadly related to a topic entity (e.g., "Tell me a biography of [PERSON]"). Due to this openness, any relevant details about the topic are generally accepted, making uncertainty evaluation difficult. This open-endedness raises two key issues: 1) models may express uncertainty in different aspects, complicating cross-model comparisons, and 2) models often express uncertainty for unimportant details. As shown in Figure 1, given the question "Introduce the movie The Godfather," different models may emphasize various aspects, such as the plot, cast, or the film's impact and awards, complicating fair comparisons across models. These challenges motivate the construction of a dataset requiring long-form generation while maintaining relatively fixed answer aspects.
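Fixing the answer aspects amounts to a simple templating step over an entity and its key aspects. A minimal sketch follows; the template wording is a hypothetical illustration, not the exact prompt used in the benchmark:

```python
def build_long_form_query(entity, aspects):
    """Build a long-form question q(e|A) that pins down the key aspects
    the model must cover. `aspects` is a non-empty list of aspect names."""
    if not aspects:
        raise ValueError("at least one key aspect is required")
    if len(aspects) == 1:
        listed = aspects[0]
    else:
        listed = ", ".join(aspects[:-1]) + ", and " + aspects[-1]
    return f"Introduce {entity}, including its {listed}."

def build_short_form_queries(entity, aspects):
    """One concise, fact-based question q(e|A_i) per key aspect."""
    return {a: f"What is the {a} of {entity}?" for a in aspects}
```

Pairing each long-form question with its per-aspect short-form questions is what lets the benchmark compare uncertainty expression across the two formats on identical facts.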
Specifically, we propose that models must cover several key aspects within their responses, maintaining the long-form nature of answers while improving coherence and comparability. Formally, for a question q about an entity e, we define a set of key aspects A that must be included in the final answer. In the earlier example, the new question would be: "Introduce the movie The Godfather, including its director, producer, cast members, release date, running time, language, and awards received."

3.2 Data Collection

To collect the questions in our dataset, we need the entities E, key aspects A, and a knowledge base C. We adopt Wikidata as the source for all E, A, and C. Wikidata consists of knowledge triplets of the form (Subject, Predicate, Object); we use the predicates to construct our aspects. We include five domains in our dataset: biographies, companies, movies, astronomical objects, and diseases. The detailed construction procedure is outlined as follows:

Step 1: Sampling Entities E. For each domain, we select entities from Wikidata spanning different categories and frequencies. Following Liu and Wu (2024), we use the number of properties associated with an entity as a proxy for its frequency, which serves as an indicator of the amount of information available online for that entity. This approach ensures a diverse set of entities with varying degrees of informational richness.

Step 2: Sampling Key Aspects A. We identify key aspects A for each domain by selecting
the most important and relevant properties for answering questions. For instance, birth date and birthplace are essential for biographies, while founders and founding dates are crucial for companies. We retrieve and count the frequencies of all properties associated with E, and retain the most frequent ones. Five distinct groups of key aspects are selected for our five domains, followed by human verification. More details about the selection process are provided in Appendix A.

Table 2: Uncertainty confusion matrix. Correct, Incorrect, and Uncertain are based on the model's response, while Known and Unknown refer to the results of knowledge probing. Ideally, the model should correctly state known facts and express uncertainty when faced with unknown facts (cells A^cor_kn and A^unc_unk).

           Correct      Incorrect     Uncertain    Total
Known      A^cor_kn     A^incor_kn    A^unc_kn     A_kn
Unknown    A^cor_unk    A^incor_unk   A^unc_unk    A_unk
Total      A^cor        A^incor       A^unc

Step 3: Generating Questions. For each entity, we generate two types of questions: 1) Long-form: these require comprehensive answers covering multiple key aspects in a coherent paragraph. 2) Short-form: concise, fact-based questions targeting specific aspects. GPT-4o is prompted to generate questions, with ground-truth answers provided for each short-form question. We maintain a dataset of approximately 4k long-form questions for affordability and usability. As shown in Table 1, UNCLE is the only dataset that includes both short- and long-form QA with paired questions and gold answers. Additional dataset statistics (Table 5) are provided in Appendix A. Details of the human annotation process for quality verification are included in Appendix B.

4 Task Definition and Evaluation

We define a long- and short-form generation task with restricted key aspects as follows. For an entity e, its corresponding key aspects are denoted as A = ∪_i A_i. For long-form QA, we construct a query q(e | A) specifying the key aspects to cover (e.g., "Introduce [ENTITY] to me, including [A1], [A2], [A3], . . . "). We prompt language model M with q(e | A); the response is denoted R ∼ M(R | q(e | A)). For short-form QA, we prompt M with an individual question q(e | A_i) for each aspect; the short-form response is denoted R_i ∼ M(R_i | A_i).

Known/Unknown Detection. For a specific LLM M, we categorize the aspects A_i into two groups
For long-form QA, we construct a query q(e | A), specifying the key aspects to cover (e.g., "Introduce [ENTITY] to me, including [A1], [A2], [A3], ..."). We prompt a language model M with q(e | A). The response is denoted as R ~ M(R | q(e | A)). For short-form QA, we prompt M with an individual question for each aspect, q(e | A_i). The short-form response is denoted as R_i ~ M(R_i | q(e | A_i)).

Known/Unknown Detection. For a specific LLM M, we categorize the aspects A_i into two groups based on the model's knowledge: known aspects A_kn and unknown aspects A_unk. For knowledge probing, we follow previous work (Gekhman et al., 2024b; Yang et al., 2024c) and query the model multiple times; if M consistently fails to provide correct answers, the corresponding knowledge is regarded as unknown.

Response Categorization. The response R is expected to include information about the key aspects of A. These aspects are divided into three subsets based on correctness: A^cor for correctly answered aspects, A^incor for incorrectly answered aspects, and A^unc for aspects where the model expresses uncertainty. We follow the same categorization for short-form responses R_i.

We then construct the uncertainty confusion matrix shown in Table 2. In our setting, existing metrics such as AUROC and ECE are not applicable, as our linguistic uncertainty level is binary rather than continuous. Therefore, we propose a suite of new
evaluation metrics to comprehensively assess the model's ability to express uncertainty:

Metric 1 (Factual Accuracy). Let A^cor denote the set of correct aspects, and A^incor denote the set of incorrect aspects in the response. The Factual Accuracy (FA) is then defined as

  FA = |A^cor| / (|A^cor| + |A^incor|).

FA measures the proportion of aspects that are stated correctly among all aspects that are stated with certainty.

Metric 2 (Uncertain Accuracy). Let A^unc denote the set of aspects answered with uncertainty, and A^unc_unk denote the set of unknown aspects within A^unc. The Uncertain Accuracy (UA) is then defined as

  UA = |A^unc_unk| / |A^unc|.

UA calculates how often the model accurately expresses uncertainty, i.e., among the aspects the model expresses with uncertainty, the fraction that are truly unknown.

Metric 3 (Known to Correct Rate). Let A_kn denote the set of all known aspects, and A^cor_kn denote the set of known aspects answered correctly. The Known to Correct Rate (KCR) is then defined as

  KCR = |A^cor_kn| / |A_kn|.

KCR measures the proportion of aspects known to the model that are correctly expressed in the generated response.

Figure 2: Evaluation Pipeline for UNCLE. The framework consists of three steps: detecting known/unknown key aspects, generating long- and short-form answers, and fact-checking. (The figure's worked example poses short- and long-form questions about Ben Whishaw, probes which aspects are known, and fact-checks each claim; icons mark correct answers, incorrect answers, and uncertainty expressions.)

Metric 4 (Unknown to Uncertain Rate). Let A_unk denote the set of all unknown aspects, and A^unc_unk denote the set of unknown aspects expressed as uncertainty. The Unknown to Uncertain Rate (UUR) is then defined as

  UUR = |A^unc_unk| / |A_unk|.

UUR measures the proportion of aspects the model does not know that are expressed with uncertainty rather than incorrectly stated as facts.

Metric 5 (Expression Accuracy). With the previously defined notation, the Expression Accuracy (EA) is defined as

  EA = (|A^cor_kn| + |A^unc_unk|) / (|A_kn| + |A_unk|).

EA is the micro-average of KCR and UUR, quantifying the
proportion of aspects that are correctly expressed, i.e., the model maintains correct expressions for aspects it knows and expresses uncertainty for aspects it does not know.

Evaluation Pipeline. Figure 2 illustrates the overview of our evaluation pipeline. Step 1: Known/Unknown Detection. To assess whether the model knows a key aspect, we prompt it five times with the corresponding short-form question at a temperature of 1 (Yang et al., 2024d; Gekhman et al., 2024c). If none of the five responses is correct, we classify the aspect as unknown; otherwise, it is considered known. Step 2: Question Answering. We then prompt the model to answer both short- and long-form questions with temperature 0. In the prompt, we explicitly ask the model to express uncertainty. Step 3: Fact-checking. Using GPT-4o (OpenAI et al., 2024), we first collect all answers in which the model expresses uncertainty. For the remaining certain answers, we use GPT-4o to compare them against a gold reference for each key aspect. Each aspect is then classified as correct, incorrect, or uncertain. Step 4: Calculating Metrics. We then draw the confusion matrix in Table 2 and calculate our five metrics. We also perform a human evaluation (see Appendix B) to verify the reliability of our automated assessment pipeline. All prompts are listed in Appendix C.

5 LLMs' Performance on UNCLE

Leveraging UNCLE, we first explore the following question: How well do current LLMs selectively express uncertainty in long-form generation?
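Steps 1 and 4 of the pipeline can be sketched as follows. This is a minimal sketch under our own naming assumptions: `ask_model` stands in for the LLM call, `is_correct` for the GPT-4o-based answer comparison, and the metric formulas follow Metrics 1-5 above.

```python
def probe_known(ask_model, question, gold, is_correct, n=5):
    """Step 1: sample n answers at temperature 1; the aspect counts
    as known iff at least one sampled answer is correct."""
    return any(is_correct(ask_model(question, temperature=1.0), gold)
               for _ in range(n))

def uncle_metrics(cells):
    """Step 4: compute FA, UA, KCR, UUR, EA from the Table 2 cells,
    given as counts {(knowledge, outcome): n} with knowledge in
    {"kn", "unk"} and outcome in {"cor", "incor", "unc"}."""
    n = lambda k, o: cells.get((k, o), 0)
    cor, incor, unc = (n("kn", o) + n("unk", o) for o in ("cor", "incor", "unc"))
    kn = sum(n("kn", o) for o in ("cor", "incor", "unc"))
    unk = sum(n("unk", o) for o in ("cor", "incor", "unc"))
    return {
        "FA": cor / (cor + incor),               # correct among certainly stated aspects
        "UA": n("unk", "unc") / unc,             # uncertain aspects that are truly unknown
        "KCR": n("kn", "cor") / kn,              # known aspects stated correctly
        "UUR": n("unk", "unc") / unk,            # unknown aspects hedged, not asserted
        "EA": (n("kn", "cor") + n("unk", "unc")) / (kn + unk),  # micro-average
    }

# Toy confusion matrix: 9 known aspects, 5 unknown aspects.
cells = {("kn", "cor"): 6, ("kn", "incor"): 2, ("kn", "unc"): 1,
         ("unk", "cor"): 0, ("unk", "incor"): 2, ("unk", "unc"): 3}
print(uncle_metrics(cells))  # FA = 0.6, UA = 0.75, KCR = 6/9, UUR = 0.6, EA = 9/14
```

Step 3's fact-checking (mapping each aspect to cor/incor/unc) is what fills `cells`; the sketch assumes those judgments are already available.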
5.1 Models and Prompts

We conduct our experiments with both open- and closed-source models: GPT-3.5-turbo-1106 (OpenAI, 2022), GPT-4-1106-preview (OpenAI et al., 2024), Claude-3.5-Sonnet (Anthropic, 2023), DeepSeek-Chat (DeepSeek-AI et al., 2024), Llama3 Instruct (8B and 70B) (Meta, 2024), Mistral Instruct (7B and 8x7B) (Jiang et al., 2023), and Qwen2 Instruct (7B and 72B) (Yang et al., 2024a). For both long-form and short-form generation, the model is directly prompted to express uncertainty with "You should express uncertainty for any aspect you are unsure about." (see the full prompt in Appendix C).

Table 3: Performance of Different Models on UNCLE. All values are presented as percentages (darker colors in the original indicate higher scores). Metrics include Factual Accuracy (FA), Uncertain Accuracy (UA), Unknown to Uncertain Rate (UUR), Known to Correct Rate (KCR), and Expression Accuracy (EA).

                        Long-Form                       Short-Form
  Method                FA    UA    UUR   KCR   EA      FA    UA    UUR   KCR   EA
  Closed-source models
  GPT-3.5               73.7  32.3  2.08  87.9  66.8    76.7  2.63  0.54  97.4  74.7
  GPT-4                 76.9  13.8  6.00  87.2  74.8    84.2  4.82  3.11  95.1  81.1
  Claude-3.5-Sonnet     75.3  8.25  0.97  96.1  72.0    86.7  3.08  2.76  97.4  84.7
  DeepSeek-Chat         73.7  29.2  0.83  94.5  70.6    78.6  7.45  2.90  95.5  76.7
  Open-source models
  Llama-3-8B            58.0  41.2  1.12  81.6  50.7    63.3  42.3  2.42  89.2  55.8
  Llama-3-70B           70.2  40.0  0.79  85.4  65.8    75.2  25.0  1.86  92.5  71.4
  Mistral-7B            52.7  46.7  2.16  78.9  46.9    58.8  25.9  3.47  89.7  53.8
  Mixtral-8x7B          66.3  37.2  1.87  83.4  61.0    72.9  22.3  5.41  92.6  68.8
  Qwen2-7B              48.7  57.8  4.00  79.9  41.4    47.8  31.2  4.05  85.5  43.1
  Qwen2-72B             63.2  44.3  2.78  84.7  58.8    68.9  22.8  4.75  92.2  64.5

5.2 Results

Models exhibit consistently low UA and UUR in both long- and short-form QA. As shown in Table 3,
all models have UUR below 10%, indicating a limited ability to express unknown cases through uncertainty expressions. UA generally remains below 50%, and open-source models generally perform better. A closer analysis reveals that open-source models tend to produce more uncertainty expressions, resulting in a larger A^unc. However, many of these expressions do not correspond to truly unknown cases, leading to lower UA. In contrast, closed-source models produce fewer uncertainty expressions but do so more accurately, resulting in higher UA. Overall, current models struggle to express uncertainty accurately in both long- and short-form QA.

Models achieve relatively high KCR and EA across QA formats. All models exceed 75% KCR on long-form QA and 85% on short-form QA, indicating strong performance on correctly stating known knowledge. Notably, the models with the highest KCR also achieve the highest EA. This is because EA rewards both correct answers to known questions (KCR) and appropriate handling of unknowns (UUR). Ideally, models should excel in both KCR and UUR, but current performance on UUR remains inadequate.

Short-form QA yields higher FA, KCR, and EA, but lower UA, compared to long-form QA. A closer examination reveals that models tend to express uncertainty more frequently in short-form QA, resulting in a larger A^unc. However, many of these expressions do not correspond to truly unknown cases, which lowers UA. Meanwhile, the higher FA observed in short-form settings is likely due to the narrower scope of each question, which reduces noise and improves factual accuracy. Further analysis is presented in §7.

6 Teaching LLMs to Express Uncertainty

We further explore both prompt-based and training-based methods to teach LLMs to express uncertainty in long-form generation (prompts and more training details are in Appendix D).

6.1 Experiment Settings

Prompt-based Methods.
1) Unc-Zero: The model is directly prompted to express uncertainty in its output whenever it is unsure about any claims. This setting is identical to that used in Section 5. 2) Unc-Few: Based on Unc-Zero, we provide the model with an additional set of 10 hand-crafted QA examples, where uncertainty is explicitly expressed in the answers, as in-context learning examples. 3) Pair-Few: Extending Unc-Few, we provide the model with both a response containing only certain expressions, R_cert, and another with uncertainty expressions, R_unc, for each query. Each example is formatted as <Q, R_cert, R_unc>. The aim of including both R_cert and R_unc is to teach models when to express uncertainty through in-context learning. 4) Self-Refine (Madaan et al., 2023b): We apply a draft-and-refine setup. The model is asked to first generate an initial response and then refine the uncertain claims into explicit uncertainty expressions in a second pass.

Training-based Methods. We employ three training settings to teach the model to express uncertainty. Our UNCLE dataset is used only for evaluation. 1) Short-DPO: Following Cheng et al. (2024), we conduct a two-stage SFT + DPO training using only short-form QA pairs. 2) Long-DPO: Following Yang et al. (2024b), we apply a similar two-stage SFT + DPO training approach, but using long-form QA examples exclusively. 3) Mix-DPO: We mix training samples from Short-DPO and Long-DPO in a 3:7 ratio and perform two-stage training. To ensure fairness, the training datasets are kept the same size. Training details can be found in Appendix D.2.

Table 4: Performance of Different Prompting and Training Strategies on UNCLE. All values are presented as percentages (darker colors in the original indicate higher scores). Metrics include Factual Accuracy (FA), Uncertain Accuracy (UA), Unknown to Uncertain Rate (UUR), Known to Correct Rate (KCR), and Expression Accuracy (EA).

                         Long-Form                       Short-Form
  Method                 FA    UA    UUR   KCR   EA      FA    UA    UUR   KCR   EA
  Llama3-8B-Instruct
    Unc-Zero             58.0  41.2  1.12  81.6  50.7    63.3  42.3  2.42  89.2  55.8
  Prompt
    Unc-Few              58.1  64.0  12.5  75.8  51.5    72.4  41.3  86.2  23.4  47.6
    Pair-Few             58.6  51.1  11.4  75.8  51.0    69.1  39.8  92.1  13.9  44.0
    Self-Refine          53.8  37.2  12.4  73.2  47.8    56.2  34.9  84.5  21.5  47.6
  Training
    Short-DPO            56.7  48.4  13.4  73.7  50.5    69.2  62.6  38.6  79.5  63.7
    Mix-DPO              58.5  56.1  34.2  65.7  53.6    69.3  62.0  38.1  79.4  63.5
    Long-DPO             51.5  59.6  40.7  57.6  51.1    79.3  55.0  71.3  61.0  65.0
  Mistral-7B-Instruct
    Unc-Zero             52.7  46.7  2.16  78.9  46.9    58.8  25.9  3.47  89.7  53.8
  Prompt
    Unc-Few              54.9  50.6  7.00  79.8  49.5    71.6  53.5  58.1  67.5  63.6
    Pair-Few             54.2  46.8  2.77  79.7  47.7    60.0  45.4  16.0  83.6  55.6
    Self-Refine          41.7  43.1  8.60  76.3  42.5    45.1  42.6  7.40  76.8  45.8
  Training
    Short-DPO            53.4  51.9  10.8  79.5  47.0    69.8  56.7  45.8  77.0  64.1
    Mix-DPO              56.9  54.6  43.1  61.2  53.7    64.1  54.9  28.1  81.9  59.5
    Long-DPO             53.3  59.6  37.8  62.2  52.0    70.1  51.6  58.1  62.7  60.8

6.2 Results

Both prompt- and training-based methods improve performance over Unc-Zero. We observe a substantial increase in UA and UUR, indicating improved capability in expressing uncertainty accurately. For instance, with Llama3, the UUR increases from 1.12% under Unc-Zero to 34.2% with Mix-DPO and 40.7% with Long-DPO. Training-based methods generally yield greater improvements than prompt-based methods.

Training-based methods can better balance UUR and KCR.
In contrast, the prompt-based methods tend to express excessive uncertainty, leading to a high UUR. For example, with Llama3 on the short-form task, Pair-Few shows a high UUR of 92.1% but a low KCR of 13.9%. On the other hand, training-based methods like Long-DPO achieve a high UUR of 71.3% while maintaining a high KCR of 61.0%. The more balanced UUR and KCR also result in generally better EA compared to prompt-based methods.

Training on long-form tasks benefits short-form tasks, but not vice versa. For example, Llama3's Long-DPO, trained on long-form tasks, achieves high UUR (71.3%) and KCR (61.0%) on short-form tasks. In contrast, Llama3's Short-DPO performs poorly on long-form tasks, with a UUR of only 13.4%. The Mix-DPO method offers a more balanced performance across both task formats. We hypothesize that training on long-form tasks, which involve multi-aspect uncertainty, enhances the model's ability to handle uncertainty in the easier short-form setting.

7 Discussion

7.1 Alignment Between Uncertainty Expressions in Short- and Long-form QA

Using paired short- and long-form questions in UNCLE, we examine whether the same aspect is consistently expressed as certain or uncertain across different QA formats. As shown in Figure 3, C-C indicates the
percentage of aspects expressed as certain in both short- and long-form QA, while U-U represents those expressed as uncertain in both formats. Ideally, perfect alignment would result in all expressions falling into either C-C or U-U.

Figure 3: Distribution (in percentage) of key aspects expressed with certainty and uncertainty by Llama3 and Mistral across different training methods. C-C indicates both short- and long-form express certainty, U-U shows both express uncertainty, U-C means uncertain in short-form but certain in long-form, and C-U represents the reverse.

The key observations are as follows: 1) In the original model (Unc-Zero), both short- and long-form aspects are mostly expressed with certainty. C-C accounts for 91.8% in Llama3 and 84.5% in Mistral, while U-U is below 1% for both. 2) Training increases the proportion of U-U compared to Unc-Zero. For Llama3, Short-DPO and Mix-DPO raise U-U from 0.1% to 40.1% and 54.6%, respectively. 3) U-C and C-U remain substantial in training-based models. This suggests ongoing inconsistency between short- and long-form uncertainty. Notably, U-C often exceeds C-U, indicating that many aspects are certain in short-form QA but uncertain in long-form QA. Future work could improve alignment by reducing U-C and C-U. 4) When trained only on short-form data, Llama3 and Mistral exhibit different abilities in long-form QA. For example, Mistral trained only on short-form data shows minimal long-form uncertainty (0% + 0.6%). In contrast, Llama3 retains long-form uncertainty even under the same condition (3.4% + 40.1%). This highlights the differing ability of models to generalize short-form uncertainty expression to long-form scenarios.
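The C-C/U-U/U-C/C-U breakdown can be computed directly from per-aspect labels. A small sketch under our own naming assumptions ('C' = certain, 'U' = uncertain, with the short-form label first, matching the definitions above):

```python
from collections import Counter

def alignment_distribution(short_labels, long_labels):
    """Bucket each paired aspect by its (short-form, long-form)
    certainty labels and return the percentage in each of the
    four cells: C-C, U-U, U-C, C-U."""
    assert len(short_labels) == len(long_labels)
    pairs = Counter(f"{s}-{l}" for s, l in zip(short_labels, long_labels))
    total = len(short_labels)
    return {cell: 100.0 * pairs[cell] / total
            for cell in ("C-C", "U-U", "U-C", "C-U")}

# Four aspects: aligned on the first two, misaligned on the last two.
print(alignment_distribution(["C", "U", "U", "C"], ["C", "U", "C", "U"]))
# Each of the four cells gets 25.0 here.
```

Perfect alignment would put all mass on C-C and U-U; the U-C and C-U cells quantify the residual inconsistency discussed above.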
7.2 Influence of Mixture Ratio ξ

We further analyze how the mixture ratio ξ (the proportion of long-form data) affects performance. Training data are constructed by mixing long-form and short-form data at ratios from 0.1 to 0.9, while keeping the total amount of training data constant.

Figure 4: Performance of Llama3-8B and Mistral-7B across different mixture ratios.

From Figure 4, we observe two key insights: 1) Performance on long-form and short-form tasks is a trade-off: increasing the mixture ratio improves long-form performance but reduces short-form performance. The model trained with mixed data performs between Long-DPO and Short-DPO on both tasks. 2) While increasing the ratio ξ consistently hurts performance on short-form QA, it does not consistently improve long-form QA. For example, in Figure 4 (upper right), increasing ξ from 0.1 to 0.9 leads to a 7.82% drop in short-form EA for Mistral-7B. However, both long-form FA and EA first increase and then decline (lower right). Based on this trade-off, we select a ratio of 0.7 to achieve more balanced results.

8 Conclusion

We introduce UNCLE, a benchmark for evaluating uncertainty in long- and short-form QA. Our experiments show that models struggle to express uncertainty in long-form generation. While our training method mitigates this issue, a misalignment persists in uncertainty expression between long- and short-form generation. Future work should focus on enhancing consistency across both
forms.

Limitations

Known and unknown detection. To assess the model's known and unknown knowledge, we employ a multiple-sampling method. Increasing the number of sampling iterations could enhance the accuracy of the knowledge estimation. Alternative approaches (Gekhman et al., 2024b; Yang et al., 2024c) may also be applicable. For example, one could apply a threshold of varying strictness in the sampling process to identify "Maybe Known" knowledge, or analyze the model's hidden states to determine whether it possesses specific knowledge.

Robustness across generation types. As discussed in §6 and shown in Table 4, we have not yet identified an effective solution that performs well on both long- and short-form generation tasks. Future research could investigate this challenge more thoroughly.

Other types of uncertainty estimation. This work focuses on linguistic expressions of uncertainty. However, UNCLE is also suitable for post-hoc uncertainty evaluation. Our primary goal is to assess the model's inherent ability to express uncertainty. Future work could develop more advanced methods for uncertainty estimation using UNCLE.

Ethics Statement

Our research adheres to strict ethical guidelines. We verified the licenses of all software and datasets used in this study to ensure full compliance with their terms. During the human annotation process, all annotators provided informed consent for their data to be included in the project. No privacy concerns have been identified. Additionally, we have conducted a thorough assessment of the project and do not anticipate any further risks.

Acknowledgement

We thank Xiaochen Zhu and Chengzu Li for their proofreading and valuable comments on this paper. We also acknowledge the use of an icon from Flaticon² and thank its creators for providing this visually appealing design.

² https://www.flaticon.com

References

Anthropic. 2023. Introducing Claude 2.1. Available from Anthropic: https://www.anthropic.com/news/claude-2-1.
Neil Band, Xuechen Li, Tengyu Ma, and Tatsunori Hashimoto. 2024. Linguistic calibration of long-form generations. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21–27, 2024. OpenReview.net.

Jennifer A. Bishop, Sophia Ananiadou, and Qianqian Xie. 2024. LongDocFACTScore: Evaluating the factuality of long document abstractive summarisation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 10777–10789, Torino, Italia. ELRA and ICCL.

Lida Chen, Zujie Liang, Xintao Wang, Jiaqing Liang, Yanghua Xiao, Feng Wei, Jinglei Chen, Zhenghong Hao, Bing Han, and Wei Wang. 2024. Teaching large language models to express knowledge boundary from their own signals.

Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Zhengfu He, Kai Chen, and Xipeng Qiu. 2024. Can AI assistants know what they don't know? In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21–27, 2024. OpenReview.net.

Cheng-Han Chiang and Hung-yi Lee. 2024. Merging facts, crafting fallacies: Evaluating the contradictory nature of aggregated factual claims in long-form generations. In Findings of the Association for Computational Linguistics: ACL 2024, pages 2734–2751, Bangkok, Thailand. Association for Computational Linguistics.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491–6506, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, et al. 2024. DeepSeek-V3 technical report.

Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, Kirill Fedyanin, Daniil Vasilev, Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, Timothy Baldwin, and Artem Shelmanov. 2023. LM-polygraph: Uncertainty estimation for language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 446–461, Singapore. Association for Computational
Linguistics.

Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024a. Does fine-tuning LLMs on new knowledge encourage hallucinations? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7765–7784, Miami, Florida, USA. Association for Computational Linguistics.

Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024b. Does fine-tuning LLMs on new knowledge encourage hallucinations?

Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024c. Does fine-tuning LLMs on new knowledge encourage hallucinations? ArXiv preprint, abs/2405.05904.

Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, and Xin Lin. 2024. Enhancing confidence expression in large language models through learning from past experience.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25–29, 2022. OpenReview.net.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. Preprint, arXiv:2311.05232.

Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, and Bhuwan Dhingra. 2024a. Calibrating long-form generations from large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13441–13460, Miami, Florida, USA. Association for Computational Linguistics.

Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, and Bhuwan Dhingra. 2024b. Calibrating long-form generations from large language models.
Preprint, arXiv:2402.06544.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Mingjian Jiang, Yangjun Ruan, Prasanna Sattigeri, Salim Roukos, and Tatsunori Hashimoto. 2024. Graph-based uncertainty metrics for long-form language model outputs. Preprint, arXiv:2410.20783.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017a. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017b. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. "I'm not sure, but...": Examining the impact of large language models' uncertainty expression on user reliance and trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
, pages 822–835.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings.

Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1–5, 2023. OpenReview.net.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.

Jiaqi Li, Yixuan Tang, and Yi Yang. 2024. Know the unknown: An uncertainty-sensitive method for LLM instruction tuning.

Junyi Li, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. HaluEval: A large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6449–6464, Singapore. Association for Computational Linguistics.

Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. ArXiv preprint, abs/2205.14334.

Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. 2023.
Generating with confidence: Uncertainty quantification for black-box large language models. Preprint, arXiv:2305.19187.

Terrance Liu and Zhiwei Steven Wu. 2024. Multi-group uncertainty quantification for long-form text generation.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023a. Self-refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10–16, 2023.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023b. Self-refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10–16, 2023.

Meta. 2024. Llama 3 model card.

Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023a. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics.

Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih,
Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023b. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics.

OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, et al. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.

OpenAI. 2022. ChatGPT blog post. https://openai.com/blog/chatgpt. Accessed: 2024-09-06.

Yixiao Song, Yekyung Kim, and Mohit Iyyer. 2024. VeriScore: Evaluating the factuality of verifiable claims in long-form text generation. Preprint, arXiv:2406.19276.

Qingni Wang, Tiantian Geng, Zhiyuan Wang, Teng Wang, Bo Fu, and Feng Zheng. 2024. Sample then identify: A general framework for risk control and assessment in multimodal large language models. Preprint, arXiv:2410.08174.

Zhiyuan Wang, Qingni Wang, Yue Zhang, Tianlong Chen, Xiaofeng Zhu, Xiaoshuang Shi, and Kaidi Xu. 2025. SConU: Selective conformal uncertainty in large language models. Preprint, arXiv:2504.14154.

Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. 2024a. Measuring short-form factuality in large language models.

Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Jie Huang, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, and Quoc V. Le. 2024b. Long-form factuality in large language models. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024.
Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Jie Huang, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, and Quoc V. Le. 2024c. Long-form factuality in large language models. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024.

Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, and Jing Gao. 2024. SaySelf: Teaching LLMs to express confidence with self-reflective rationales. Preprint, arXiv:2405.20974.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024a. Qwen2 technical report.

Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen Yang,
Nigel Collier, Dong Yu, and Deqing Yang. 2024b. LoGU: Long-form generation with uncertainty expressions.

Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, and Pengfei Liu. 2024c. Alignment for honesty. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024.

Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, and Pengfei Liu. 2024d. Alignment for honesty. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024.

Caiqi Zhang, Fangyu Liu, Marco Basaldella, and Nigel Collier. 2024a. LUQ: Long-text uncertainty quantification for LLMs. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5244–5262, Miami, Florida, USA. Association for Computational Linguistics.

Caiqi Zhang, Ruihan Yang, Zhisong Zhang, Xinting Huang, Sen Yang, Dong Yu, and Nigel Collier. 2024b. Atomic calibration of LLMs in long-form generations.

Hanning Zhang, Shizhe Diao, Yong Lin, Yi Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, and Tong Zhang. 2024c. R-tuning: Instructing large language models to say 'I don't know'. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7113–7139, Mexico City, Mexico. Association for Computational Linguistics.

Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. Siren's song in the AI ocean: A survey on hallucination in large language models. ArXiv preprint, abs/2309.01219.

Wenting Zhao, Tanya Goyal, Yu Ying Chiu, Liwei Jiang, Benjamin Newman, Abhilasha Ravichander, Khyathi Chandu, Ronan Le Bras, Claire Cardie, Yuntian Deng, and Yejin Choi. 2024a.
WildHallucinations: Evaluating long-form factuality in LLMs with real-world entity queries.

Wenting Zhao, Tanya Goyal, Yu Ying Chiu, Liwei Jiang, Benjamin Newman, Abhilasha Ravichander, Khyathi Chandu, Ronan Le Bras, Claire Cardie, Yuntian Deng, and Yejin Choi. 2024b. WildHallucinations: Evaluating long-form factuality in LLMs with real-world entity queries.

Kaitlyn Zhou, Dan Jurafsky, and Tatsunori Hashimoto. 2023. Navigating the grey area: How expressions of uncertainty and overconfidence affect language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5506–5524, Singapore. Association for Computational Linguistics.

Appendix A Details of UNCLE

For each entity in UNCLE, we select a set of key aspects as follows. Given N entities, we first retrieve all associated properties from Wikidata. For each property P, we count how many of the N entities possess it, retaining only the most frequent properties as key aspects for each entity. This process ensures that the key aspects are representative and stable by filtering out rare properties, which might otherwise introduce noise or bias into the evaluation. Statistics and examples are provided in Table 5, along with a detailed example to further illustrate the structure of the samples in our dataset.

Listing 1: Example from UNCLE

{"entity": "V for
Vendetta ", "prompt ": "In a paragraph , introduce the movie V for Vendetta , including duration in minutes , original language , production designer , producer , distribution format , award received .", "individual_qa ": [{"question ": " What is the duration of the movie V for Vendetta in minutes ?", "answer ": ["132"]} , {"question ": " What is the original language of the movie V for Vendetta ?", "answer ": [" English "]} , {"question ": " Who was the production designer for the movie V for Vendetta ?", "answer ": [" Owen Paterson "]} , {"question ": " Who is the producer of the movie V for Vendetta ?", "answer ": [" Joel Silver "," Grant Hill "," Lilly Wachowski "," Lana Wachowski "]} , {"question ": "In what format was the movie V for Vendetta distributed ?", "answer ": [" video on demand "]} , {"question ": " What awards has the movie V for Vendetta received ?", "answer ": [" Saturn Award for Best Actress "]}]} B Human Annotation B.1 Human Annotation on U NCLE Construction We, the authors, conducted human verification dur- ing the dataset construction process. We manually reviewed all the top-ranked relations (aspects) and removed those that were (1) not suitable for short- form QA, and (2) too difficult to answer or of lim- ited importance, such as Freebase ID and IMDb ID. For the manually constructed questions, we also reviewed all of them to ensure they were proper and accurate short-form questions.B.2 Human Annotation on Evaluation Pipeline We randomly selected 100 samples for human an- notation. The two annotators were compensated above the local minimum wage. Both annota- tors had postgraduate-level English proficiency and backgrounds in computer science. They agreed to contribute data for our analysis. The statistics for some key components are as follows: (1) Accuracy of judging long-form QA as correct, incorrect, or uncertain: agreement rate of 95% and error rate of 5%. 
(2) Accuracy of judging short-form QA as correct, incorrect, or uncertain: agreement rate of 93% and error rate of 3%. Since this task primarily involves comparing answers with the ground truth, GPT-4o performs well overall. However, the errors primarily arise in the following cases: (1) when multiple correct answers exist, GPT-4o may deem partially correct answers with unfactual information as correct; (2) for numbers with many digits, GPT-4o may incorrectly treat values within a reasonable range as correct.

C Instruction Prompt Examples

The instruction prompts for fact-checking the answers to key aspects in long-form and short-form generation, as discussed in §5, are provided in Listing 2.

Listing 2: The instruction prompts for fact-checking.

Short Form Factcheck Instruction:
Your task is to evaluate whether the model answer correctly answers the given questions, based on the gold answer. For each question, assign one of the following veracity labels:
- UNC: If the model answer contains uncertainty expressions or explicitly refuses to answer the question.
- S: If the model answer is definitive and does not contradict the gold
answer, this includes cases where the model answer is only partially consistent with the gold answer but does not introduce incorrect information.
- NS: If the model answer is definitive but contradicts the gold answer.

Output Format: Your responses must follow the format demonstrated in the examples below:

### Question: What is William Waldegrave, Baron Waldegrave of North Hill's date of birth?
Model Answer: North Hill was born on August 15, 1946.
Gold Answer: 1946-08-15T00:00:00Z.
Analysis: The model answer aligns with the gold answer. $S$

### Question: What instrument does John Estes play?
Model Answer: John Estes plays piano.
Gold Answer: guitar.
Analysis: The model answer states that John Estes plays piano, which contradicts the gold answer. $NS$

### Question: Where was John Estes born?
Model Answer: I have no information on where John Estes was born.
Gold Answer: Ripley.
Analysis: The paragraph states, "I have no information on where John Estes was born." indicating uncertainty. $UNC$

Now, evaluate the following model answers based on the gold answers.
{atomic_facts_string}
Now, provide your responses in the required format:

Table 5: Overview of the UNCLE benchmark. Each entity is associated with multiple key aspects, which are formulated as both long-form and short-form questions. For the same entity, there could be many different questions covering different aspects.

| Domains | Entities | Long-form QA Example | Short-form QA Example | # Entities | # Questions |
| Bios | Jackie Chan, Eminem, Steve Jobs... | In a paragraph, introduce the person Jackie Chan, including birthdate, place of birth, citizenship, language spoken, ... | What is Jackie Chan's birthdate? Where was Jackie Chan born? What is Jackie Chan's citizenship? ... | 319 | 1275 |
| Companies | Amazon, JP Morgan, Mars Incorporated... | In a paragraph, introduce the company Amazon, including date of establishment, founders, location of formation, CEO, ... | When was Amazon established? Who are the founders of Amazon? Where was Amazon formed? ... | 264 | 896 |
| Movies | The Matrix, Inception, Fight Club... | In a paragraph, introduce the movie The Matrix, including genre, director, publication date, duration... | What is the genre of The Matrix? Who directed The Matrix? When was The Matrix first released? ... | 236 | 754 |
| Astronomical Objects | Pluto, Uranus, Saturn... | In a paragraph, introduce the astronomical object Pluto, including mass, radius, orbital period, density... | What is Pluto's mass? What is Pluto's radius? What is Pluto's orbital period? What is Pluto's density? | 171 | 642 |
| Diseases | HIV/AIDS, Tuberculosis, PTSD... | In a paragraph, introduce the disease HIV/AIDS, including time of discovery, symptoms, medical examination, possible treatments... | When was HIV/AIDS discovered? What are the symptoms of HIV/AIDS? How is HIV/AIDS diagnosed? ... | 76 | 365 |
| In Total | | | | 1066 | 3932 |

Long Form Factcheck Instruction:
Your task is to evaluate whether the statements in the given paragraph accurately answer the provided questions, based on the gold answer. For each question, assign one of the following veracity labels:
- UNC: If the statement in the paragraph about this question includes uncertainty expressions, or explicitly states a lack of
knowledge.
- S: If the statement in the paragraph about this question is definitive and does not contradict the gold answer, this includes cases where the model answer is only partially consistent with the gold answer but does not introduce incorrect information.
- NS: If the statement in the paragraph about this question is definitive but contradicts the gold answer or fails to mention the question altogether.

Output Format: Your responses must follow the format demonstrated in the examples below:

### Question: What is William Waldegrave, Baron Waldegrave of North Hill's date of birth?
Gold Answer: 1946-08-15T00:00:00Z.
Analysis: The paragraph states that North Hill was born on August 15, 1946, which aligns with the gold answer. $S$

### Question: What instrument does John Estes play?
Gold Answer: guitar.
Analysis: The paragraph states that John Estes plays piano, which contradicts the gold answer. $NS$

### Question: Where was John Estes born?
Gold Answer: Ripley.
Analysis: The paragraph states, "I have no information on where John Estes was born," indicating uncertainty. $UNC$

Now, evaluate the following paragraph and questions based on the gold answers.
Paragraph: {paragraph}
Questions And Gold Answers: {qa_pairs}
Now, provide your responses following the specified format:

D Teaching Models to Express Uncertainty

D.1 Prompt-based Methods

Here, we list the prompts for the prompt-based methods (i.e., Zero-Shot, Few-Shot, and Paired Few-Shot) in §6.

Listing 3: The instruction prompts of key procedures.

Zero Shot (Long Form):
In a paragraph, introduce the [entity], including [A1], [A2], [A3], [A4], [A5], [A6]. You should express uncertainty for any aspect you are unsure about.

Few Shot Examples (Long Form):
Your task is to write a biography for a specific entity. You should express uncertainty for any information you are not familiar with.
Question: Tell me a bio of [example_entity].
Answer: [example_answer]

Paired Few-Shot Examples (Long Form):
Your task is to write a biography for a specific entity. You should express uncertainty for any information you are not familiar with.
Question: Tell me a bio of [example_entity].
Good Answer: [example_answer]

Zero Shot (Short Form):
[Question]. You should express uncertainty for any questions you are unsure about.

Few Shot Examples (Short Form):
Your task is to answer the given question. You should express uncertainty for any information you are not familiar with.
Question: [example_question]
Good Answer: [example_answer]

Paired Few-Shot Examples (Short Form):
Your task is to answer the given question. You should express uncertainty for any information you are not familiar with.
Question: [example_question]
Good Answer: [example_good_answer]
Bad Answer: [example_short_answer]

D.2 Training-based Methods

In our experiments, we
use Llama-3-8B-Instruct (Meta, 2024) and Mistral-7B-Instruct (Jiang et al., 2023) as base models.

Training Data We construct three types of training data: Idk-Dataset, which helps the model learn to express uncertainty for short questions; LoGU-Dataset, which is used for long-form uncertainty expression; and Mix-Dataset, which is a proportionally mixed combination of the Idk-Dataset and LoGU-Dataset.

• Idk-Dataset (Cheng et al., 2024): The Idk-Dataset is constructed based on TriviaQA (Joshi et al., 2017b). Given a question Q, the model is prompted K times to generate a set of answers {A_i}, i = 1..K. If the accuracy of these K answers falls below the predefined threshold θ, the chosen answer A_chosen is the refusal answer (e.g., "This question is beyond the scope of my knowledge, and I am not sure what the answer is"), and the rejected answer A_rejected is an incorrect answer. If all K answers are correct, the chosen answer is a correct answer and the rejected answer is the refusal answer. For this setup, we use K = 10 and θ = 1. The pair (Q, A_chosen) is then used to form the dataset D_Short-SFT, while the triplet (Q, A_chosen, A_rejected) is used to form D_Short-DPO.

• LoGU-Dataset (Yang et al., 2024b): The LoGU framework adopts a divide-and-conquer approach. Given a question Q and its corresponding long-form answer A, the LoGU-Dataset decomposes A into atomic claims. Fact-checking is then performed to identify correct claims C_s and incorrect claims C_ns. The chosen answer A_chosen is formed by merging the correct atomic claims and revised versions of the incorrect claims that express uncertainty. The rejected answer A_rejected consists of the correct atomic claims, now revised to express uncertainty, and the incorrect claims C_ns. The pair (Q, A_chosen) is used to form D_Long-SFT, while the triplet (Q, A_chosen, A_rejected) is used to form D_Long-DPO.
The questions used to construct the LoGU-Dataset are sourced from Bios (Bishop et al., 2024), WildHallu (Zhao et al., 2024a), and LongFact (Wei et al., 2024b).

• Mix-Dataset: The Mix-Dataset is created by proportionally combining the Idk-Dataset and LoGU-Dataset with a mixture ratio ξ (in §6, we set ξ = 0.7). D_Mix-SFT is formed by mixing D_Short-SFT and D_Long-SFT according to the ratio ξ, while D_Mix-DPO is formed by mixing D_Short-DPO and D_Long-DPO according to the same ratio.

Table 6: Fine-tuning hyper-parameters.

| Configuration | SFT (Long/Short/Mix) | DPO (Long/Short/Mix) |
| Model | Mistral-7B(Llama3-8B)-Instruct | Mistral-7B(Llama3-8B)-Instruct |
| Number of epochs | 3 | 3 |
| Devices | 8 NVIDIA GPUs | 8 NVIDIA GPUs |
| Total batch size | 32 samples | 64 samples |
| Cutoff length | 1024 | 1024 |
| Optimizer | Adam (Kingma and Ba, 2015) (β1 = 0.9, β2 = 0.98, ε = 1×10^-8) | Adam (Kingma and Ba, 2015) (β1 = 0.9, β2 = 0.98, ε = 1×10^-8) |
| Learning rate | 5×10^-5 | 1×10^-5 |
| Warmup ratio | 0.1 | 0.1 |
| LoRA target | q_proj, v_proj | q_proj, v_proj |
| LoRA parameters | r = 8, α = 16, dropout = 0.05 | r = 8, α = 16, dropout = 0.05 |
| Training time | 1h 37m 49s (1h 33m 24s) | 51m 30s (1h 5m 39s) |

Following Yang et al. (2024b) and Cheng et al. (2024), the Long-DPO, Short-DPO, and Mix-DPO approaches all employ a two-stage training process (i.e., first SFT, followed by DPO). To ensure
fairness, we use the same amount of training data for all three methods in both the SFT and DPO stages (i.e., 40k for the SFT stage and 20k for the DPO stage).

Fine-tuning Details We run SFT and DPO experiments with 8 NVIDIA GPUs. We conduct experiments with the LlamaFactory code base (https://github.com/hiyouga/LLaMA-Factory). Building upon prior research, which highlights the MLP layer as a crucial element for embedding knowledge within the LLM transformer architecture (De Cao et al., 2021), we only fine-tune the weight matrix of the attention layer using LoRA (Hu et al., 2022). This method allows us to adjust the model's ability to express knowledge boundaries without altering its internal knowledge structure. The configurations of our hyper-parameters are detailed in Table 6.

Evaluation We use vLLM (Kwon et al., 2023) for LLM inference tasks with the following parameters: temperature = 0.7, top-p = 0.95, and a maximum output of 1024 tokens. For fact-checking, we set the temperature to 0. GPT-4o is used as the auxiliary model to perform fact-checking. The total cost for fact-checking 100 generations is $0.46.

E Alignment Between Short- and Long-form Expressions across Different Model Sizes

Using paired short- and long-form questions in UNCLE, we examine whether the same aspect is consistently expressed as certain or uncertain across different QA formats, across different model sizes.

[Figure 5: Distribution of key aspects expressed with certainty and uncertainty by Llama3, Mistral, and Qwen2 across different model sizes.]

As shown in Figure 5, both large and small models exhibit a high C-C proportion. Larger models express more uncertainty in short-form, such as Mistral-8x7B (21.9%+1.0%) compared to Mistral-7B (14.3%+0.2%).
However, in long-form, uncertainty remains low for all models, typically around 1%. This suggests that the complexity of long-form text is a challenge for models of all sizes.
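The short-/long-form alignment analysis above reduces to tallying a joint distribution over paired certainty labels (C = expressed with certainty, U = expressed with uncertainty) for each key aspect. A minimal sketch of that computation is below; the label pairs are invented for illustration and are not real UNCLE data.

```python
from collections import Counter

def alignment_distribution(pairs):
    """Given (short_form_label, long_form_label) pairs, each label being
    'C' (certain) or 'U' (uncertain), return the percentage of aspects
    falling in each joint category (C-C, C-U, U-C, U-U)."""
    counts = Counter(pairs)
    total = len(pairs)
    return {f"{s}-{l}": 100.0 * counts[(s, l)] / total
            for s in "CU" for l in "CU"}

# Hypothetical labels for 8 aspects of one model (made-up, not UNCLE data).
pairs = [("C", "C")] * 6 + [("U", "C")] + [("C", "U")]
dist = alignment_distribution(pairs)
print(dist)  # {'C-C': 75.0, 'C-U': 12.5, 'U-C': 12.5, 'U-U': 0.0}
```

Per-model distributions computed this way are what Figure 5 visualizes; a high C-C share with a near-zero long-form U share reproduces the pattern described in the text.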
arXiv:2505.16927v1 [cs.CL] 22 May 2025

Latent Principle Discovery for Language Model Self-Improvement

Keshav Ramji∗, Tahira Naseem, Ramón Fernandez Astudillo
IBM Research AI

Abstract

When language model (LM) users aim to improve the quality of its generations, it is crucial to specify concrete behavioral attributes that the model should strive to reflect. However, curating such principles across many domains, even non-exhaustively, requires a labor-intensive annotation process. To automate this process, we propose eliciting these latent attributes guiding model reasoning towards human-preferred responses by explicitly modeling them in a self-correction setting. Our approach mines new principles from the LM itself and compresses the discovered elements to an interpretable set via clustering. Specifically, we employ an approximation of posterior-regularized Monte Carlo Expectation-Maximization to both identify a condensed set of the most effective latent principles and teach the LM to strategically invoke them in order to intrinsically refine its responses. We demonstrate that bootstrapping our algorithm over multiple iterations enables smaller language models (7-8B parameters) to self-improve, achieving +8-10% in AlpacaEval win-rate, an average of +0.3 on MT-Bench, and +19-23% in principle-following win-rate on IFEval. We also show that clustering the principles yields interpretable and diverse model-generated constitutions while retaining model performance. The gains our method achieves highlight the potential of automated, principle-driven post-training recipes toward continual self-improvement.

1 Introduction

Modern language models (Grattafiori et al., 2024; OpenAI, 2024; DeepSeek-AI et al., 2025) have achieved striking fluency and coherence in open-ended generation, yet guiding them to satisfy multiple, possibly overlapping human-defined criteria remains a core challenge.
Conventional approaches to align language models (LMs) rely on human annotations distinguishing between a chosen and rejected generation, even when their gap in quality may be nuanced and multi-faceted. Constitutional AI and other related paradigms (Bai et al., 2022b; Guan et al., 2025) consider a human-curated "constitution" of high-level attributes which the model's responses should follow. While this framework enables models to be steered toward safer behavior, the static nature of the constitution requires experts to anticipate every nuance in advance and update rules manually as edge cases surface. As use cases proliferate, new failure modes arise – reliably synthesizing task-specific "amendments" and collecting annotations is a costly and time-consuming process – leading to brittleness and limited adaptability. We aim to automate the process of discovering the attributes for model improvement, obviating the need for human intervention or explicit domain adaptation.

Automatically discovering the attributes for self-improvement can be seen as a meta-level reasoning process. Recent efforts to induce reasoning capabilities in LMs have often focused on domains such as math and code where a gold reference answer exists and candidate answers are more easily verifiable (DeepSeek-AI, 2025). The availability of verifiable responses has also been capitalized on for teaching self-correction (Kumar et al., 2025). However, in this work, we focus on open-ended text generation tasks that are challenging to verify; identifying situations for a human to intervene and induce a refined response can be especially tricky in such cases.

∗Correspondence to keshav.ramji@ibm.com

[Figure 1: We introduce Self-Taught Principle Learning (STaPLe). (Left) Our Monte Carlo EM algorithm alternates between on-policy discovery and learning of latent principles guiding self-correction behavior. The principles may also be clustered to a compressed set, yielding human-interpretable constitutions C_t and models trained to follow them M_t. (Right) The STaPLe algorithm induces self-improvement in AlpacaEval win-rate over three iterations for all three language models.]

We introduce a novel approach to discover expressive principles, treating them as latent attributes in the self-correction setting to bridge an initial attempt and a target response. We find that the language model itself serves as an effective principle generator to improve its responses, contrasting prior works which rely on human annotations or strong model supervision. We design an Expectation-Maximization algorithm, Self-Taught Principle Learning (STaPLe), which first leverages rejection sampling in identifying principle candidates for self-correction and choosing the candidate that is closest to the gold, and then trains over these trajectories to learn this principle-guided refinement behavior. Repeating this method iteratively results in a model trained on a dynamic constitution of elements produced from itself, implicitly learning the refinement goal to enable its self-correction abilities at inference-time. We also show that the discovered principles can be compressed to a smaller set for human readability by applying hierarchical clustering after the E-step in a manner akin to posterior regularization, without compromising downstream performance.

We validate the efficacy of this method over several iterations on instruction-following benchmarks including MT-Bench (Zheng et al., 2023) and AlpacaEval (Li et al., 2023), and leverage Prometheus-v2.0 (Kim et al., 2024) to analyze win-rates with fine-grained, principle-following rubrics. Our results show that STaPLe outpaces baseline methods such as Self-Taught Reasoner (STaR; Zelikman et al.
(2022)) (modified for non-verifiable responses) and prompted refinement approaches like Self-Refine (Madaan et al., 2023). It continues to self-improve in performance over multiple iterations, before saturating. We also find that clustering largely matches or outperforms training on all principles.

Our key contributions can be summarized as follows:

• We propose a Monte Carlo EM algorithm for iterative latent principle discovery and learning, to enable language model self-improvement.
• We find that on-policy generated principles are effective stimuli for self-correction in smaller LMs, and training to learn them improves performance on MT-Bench and AlpacaEval-2.0.
• Clustering the set of discovered principles retains most of the full distribution's performance while yielding an interpretable constitution.

2 Related Work

Principle-Driven Language Modeling Early work in principle-driven alignment demonstrated that embedding high-level rule sets or "constitutions" into the training loop can steer model behavior without direct human labels for each generation. Constitutional AI (Bai et al., 2022b) introduced a two-stage process in which a pre-trained model first generates critiques of its own outputs against a static constitution curated and synthesized a priori and learns from these critiques, then trains a preference model from on-policy data and performs RL. Dromedary (Sun et al., 2023) extended this idea by introducing Self-Align, an algorithm which generates prompts, applies a small set of human-written principles with in-context learning, and then fine-tunes the model to learn the principle-guided responses; this was later extended by SALMON (Sun et al., 2024)
to design an instructable, principle-following reward model (RM). ConstitutionalExperts (Petridis et al., 2024) and SPRI (Zhan et al., 2025) address the problem of mapping prompts to principles. Deliberative Alignment (Guan et al., 2025) introduces CoT reasoning over safety specifications, and trains models to learn these reasoning traces via SFT and online RL with a safety RM.

More recent efforts have sought to leverage models to draft and refine constitutions with limited human supervision. LEAP (Zhang et al., 2024) showed that models can propose new principles via self-critique given the gold response, synthesizing a list of task-specific principles that can be used for in-context learning. SAMI (Fränken et al., 2024) introduced a self-alignment mechanism where a strong model is used to generate a small set of principle candidates, and the target model is trained to maximize the mutual information between the constitution and the model's responses through an InfoNCE loss. IterAlign (Chen et al., 2024b) and ICAI (Findeis et al., 2025) also leverage strong frontier models such as GPT-4/GPT-4o (OpenAI, 2024) for constitution proposal. The former uses a red-teaming framework to identify model weaknesses, and uses the strong model to propose a principle towards helpfulness, harmlessness, and honesty; the latter considers principles as specifications over preference annotations, injecting them to reconstruct the annotation. Most recently, DeepSeek introduced a pointwise generative reward model (Liu et al., 2025) over self-generated principles, demonstrating that such an RM can be successfully used to improve inference-time scaling.

Self-Correction and Self-Improvement The recent emergence of large reasoning models such as o3 and DeepSeek-R1 (OpenAI, 2025; DeepSeek-AI, 2025) has led to a growing exploration into the ability of models to perform intrinsic refinement of their own outputs with internal feedback.
However, much of the prior literature below either trains models over the improved responses alone or performs prompted self-refinement at inference-time, rather than learning this ability. STaR first leveraged model-generated critiques followed by revision, showing that alternating these two stages yields gains in instruction following (Zelikman et al., 2022). Self-Refine (Madaan et al., 2023) explored the setting of prompted multi-turn refinement: after producing an initial answer, the model is induced to critique itself, reflecting on the weaknesses of the current response on specific dimensions, proposing actionable changes, and reflecting them in the corrected response. ProMiSe (Ramji et al., 2024) extended this ability to smaller language models with weaker self-critique and refinement capabilities, showing that greedily refining responses relative to one attribute in a sequential manner improves performance, while demonstrating that training on synthetic dialogues modeling this self-refinement process improves dialogue question answering. APO (D'Oosterlinck et al., 2024) addressed the notion of minimally contrastive preference pairs, reinforcing the notion that revision along fewer attributes yields a better signal for preference optimization. Recently, reinforcement learning strategies have been explored to train models to directly perform self-correction. RISE (Qu et al., 2024) frames self-correction through a multi-turn Markov decision process (MDP), performs best-of-N sampling over sequential attempt rollouts, and uses offline RL to train the model to correct over these trajectories. SCoRe (Kumar et al., 2025) improves over this by a
multi-turn RL formulation that boosts the quality of the initial attempt and leverages reward shaping to incentivize self-correction to improve the refined response. Beyond individual output corrections, recent work has shown that models can bootstrap their underlying capabilities in an iterative fashion over time. Several works suggest that sampling diverse responses or environmental interactions, filtering based on feedback or majority voting, and training on these on-policy generations can boost performance (Huang et al., 2023; Patel et al., 2024). SPIN (Chen et al., 2024c) introduced a self-play fine-tuning approach, wherein the model compares its generations against the ground-truth annotated responses in the SFT dataset to yield a preference pair, fine-tunes with a contrastive objective, and repeats this process iteratively. Huang et al. (2025) theoretically formalize the self-improvement phenomenon through a "sharpening" process, wherein the model's policy moves towards maximizing its generations' self-reward.

Latent Chain-of-Thought Learning Chain-of-thought (CoT) prompting (Wei et al., 2023; Kojima et al., 2022) elicits an explicit, verbalized step-by-step walkthrough of the reasoning trace guiding the model from the input to the final response. Simultaneously, STaR (Zelikman et al., 2022) leveraged a gold response-conditioned rationalization of the CoT and fine-tuned the model to learn this reasoning behavior. The notion of rationalization as a latent variable modeling problem was previously explored in ELV (Zhou et al., 2020), under the framework of labeling examples with explanations through a variational EM algorithm. More recent approaches also treat chain-of-thought as a latent variable to be trained over, rather than purely induced at inference time.
TRICE (Phan et al., 2023) casts intermediate reasoning steps as latent codes, training the model to marginalize over them through a Markov Chain Monte Carlo (MCMC) EM algorithm so that it internally develops coherent reasoning trajectories. LaTRO (Chen et al., 2024a) demonstrated, through a variational framework, that models can self-reward latent reasoning paths: generating candidate thought sequences, scoring them by task success, and reinforcing the most effective ones. Concurrent work introduced BoLT (Ruan et al., 2025), showing that leveraging these implicit traces as supervision, by converting latent chain-of-thought into a self-supervised learning signal, leads to gains in data efficiency and performance for continued pre-training on complex reasoning benchmarks.

3 Self-Taught Principle Learning

We propose STaPLe, a self-improvement mechanism for 1) discovery of principles by the model itself, aimed at response revision, and 2) training the model to invoke such principles and subsequently perform response self-refinement (if needed) at inference time. We view these principles as latent reasoning traces that bridge the gap between an initial model response and a reference target. In the vein of the Self-Taught Reasoner (STaR) (Zelikman et al., 2022), we leverage the gold response as a "hint" to propose principles and guide response refinement decisions. However, our formulation is generic and allows for the use of non-verifiable gold responses as hints. In particular, we use the proximity of the generated response to the reference response as a signal of correctness. Any similarity metric can be used to measure this proximity, as our approach is agnostic to this choice; the exact
match metric used for verifiable responses can be seen as one such instantiation. Given a dataset $D = \{(x_i, y^1_i, y^G_i)\}_{i=1}^{n}$, where $y^G_i$ is the gold response and $y^1_i$ is the model's initial response for the $i$-th sample, we aim to learn a latent response-improvement reasoning trace $z_i$ such that the probability of producing a response close to the gold reference is maximized. The latent reasoning trace, or principle, $z_i$ is also verbalized as natural language, i.e., discrete text tokens from vocabulary $V^*$. We implement STaPLe to optimize the following marginal likelihood:

$$p(y^G \mid x, y^1) = \sum_{y^2 \in V^*} \sum_{z \in V^*} p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta)$$

where $y^2$ is a model refinement of the initial response $y^1$ generated with the aid of latent principle $z$. The distribution $p(y^G \mid x, y^1, z, y^2)$ is a fixed, prespecified validator model indicating the likelihood of the current revision $y^2$ matching the gold response $y^G$. We parametrize $p(y^2, z \mid x, y^1)$ by the language model itself, with parameters $\theta$. As shown in Appendix D, and following the standard latent variable model formulation, the gradient for this objective is:

$$\nabla_\theta \mathcal{L}(\theta) = \mathbb{E}_{p(y^2, z \mid x, y^1, y^G)}\left[\nabla_\theta \log p(y^2, z \mid x, y^1; \theta)\right] \tag{1}$$

This objective can be maximized via Expectation-Maximization (EM), comprising the repeated application of two alternating stages: 1) a principle discovery stage (E-step) and 2) a principle learning stage (M-step). This is depicted in Figure 1.

In the principle discovery stage (E-step), we sample $N$ principles $z^{(1:N)}$ and corresponding responses $y^2_{(1:N)}$ from the posterior $p(y^2, z \mid x, y^1, y^G)$. Approximating the true posterior would require an intractable marginalization over $V^*$. We approximate this posterior by "hinting" our language model with the gold response¹, and a prompt to elicit a principle. This is represented by:

$$\tilde{p}(y^2, z \mid x, y^1, y^G; \theta) = p(y^2 \mid x, y^1, z; \theta) \cdot p(z \mid x, y^1, y^G; \theta)$$

¹ Here we factorize the approximate posterior into principle generation and response generation terms, where the gold response is only seen by the principle generation term.
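For reference, the gradient in Eq. (1) follows from the standard log-derivative identity applied to the marginal likelihood above; a sketch of the key step, using the same symbols (the full derivation is in Appendix D):

```latex
\nabla_\theta \log p(y^G \mid x, y^1)
  = \frac{1}{p(y^G \mid x, y^1)} \sum_{y^2 \in V^*} \sum_{z \in V^*}
      p(y^G \mid x, y^1, z, y^2)\, p(y^2, z \mid x, y^1; \theta)\,
      \nabla_\theta \log p(y^2, z \mid x, y^1; \theta)
  = \mathbb{E}_{p(y^2, z \mid x, y^1, y^G)}
      \left[ \nabla_\theta \log p(y^2, z \mid x, y^1; \theta) \right]
```

The last equality applies Bayes' rule: the validator term $p(y^G \mid x, y^1, z, y^2)$ carries no $\theta$-dependence, so the weights normalize to the posterior over $(y^2, z)$ given the gold response. In practice, this expectation is approximated by samples from the factorized, hinted posterior described above.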
This helps avoid the trivially degenerate solution where $y^2$ simply copies $y^G$. We acknowledge that in the absence of any constraint on $z$, this can still lead to copying via $z$. However, in practice, the prompt that elicits the principle creates a contextual bias against exact copying. Moreover, we also experiment with explicit clustering constraints over $z$ and show that both versions of our approach perform similarly.

Figure 2: The figure above depicts the principle discovery (E-step) phase. We sample an initial response $y^1$ on-policy, then "hint" with the gold response to elicit candidate principles $z^{(1:N)}$. Then, we sample critiques on the initial response (only used in rejection sampling, and not included in the fine-tuning trajectories), which we use to obtain principle-guided refined responses $y^2_{(1:N)}$. The best refined response $\hat{y}^2$ is selected based on similarity to the gold response. We save the resulting trajectory, which is used for supervised fine-tuning in the principle learning (M-step) stage.

To improve the quality of our samples, we employ a cycle-consistency approach, implicitly defining the true posterior as approximated by:

$$p(y^2, z \mid x, y^1, y^G; \theta) \propto p(y^G \mid x, y^1, z, y^2) \cdot \tilde{p}(y^2, z \mid x, y^1, y^G; \theta) \tag{2}$$

This can be seen as equivalent to hinted CoT generation as in STaR, whereby samples that better reconstruct the gold response are assigned higher probability. In practice
we use a sparse approximation of this distribution that assigns zero probability unless there is an improvement in similarity function $f$:

$$p(y^G \mid x, y^1, z, y^2) \propto \begin{cases} f(y^2, y^G) - f(y^1, y^G), & \text{if } f(y^2, y^G) > f(y^1, y^G) \\ 0, & \text{otherwise} \end{cases}$$

We sample from $\tilde{p}(y^2, z \mid x, y^1, y^G; \theta)$ via rejection sampling. Given a sample $y^n \sim \tilde{p}(y^2, z \mid x, y^1; \theta)$, we accept it with probability

$$p_n = \frac{p(y^n, z \mid x, y^1, y^G; \theta)}{M \cdot \tilde{p}(y^n, z \mid x, y^1, y^G; \theta)}$$

We include a derivation of this rejection sampling rule in Appendix E. We also compare the initial response $y^1$ to the gold reference $y^G$ using the similarity metric; if they are sufficiently close, we accept the response without further refinement and without sampling a $\hat{z}$. The principle discovery stage yields a principle-augmented dataset $(x \cup y^1, \hat{z}, y^2) \in D'$. Note that if no refinements improve upon the initial generation relative to the gold response, we discard the sample; thus, the dataset $D'$ only consists of those samples on which a principle improved the quality of the response towards the gold.

In the principle learning stage (M-step), we use the data $D'$ collected in the principle discovery stage for supervised fine-tuning of the language model. In particular, we train the model to maximize the log-likelihood of the refinement trajectories in $D'$. The corresponding EM update can be written as:

$$\theta^{(t+1)} = \arg\max_\theta \; \mathbb{E}_{(x, y^1, \hat{z}, \hat{y}^2) \in D'}\left[\log p(\hat{y}^2, \hat{z} \mid x, y^1; \theta)\right]$$

This should qualitatively result in the fine-tuned LM being able to invoke principles conditioned on a prompt and learning to produce high-quality responses conditioned on both the prompt and the invoked principle. The two stages can be repeated multiple times, achieving incremental improvements until no further gains are seen with respect to the gold references. We also draw a connection between STaPLe (this EM procedure) and variance-reduced self-play; this is discussed further in Appendix F.
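The principle discovery loop above can be sketched in a few lines. This is an illustrative mock-up, not the paper's implementation: `propose_principle` and `refine` are hypothetical stand-ins for hinted LM sampling, and the similarity function is a simplified token-level ROUGE-L F1 (the actual pipeline uses a standard ROUGE implementation).

```python
# Sketch of the E-step filter: keep the best principle-guided refinement
# only if it improves similarity to the gold response; otherwise discard.
# All function names below are illustrative stand-ins, not the paper's code.

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """Simplified token-level ROUGE-L F1 (no stemming or preprocessing)."""
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    lcs = lcs_len(c, r)
    prec, rec = lcs / len(c), lcs / len(r)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def discover(x, y1, y_gold, propose_principle, refine, n_samples=4):
    """Return the best (principle, refinement) improving on y1, else None."""
    base = rouge_l_f1(y1, y_gold)
    best, best_gain = None, 0.0
    for _ in range(n_samples):
        z = propose_principle(x, y1, y_gold)  # principle proposal sees the gold hint
        y2 = refine(x, y1, z)                 # refinement does NOT see the gold
        gain = rouge_l_f1(y2, y_gold) - base  # sparse rule: improvements only
        if gain > best_gain:
            best, best_gain = (z, y2), gain
    return best  # None -> sample is dropped from D'

# Toy run with canned samplers standing in for the language model.
trajectory = discover(
    x="Summarize the meeting notes.",
    y1="lots of stuff happened",
    y_gold="The team agreed to ship the release next week.",
    propose_principle=lambda x, y1, yg: "Be specific and concise",
    refine=lambda x, y1, z: "The team agreed to ship the release next week .",
)
```

The `None` return path corresponds to the discard rule: a prompt enters $D'$ only when some principle actually moved the refinement closer to the gold reference.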
3.1 Posterior Regularization via Clustering

To maximize the human interpretability of principles and their application relative to specific domains, it is beneficial to have a compressed set, or constitution, to distill to the model. However, the E-step described above results in thousands of unique principles. We seek to project this set into a constrained subspace where the resulting principles serve as representatives for desirable attributes to be reflected. This can be achieved via posterior regularization (PR) in latent variable modeling. For a posterior constraint set $Q$, the canonical posterior regularization framework solves the problem

$$q^*(z) = \arg\min_{q \in Q} \mathrm{KL}(q(z \mid x, y^*) \,\|\, p(z \mid x, y^*))$$

From Ganchev et al. (2010), we obtain that the primal solution is given by:

$$\tilde{p}(y^2, z \mid x, y^1, y^G) \propto p(y^2, z \mid x, y^1, y^G) \cdot \exp(-\lambda \cdot g(z))$$

where $g(z)$ denotes the constrained features of the principles and $\lambda$ is a Lagrange multiplier that must be set such that the expected value of the features under $\tilde{p}$ respects the constraints. Consider the following definition of the constraints: assume access to a clustering algorithm which yields a set of clusters $\{C_1, C_2, \ldots, C_K\}$. For each cluster, a representative element $\tilde{z}$ is chosen, forming the set $\tilde{Z} = \{\tilde{z}_1, \ldots, \tilde{z}_K\}$. Now define $g_k(z) = \mathbb{1}(z \in C_k \setminus \{\tilde{z}_k\})$ for $k \in [1, K]$ as binary feature functions. Thus, to ensure that the regularized posterior only places mass on the representative elements, we can enforce the constraint set

$$Q = \{q : \mathbb{E}_q[g_k(z)] = 0 \;\; \forall k \in [1, K]\}$$

However, while an algorithm
like projected gradient descent could be used to solve for the Lagrange multipliers, this is expensive for deep neural networks and, as such, impractical in our case. Instead, we suggest that performing clustering on a set of posterior samples and retaining only the representative elements fulfills an equivalent role empirically. Clustering serves to consolidate principles that are lexically close, and leveraging an embedding model for distances allows for semantic awareness in merging similar elements. In particular, we consider hierarchical (agglomerative) clustering for several of its benefits: (1) it requires no a priori assumptions about the number of clusters or cluster shape; (2) the algorithm is deterministic, ensuring that the same clusters would be obtained for a given configuration; and (3) the algorithm is relatively fast, only taking a few seconds in practice over thousands of principles. To ensure that the clustering is performed in a semantics-aware manner, we first obtain a sentence embedding with an encoder-only model and perform clustering over these embeddings. Given the principle-augmented dataset $(x_i, y^1_i, \hat{z}_i, y^2_i) \in D'$ and a set $\tilde{Z}$ of cluster representative elements over $C = \{C_i\}_{i=1}^{K}$, we aim to replace $\hat{z}_i$ with the element $\tilde{z} \in \tilde{Z}$ that is closest in meaning to the original principle. Qualitatively, we want the set $\tilde{Z}$ to comprise the human-readable constitution, minimizing semantic overlap in its labels. We take the medoid as the cluster representative:

$$\tilde{Z}_{\text{medoid}} = \Big\{ m_k : m_k = \arg\min_{m \in C_k} \sum_{j \in C_k} \lVert e_m - e_j \rVert_2, \;\; k \in [1, K] \Big\} \quad \text{(Medoid Representatives)}$$

It suffices to retrieve the corresponding cluster $C_i$ for a sample $i$ and replace $\hat{z}_i$ with $\tilde{z}_i \in \tilde{Z}_{\text{medoid}}$. The resulting dataset from this augmentation, $(x_i, y^1_i, \tilde{z}_i, y^2_i) \in \tilde{D}$, is then used to train the model.

4 LMs Can Self-Improve with Iterative Principle Discovery

4.1 Experimental Setup

Mixed-Domain Input Corpus.
We form a corpus of 100k samples for the principle discovery phase, consisting of four datasets: Anthropic HH-RLHF (Bai et al., 2022a), UltraFeedback (Cui et al., 2024), TL;DR (Stiennon et al., 2020), and HotpotQA (Yang et al., 2018), taken in equal proportion (i.e., 25k samples of each dataset, drawn randomly) and deduplicated by prompt. For preference datasets, we take the chosen response to be the gold answer $y^G$. To run STaPLe, we use the first 50k samples for iteration 1, to heavily bootstrap off the first iteration, and then use 10k samples for each iteration thereafter, such that the input prompts are unseen for each iteration.

Models and Hyperparameters. We evaluate three performant small language models: Llama-3.1-8B-Instruct (Grattafiori et al., 2024; Meta, 2024), Granite-3.1-8B-Instruct (Granite Team, 2024; Granite Team and IBM, 2024), and Qwen2.5-7B-Instruct (Qwen, 2025). We use the all-MiniLM-L6-v2 model (Sentence Transformers, 2021) from SentenceTransformers as the embedding model to compute medoids in our clustering approach. We use the Rouge-L F1 score (Lin, 2004) to compare the similarity of candidate responses relative to the reference answer. We also include an ablation in Appendix M.2 using a prompted Phi-4 (Abdin et al., 2024) judge to score responses, leveraging additional compute to improve the quality of rejection sampling. We discuss all other major STaPLe algorithm and model training hyperparameters in Appendix C.

Baselines. We compared our method against several baselines in
both the single-iteration and multi-iteration settings, in addition to the scores of each model's initial policy.

1. Prompted self-refinement to directly produce a self-critique and revision, akin to Self-Refine, without any principle or specific feedback criterion provided a priori.
2. Supervised fine-tuning on the gold responses of the first 50k samples in the mining corpus.
3. Following SCoRe, we adopt a STaR-like baseline for intrinsic self-correction; we apply the STaPLe algorithm and perform supervised fine-tuning on the best refined response (without the principle-based refinement trajectory). This will henceforth be referred to as "STaR".

We compare the STaPLe and STaR algorithms over four iterations; this is performed over the same number of samples per iteration, i.e., 50k samples in the first iteration and 10k samples for each subsequent one. Naturally, the other baselines are performed for a single iteration.

Evaluation. We evaluate on the MT-Bench (Zheng et al., 2023) and AlpacaEval-2.0-LC (Li et al., 2023; Dubois et al., 2024) datasets, instruction-following evaluations designed to reflect the alignment abilities of LLMs in chat settings. We also use the Prometheus-8x7B-v2.0 model (Kim et al., 2024) on responses to the above datasets and the IFEval (Zhou et al., 2023) dataset, for fine-grained evaluation on principle-following rubrics, with additional experiments included in Appendix H. At inference time, if a principle was invoked intrinsically given a prompt, the response is parsed so as to only score the refined generation following the principle proposal; this is similarly done for the STaR baseline. Otherwise, we score the full generated response, and no special parsing is required. For the Prometheus results, the win-rate is with respect to the principle invoked; for example, if the principle is "Directness", the judge assesses which response is more direct between the candidate generation and the generation from the initial policy.
Given that the STaR baseline does not explicitly invoke a principle, we use the same principle invoked for that sample by the STaPLe model.

4.2 Results

Latent Principle Learning Improves Response Quality. The STaPLe algorithm outperforms the baselines on all benchmarks, across all models, as seen in Table 1. The MT-Bench average exceeds the best baseline by an average of +0.11 over the three models, with Turn 2 increasing by an average of +0.22. The AlpacaEval win-rates improve over the initial policy by +5.3-7%, and over the STaR baseline by +1.6-2.8%. Furthermore, the IFEval win-rates on principle-following of the refined policy against the base policy using Prometheus improve by +5-6.6%. This suggests that training models to explicitly invoke the principle as an expressive form of a latent attribute is effective, as opposed to implicitly learning over it by simply training on the refined response (the STaR baseline). The Self-Refine baseline improves performance for the Granite and Qwen models, but not for Llama-8B, suggesting that the latter is not as effective at zero-shot self-refinement without pre-identified principles. This corresponds with a higher IFEval win-rate for those models with strong self-refinement abilities.

Iterative Principle Discovery Enables Self-Improvement. The results in Table 1 demonstrate the performance of our algorithm in the fourth iteration of our Monte Carlo EM algorithm; our algorithm
outpaces the STaR baseline by a sizable margin throughout the execution of both algorithms. We include the full set of results in Appendix G. By iteration 3, the STaPLe scores outperform STaR and the initial policy on average across the three models by +0.16 and +0.29 on MT-Bench (avg.); +3.6% and +9.2% on AlpacaEval win-rate; and +7.9% and +21.0% on IFEval principle-following win-rate, respectively. We do observe a slight diminishing-returns effect with the STaPLe algorithm: in iteration 4, the scores either remain at a similar level or drop slightly for Llama-8B and Granite-8B; however, Qwen-7B continues to improve on all three benchmarks. We further analyze principle-following quality in Appendix H and stepwise win-rates of iteration t against iteration t−1 in Appendix I to reinforce the self-improvement induced by STaPLe. In Appendix J, we demonstrate that the model's intrinsic self-correction ability improves over the iterations.

Table 1: Comparison of the STaPLe algorithm (unconstrained and constrained) against the baselines. The scores reported below are an average over 5 runs for all benchmarks.

Model                        MT-Bench (avg)  MT-Bench (T1)  MT-Bench (T2)  AlpacaEval  IFEval WR
Llama-3.1-8B-Instruct
  Initial Policy             7.46            8.09           6.83           26.9        –
  Self-Refine                7.40            8.05           6.75           26.1        51.2%
  Gold-only SFT              7.47            8.11           6.83           26.4        56.2%
  STaR Iter 4                7.56            8.11           7.00           31.8        62.3%
  STaPLe Iter 4              7.71            8.13           7.30           33.4        68.9%
  Constrained STaPLe Iter 4  7.70            8.13           7.28           34.9        69.1%
Granite-3.1-8B-Instruct
  Initial Policy             7.83            8.59           7.08           30.2        –
  Self-Refine                7.86            8.63           7.10           31.7        57.1%
  Gold-only SFT              7.86            8.68           7.05           30.1        55.8%
  STaR Iter 4                7.96            8.68           7.25           35.6        62.1%
  STaPLe Iter 4              8.04            8.69           7.41           38.4        67.6%
  Constrained STaPLe Iter 4  8.03            8.65           7.41           38.8        68.4%
Qwen2.5-7B-Instruct
  Initial Policy             6.83            7.34           6.31           30.4        –
  Self-Refine                6.91            7.41           6.40           30.7        58.4%
  Gold-only SFT              6.89            7.43           6.35           30.0        56.9%
  STaR Iter 4                7.14            7.63           6.66           37.8        68.4%
  STaPLe Iter 4              7.24            7.64           6.85           40.2        73.4%
  Constrained STaPLe Iter 4  7.22            7.60           6.84           39.9        72.1%
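As a concrete illustration of the constrained variant evaluated above, the label-replacement step from Section 3.1 reduces to medoid selection over principle embeddings. A toy numpy sketch, using invented 2-D vectors and invented principle strings in place of all-MiniLM-L6-v2 sentence embeddings and real mined principles, and assuming cluster labels already produced by an agglomerative clustering run:

```python
import numpy as np

def medoid_representatives(embeddings, labels):
    """Map each cluster id to the index of its medoid: the member that
    minimizes the summed L2 distance to the rest of its cluster."""
    reps = {}
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        sub = embeddings[idx]
        # pairwise L2 distances within the cluster
        dists = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        reps[int(k)] = int(idx[dists.sum(axis=1).argmin()])
    return reps

# Toy principles, their (invented) embeddings, and precomputed cluster labels.
principles = ["be concise", "keep it short", "avoid rambling",
              "cite sources", "ground claims"]
emb = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.2],
                [1.0, 1.0], [1.1, 0.9]])
labels = np.array([0, 0, 0, 1, 1])

reps = medoid_representatives(emb, labels)
# Replace each mined principle with its cluster's representative label.
replaced = [principles[reps[int(k)]] for k in labels]
```

Using the medoid, rather than, say, the cluster mean, guarantees that every representative in the resulting constitution is itself a mined, human-readable principle.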
Clustering Balances Interpretability and Performance. In Table 1, we also include the performance of "constrained" STaPLe, the version of the algorithm that applies agglomerative clustering following the E-step during each iteration and uses the medoids of each cluster as representative principles to yield dataset $\tilde{D}$. We find that this largely matches the performance of the "unconstrained" version, in fact outperforming it in AlpacaEval and IFEval win-rates for Llama-8B and Granite-8B. The full results can be found in Appendix M.1, where we ablate across different label replacement schemes (medoids, modes, and a perplexity-based method). For both versions, we observe a strong correlation in the trend (avg. ρ = 0.95-0.96) between the MT-Bench (avg.) and AlpacaEval results.

4.3 Analysis of Principle Discovery

It is also valuable to study the nature of the principle discovery process and the model-generated constitutions that we have aligned the language model toward. We include the full constitutions and perform more qualitative analysis on their distribution of elements when performing label replacement (the "density" of the constitution) in Appendix K. In Figure 10, we show that the number of principles in the constitution under Constrained STaPLe decreases over the iterations, suggesting that the
model converges to learning a relatively stable distribution of principles. In particular, the size of the constitution by iteration 4 is roughly 50% of the iteration 1 size, or even smaller. This finding is reinforced by an analysis of the principle discovery rate, the fraction of refinement samples with new, unseen principles, in Figure 3. We show that this rate decreases over the iterations under both the unconstrained and constrained versions of the STaPLe algorithm, suggesting that all models learn to re-use principles. The observation that constrained STaPLe accelerates this convergence to a condensed set of principles reinforces the motivation behind introducing clustering as a posterior regularization mechanism. This also highlights one of the advantages of using the LM to approximate the posterior distribution: the changing nature of the learned posterior can be observed over the iterations and elicited via on-policy sampling.

Figure 3: Principle discovery rates of the STaPLe algorithm in the unconstrained (left) and constrained (right) settings. This represents the fraction of the trajectories saved from the principle discovery process (E-step) that contain a unique principle label that was unseen in previous iterations.

5 Discussion and Limitations

The STaPLe algorithm guides a largely autonomous self-improvement process, with the exception of a few hyperparameters that must be set, discussed in Appendix C. As a result, the algorithm does not require human supervision beyond the labels in the curated (publicly-available) mining corpus. While the distribution of principles is mined relative to the mining corpus' task distribution, the STaPLe algorithm itself is task-agnostic, and can be used for any distribution of datasets where a reliable gold reference exists, or for paired preference data.
However, designing a task-aware version of the STaPLe algorithm may reveal further insights into the model's task-dependent self-correction mechanisms while inducing a curriculum. As noted earlier, we focus on the two-turn self-correction setting, but interesting insights could be extracted regarding the compositional nature of principles when extending to further refinement attempts, which also yields more diverse trajectories (combinatorially many possible) even over a condensed set of principles. A core aim of alignment research is to balance human-readability with machine-readability. The STaPLe algorithm achieves this by discovering principles that are useful to the model for self-correction, while compressing them to a smaller set via clustering for a human reader to analyze. We believe that this work and the notion of LM self-improvement keep with the theme of the Bitter Lesson (Sutton, 2019), when facilitated in a relatively autonomous fashion. Specifically, we aim to limit the influence of human-driven priors or constraints on the algorithm; this is reflected further by our clustering technique, and by our ablation in Appendix M.3 to fully automate this as well. At the same time, we acknowledge the value of human oversight of the alignment process; as such, we believe that human-in-the-loop analysis of the principles as a post-processing mechanism following the E-step of each iteration is valuable to avoid misalignment or potentially harmful principles.

6 Conclusion

We introduced a new language model
self-improvement method which uses model-generated latent principles to learn intrinsic self-correction. These serve a purpose akin to a reasoning chain-of-thought, boosting the quality of LM generations on alignment-focused benchmarks. Furthermore, our approximate posterior-regularized Monte Carlo EM algorithm shows that the model can continue to improve over multiple iterations, while simultaneously compressing the principles to a human-interpretable constitution. We also show that our clustering approach balances performance with the diversity of the generated constitution, thus adding valuable utility to the STaPLe algorithm. The efficacy of STaPLe highlights the potential for constitutional alignment with self-generated principles to improve model responses in an interpretable manner with minimal human supervision.

Acknowledgments. The authors would like to thank Sara Rosenthal, Yikang Shen, Radu Florian, and Salim Roukos for valuable input and feedback during this work. KR would also like to thank Brian Williams and Arduin Findeis for helpful discussions related to this work and the impacts of principle discovery, and Sriram Tolety for feedback on a draft of this paper.

References

M. Abdin, J. Aneja, H. Behl, S. Bubeck, R. Eldan, S. Gunasekar, M. Harrison, R. J. Hewett, M. Javaheripi, P. Kauffmann, J. R. Lee, Y. T. Lee, Y. Li, W. Liu, C. C. T. Mendes, A. Nguyen, E. Price, G. de Rosa, O. Saarikivi, A. Salim, S. Shah, X. Wang, R. Ward, Y. Wu, D. Yu, C. Zhang, and Y. Zhang. Phi-4 technical report, 2024. URL https://arxiv.org/abs/2412.08905.

Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, N. Joseph, S. Kadavath, J. Kernion, T. Conerly, S. El-Showk, N. Elhage, Z. Hatfield-Dodds, D. Hernandez, T. Hume, S. Johnston, S. Kravec, L. Lovitt, N. Nanda, C. Olsson, D. Amodei, T. Brown, J. Clark, S. McCandlish, C. Olah, B. Mann, and J. Kaplan.
Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022a. URL https://arxiv.org/abs/2204.05862.

Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, C. Chen, C. Olsson, C. Olah, D. Hernandez, D. Drain, D. Ganguli, D. Li, E. Tran-Johnson, E. Perez, J. Kerr, J. Mueller, J. Ladish, J. Landau, K. Ndousse, K. Lukosuite, L. Lovitt, M. Sellitto, N. Elhage, N. Schiefer, N. Mercado, N. DasSarma, R. Lasenby, R. Larson, S. Ringer, S. Johnston, S. Kravec, S. E. Showk, S. Fort, T. Lanham, T. Telleen-Lawton, T. Conerly, T. Henighan, T. Hume, S. R. Bowman, Z. Hatfield-Dodds, B. Mann, D. Amodei, N. Joseph, S. McCandlish, T. Brown, and J. Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022b. URL https://arxiv.org/abs/2212.08073.

H. Chen, Y. Feng, Z. Liu, W. Yao, A. Prabhakar, S. Heinecke, R. Ho, P. Mui, S. Savarese, C. Xiong, and H. Wang. Language models are hidden reasoners: Unlocking latent reasoning capabilities via self-rewarding, 2024a. URL https://arxiv.org/abs/2411.04282.

X. Chen, H. Wen, S. Nag, C. Luo, Q. Yin, R. Li, Z. Li, and W. Wang. IterAlign: Iterative constitutional alignment of large language models. In K. Duh, H. Gomez, and S. Bethard,
editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1423–1433, Mexico City, Mexico, June 2024b. Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-long.78. URL https://aclanthology.org/2024.naacl-long.78/.

Z. Chen, Y. Deng, H. Yuan, K. Ji, and Q. Gu. Self-play fine-tuning converts weak language models to strong language models. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024c.

G. Cui, L. Yuan, N. Ding, G. Yao, B. He, W. Zhu, Y. Ni, G. Xie, R. Xie, Y. Lin, Z. Liu, and M. Sun. Ultrafeedback: Boosting language models with scaled AI feedback, 2024. URL https://arxiv.org/abs/2310.01377.

P. Dayan. Reinforcement comparison. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School, pages 45–51, San Mateo, CA, 1990. Morgan Kaufmann.

DeepSeek-AI. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

DeepSeek-AI, A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, et al. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437.

K. D'Oosterlinck, W. Xu, C. Develder, T. Demeester, A. Singh, C. Potts, D. Kiela, and S. Mehri. Anchored preference optimization and contrastive revisions: Addressing underspecification in alignment, 2024. URL https://arxiv.org/abs/2408.06266.

Y. Dubois, B. Galambosi, P. Liang, and T. B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024.

A. Findeis, T. Kaufmann, E. Hüllermeier, S. Albanie, and R. D. Mullins. Inverse constitutional AI: Compressing preferences into principles. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=9FRwkPw3Cn.

J.-P. Fränken, E. Zelikman, R. Rafailov, K. Gandhi, T. Gerstenberg, and N. D. Goodman. Self-supervised alignment with mutual information: Learning to follow principles without preference labels. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 61328–61371. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/70d638f3177d2f0bbdd9f400b43f0683-Paper-Conference.pdf.

K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11(67):2001–2049, 2010. URL http://jmlr.org/papers/v11/ganchev10a.html.

I. Granite Team. Granite 3.0 language models.
https://www.rivista.ai/wp-content/uploads/2024/10/paper-1.pdf, Oct. 2024.

Granite Team and IBM. Granite-3.1-8b-instruct. https://huggingface.co/ibm-granite/granite-3.1-8b-instruct, Dec. 2024. Release Date: December 18, 2024.

A. Grattafiori et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.

M. Y. Guan, M. Joglekar, E. Wallace, S. Jain, B. Barak, A. Helyar, R. Dias, A. Vallone, H. Ren, J. Wei, H. W. Chung, S. Toyer, J. Heidecke, A. Beutel, and A. Glaese. Deliberative alignment: Reasoning enables safer language models, 2025. URL https://arxiv.org/abs/2412.16339.

T. Head, M. Kumar, H. Nahrstaedt, G. Louppe, and I. Shcherbatyi. scikit-optimize: Sequential model-based optimization in python, Sept. 4 2020. URL https://doi.org/10.5281/zenodo.4014775.

A. Huang, A. Block, D. J. Foster, D. Rohatgi, C. Zhang, M. Simchowitz, J. T. Ash, and A. Krishnamurthy. Self-improvement in language models: The sharpening mechanism. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=WJaUkwci9o.

J. Huang, S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han. Large language models can self-improve. In H. Bouamor, J. Pino, and K. Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1051–1068, Singapore, Dec. 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.67. URL https://aclanthology.org/2023.emnlp-main.67/.

Y. Katsis, S. Rosenthal, K.
Fadnis, C. Gunasekara, Y.-S. Lee, L. Popa, V. Shah, H. Zhu, D. Contractor, and M. Danilevsky. Mtrag: A multi-turn conversational benchmark for evaluating retrieval-augmented generation systems, 2025. URL https://arxiv.org/abs/2501.03468.

S. Kim, J. Suk, S. Longpre, B. Y. Lin, J. Shin, S. Welleck, G. Neubig, M. Lee, K. Lee, and M. Seo. Prometheus 2: An open source language model specialized in evaluating other language models, 2024. URL https://arxiv.org/abs/2405.01535.

T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf.

A. Kumar, V. Zhuang, R. Agarwal, Y. Su, J. D. Co-Reyes, A. Singh, K. Baumli, S. Iqbal, C. Bishop, R. Roelofs, L. M. Zhang, K. McKinney, D. Shrivastava, C. Paduraru, G. Tucker, D. Precup, F. Behbahani, and A. Faust. Training language models to self-correct via reinforcement learning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=CjwERcAU7w.

W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. E. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

X. Li, T. Zhang, Y. Dubois, R. Taori, I. Gulrajani, C. Guestrin, P. Liang, and T. B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, May 2023.

C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
URL https://aclanthology.org/W04-1013/.

Z. Liu, P. Wang, R. Xu, S. Ma, C. Ruan, P. Li, Y. Liu, and Y. Wu. Inference-time scaling for generalist reward modeling, 2025. URL https://arxiv.org/abs/2504.02495.

I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, and P. Clark. Self-refine: Iterative refinement with self-feedback. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 46534–46594. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/91edff07232fb1b55a505a9e9f6c0ff3-Paper-Conference.pdf.

Meta. Llama-3.1-8b-instruct. https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct, July 2024. Release Date: July 23, 2024.

OpenAI. Gpt-4o system card, 2024. URL https://arxiv.org/abs/2410.21276.

OpenAI. OpenAI o3 and o4-mini System Card. System card, OpenAI, Apr. 2025. URL https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf.

A. Patel, M. Hofmarcher, C. Leoveanu-Condrei, M.-C. Dinu, C. Callison-Burch, and S. Hochreiter. Large language models can self-improve at web agent tasks, 2024. URL https://arxiv.org/abs/2405.20309.

S. Petridis, B. Wedin, A. Yuan, J. Wexler, and N. Thain. ConstitutionalExperts: Training a mixture of principle-based prompts. In L.-W. Ku, A. Martins, and V.
Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 574–582, Bangkok, Thailand, Aug. 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-short.52. URL https://aclanthology.org/2024.acl-short.52/.

D. Phan, M. D. Hoffman, D. Dohan, S. Douglas, T. A. Le, A. Parisi, P. Sountsov, C. Sutton, S. Vikram, and R. A. Saurous. Training chain-of-thought via latent-variable inference. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA, 2023. Curran Associates Inc.

Y. Qu, T. Zhang, N. Garg, and A. Kumar. Recursive introspection: Teaching language model agents how to self-improve. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 55249–55285. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/639d992f819c2b40387d4d5170b8ffd7-Paper-Conference.pdf.

Qwen. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.

K. Ramji, Y.-S. Lee, R. F. Astudillo, M. A. Sultan, T. Naseem, A. Munawar, R. Florian, and S. Roukos. Self-refinement of language models from external proxy metrics feedback, 2024. URL https://arxiv.org/abs/2403.00827.

Y. Ruan, N. Band, C. J. Maddison, and T. Hashimoto. Reasoning to learn from latent thoughts, 2025. URL https://arxiv.org/abs/2503.18866.

Sentence Transformers. all-MiniLM-L6-v2. https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2, 2021.

N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano. Learning to summarize with human feedback. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 3008–3021. Curran Associates, Inc., 2020.
URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf.

Z. Sun, Y. Shen, Q. Zhou, H. Zhang, Z. Chen, D. Cox, Y. Yang, and C. Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 2511–2565. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0764db1151b936aca59249e2c1386101-Paper-Conference.pdf.

Z. Sun, Y. Shen, H. Zhang, Q. Zhou, Z. Chen, D. D. Cox, Y. Yang, and C. Gan. SALMON: Self-alignment with instructable reward models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=xJbsmB8UMx.

R. S. Sutton. Temporal Credit Assignment in Reinforcement Learning. Ph.D. dissertation, University of Massachusetts, Amherst, Amherst, MA, 1984.

R. S. Sutton. The bitter lesson. https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf, 2019.

J. von Neumann. Various techniques used in connection with random digits, 1951. J. Res. Natl. Bur. Stand. Appl. Math. Series, vol. 3, pp. 36–38 (1955). URL https://mcnp.lanl.gov/pdf_files/InBook_Computing_1961_Neumann_JohnVonNeumannCollectedWorks_VariousTechniquesUsedinConnectionwithRandomDigits.pdf.

G. C. G. Wei and M. A. Tanner. A monte carlo implementation of the em algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990. ISSN 01621459, 1537274X. URL http://www.jstor.org/stable/2290005.

J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. URL https://arxiv.org/abs/2201.11903.

Z.
Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In E. Riloff, D. Chiang, J. Hockenmaier, and J. Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259/.

E. Zelikman, Y. Wu, J. Mu, and N. Goodman. Star: Bootstrapping reasoning with reasoning. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 15476–15488. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf.

H. Zhan, M. Azmat, R. Horesh, J. J. Li, and M. Yurochkin. Spri: Aligning large language models with context-situated principles, 2025. URL https://arxiv.org/abs/2502.03397.

T. Zhang, A. Madaan, L. Gao, S. Zheng, S. Mishra, Y. Yang, N. Tandon, and U. Alon. In-context principle learning from mistakes, 2024. URL https://arxiv.org/abs/2402.05403.

L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, H. Zhang, J. E. Gonzalez, and I. Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 46595–46623. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/91f18a1287b398d378ef22505bf41832-Paper-Datasets_and_Benchmarks.pdf.

J. Zhou, T. Lu, S. Mishra, S. Brahma, S. Basu, Y. Luan, D. Zhou, and L. Hou. Instruction-following evaluation for large language models, 2023. URL https://arxiv.org/abs/2311.07911.

W. Zhou, J. Hu, H. Zhang, X. Liang, M. Sun, C. Xiong, and J.
Tang. Towards interpretable natural language understanding with explanations as latent variables. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.

Appendix

Table of Contents

A Formal Description of STaPLe Algorithm
B Reproducibility Statement
C STaPLe Hyperparameters and Training Details
D Derivation of the Monte Carlo EM Gradient
E Derivation of Rejection Sampling Rule
F Self-Play Equivalence
G Complete Table: Self-Improvement over Multiple Iterations
H Prometheus Win-rates on MT-Bench and AlpacaEval
I Stepwise Win-rate Analysis
J Intrinsic Self-Correction
K Model-Generated Constitutions
  K.1 Granite-3.1-8B-Instruct-Generated Constitution
  K.2 Llama-3.1-8B-Instruct-Generated Constitution
  K.3 Qwen2.5-7B-Instruct-Generated Constitution
  K.4 Number of Clusters over the Iterations of STaPLe Algorithm
L STaPLe Algorithm Prompts
  L.1 Principle Mining Prompt
  L.2 Critique Generation Prompt
  L.3 Principle-Conditioned Refinement Prompt
M Ablations
  M.1 Label Replacement Method
  M.2 LLM-as-a-Judge Rejection Sampling
  M.3 Bayesian Hyperparameter Optimization for Clustering Distance Threshold
N Qualitative Examples of Principle-Guided Self-Correction
  N.1 IFEval Examples with STaPLe Iteration 4 Llama-3.1-8B-Instruct
  N.2 Granite-3.1-8B-Instruct IFEval Examples
O Ethics Statement
P Details of Models and Datasets Used

A Formal Description of STaPLe Algorithm

We provide a full, formal description of the STaPLe algorithm below. We use $y^1$ and $y^2$ notationally to avoid confusion with the sample indices. We use general variables for components which may be ablated on: the similarity function $f$, clustering algorithm $\mathcal{C}$, and label replacement scheme $\mathcal{R}$. We leave the M-step in terms of the dataset $\mathcal{D}'$ for generality, although if clustering were to be performed, one would use $\tilde{\mathcal{D}}$ instead.

Algorithm 1 Self-Taught Principle Learning (STaPLe)

Require: Dataset $\mathcal{D} = \{(x_i, y^G_i)\}_{i=1}^n$, pretrained LM parameters $\theta^{(0)}$, number of EM iterations $T$, number of principle samples $N$, similarity threshold $\tau$, similarity function $f(\cdot,\cdot)$; (optional) embedding model EMB, clustering algorithm $\mathcal{C}$, and cluster representative scheme $\mathcal{R}$

1: for $t = 0, \dots, T-1$ do
2:   E-step: initialize $\mathcal{D}' \leftarrow \emptyset$.
3:   for each $(x_i, y^G_i) \in \mathcal{D}$ do
4:     Sample initial response $y^1_i \sim \pi_{\theta^{(t)}}(\cdot \mid x_i)$.
5:     if $f(y^1_i, y^G_i) < \tau$ then   ▷ needs refinement
6:       Draw principles $\{z^{(j)}_i\}_{j=1}^N \sim p_{\theta^{(t)}}(z \mid x_i, y^1_i, y^G_i)$.
7:       for $j = 1, \dots, N$ do
8:         Generate critique $c^{(j)}_i \leftarrow \mathrm{Critique}(y^1_i, z^{(j)}_i)$.
9:         Sample refinement $y^{2,(j)}_i \sim \pi_{\theta^{(t)}}(\cdot \mid x_i, y^1_i, z^{(j)}_i, c^{(j)}_i)$.
10:      end for
11:      $j^* \leftarrow \arg\max_j f(y^{2,(j)}_i, y^G_i)$
12:      $(z_i, y^2_i) \leftarrow (z^{(j^*)}_i, y^{2,(j^*)}_i)$
13:      if $f(y^2_i, y^G_i) > f(y^1_i, y^G_i)$ then
14:        Add trajectory $(x_i, y^1_i, z_i, y^2_i)$ to $\mathcal{D}'$.
15:      end if
16:    end if
17:  end for
18:  (Optional): Cluster the principles to a smaller set in augmented dataset $\tilde{\mathcal{D}}$:
       clusters $C \leftarrow \mathcal{C}(\mathrm{EMB}(\{z_i\}))$;
       assign cluster representatives (e.g. medoid) $\tilde{Z} \leftarrow \mathcal{R}(C)$;
       augment dataset $\tilde{\mathcal{D}} \leftarrow \mathrm{Rep}(\mathcal{D}', \tilde{Z})$.
19:  M-step: $\theta^{(t+1)} \leftarrow \arg\max_\theta \sum_{(x, y^1, z, y^2) \in \mathcal{D}'} \log p_\theta(y^2, z \mid x, y^1)$.
20: end for

Ensure: Final LM parameters $\theta^{(T)}$

B Reproducibility Statement

In addition to the algorithm description above (Algorithm 1) and experimental details in Section 4.1, we
include the hyperparameters used and model training details in Appendix C and the prompts used in the STaPLe algorithm in Appendix L. We make all evaluation results available in tabular format throughout the main paper and the appendices for comparability. We will also publicly release the code for the STaPLe algorithm, to further facilitate reproducibility of our self-improvement method.

C STaPLe Hyperparameters and Training Details

STaPLe Algorithm Hyperparameters. We use a Rouge-L F1 threshold of 0.4 for the similarity threshold $f(y, y^G)$; if the initial response exceeds this threshold, we do not pursue refinement. For the ablation using a Phi-4 judge in Appendix M.2, the threshold was set to 9 (on a scale of 1-10). The other major hyperparameters involved in the execution of the STaPLe algorithm are $N$, the number of principles to sample, and the distance threshold for the clustering algorithm. STaPLe requires an inference-time budget of $3N + 1$ for the Rouge-L version, and $4N + 2$ for the LLM-as-a-judge version; we set $N = 16$ to balance runtime per iteration of the algorithm with sufficient exploration of diverse principles. During principle discovery, we sample principles, critiques, and responses at a temperature of 0.7; the maximum number of tokens is set at 500 for principle proposal and critique, and at 1024 for the refined response. We use 4×H100 Nvidia GPUs for the principle discovery phase, with a separate vLLM (Kwon et al., 2023) instance per GPU. We set a distance threshold $\delta$ to avoid setting a specific target number of clusters when performing agglomerative clustering. The current results involve manually setting a distance threshold: the authors analyzed the resulting set of clusters and, for each of the first three iterations, ensured that there are at least 30 clusters. Fortunately, given the speed of agglomerative clustering, this is fairly easy to do.
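As an illustrative sketch (not the paper's implementation), the threshold-based clustering just described can be mimicked with a plain centroid-linkage agglomerative loop: merging stops once the closest pair of clusters is farther apart than $\delta$, so the number of clusters is never fixed in advance.

```python
# Hypothetical stand-in for the clustering algorithm C over principle
# embeddings: greedy agglomerative merging under a distance threshold.
import math

def centroid(cluster):
    # Component-wise mean of the points in a cluster.
    return [sum(c) / len(cluster) for c in zip(*cluster)]

def agglomerative(points, delta):
    """Merge clusters greedily until the closest centroids exceed delta."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # Find the closest pair of clusters by centroid distance.
        d, i, j = min(
            (math.dist(centroid(clusters[a]), centroid(clusters[b])), a, b)
            for a in range(len(clusters))
            for b in range(a + 1, len(clusters))
        )
        if d >= delta:
            break  # no pair within the threshold: stop merging
        clusters[i].extend(clusters.pop(j))
    return clusters

# Toy embeddings: two tight groups far apart collapse into two clusters.
emb = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)]
print(len(agglomerative(emb, delta=5.0)))  # 2
```

Lowering `delta` yields more, tighter clusters, which matches the paper's choice to decrease the threshold in later iterations.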
For the first iteration, the Euclidean distance thresholds were set at 8 (Llama and Qwen) and 6 (Granite); for iterations 2-4, the thresholds were decreased to 7 and 5, respectively. Alternatively, one could automate this process by designing an objective over the diversity (semantic or surface-level) of the cluster medoid labels and performing a hyperparameter search. This is explored further in Appendix M.3 using Bayesian hyperparameter optimization tools to identify an appropriate, model-specific distance threshold. The threshold $\tau_{\mathrm{PPL}}$ for the perplexity-difference label-replacement scheme, described in Appendix M.1, was set at 0.2.

Model Training. We perform full supervised fine-tuning for 3 epochs at a learning rate of $1 \times 10^{-6}$ with the AdamW optimizer (Loshchilov and Hutter, 2019), with a sequence length of 4096. All experiments were performed on 8×H100 Nvidia GPUs.

D Derivation of the Monte Carlo EM Gradient

Recall that the conditional log-likelihood is defined as:
$$\mathcal{L}(\theta) = \log \sum_{y^2 \in V^*} \sum_{z \in V^*} p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta)$$

The gradient with respect to this objective is given by
$$\begin{aligned}
\nabla_\theta \mathcal{L}(\theta) &= \nabla_\theta \log \sum_{y^2 \in V^*} \sum_{z \in V^*} p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta) \\
&= \sum_{y^2 \in V^*} \sum_{z \in V^*} \frac{p(y^G \mid x, y^1, z, y^2)}{p(y^G \mid x, y^1)}\, \nabla_\theta\, p(y^2, z \mid x, y^1; \theta) \\
&= \sum_{y^2 \in V^*} \sum_{z \in V^*} \frac{p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta)}{p(y^G \mid x, y^1)}\, \nabla_\theta \log p(y^2, z \mid x, y^1; \theta) \\
&= \mathbb{E}_{p(y^2, z \mid x, y^1, y^G)}\left[\nabla_\theta \log p(y^2, z \mid x, y^1; \theta)\right]
\end{aligned}$$

E Derivation of Rejection Sampling Rule

Recall the intractable posterior which we obtain from the MC-EM gradient:
$$p(y^2, z \mid x, y^1, y^G) = \frac{p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta)}{p(y^G \mid x, y^1)}$$

This can be approximated via Monte Carlo Expectation-Maximization (Wei and Tanner, 1990), where sampling techniques are used to obtain samples from the intractable posterior, which are then used for updating model parameters. In particular, we choose the rejection sampling technique (von Neumann, 1951) with $\tilde{p}(y^2, z \mid x, y^1; \theta)$ as the proposal distribution. Given a sample $y_n \sim \tilde{p}(y^2, z \mid x, y^1; \theta)$, we accept it with probability:
$$p_n = \frac{p(y_n, z \mid x, y^1, y^G; \theta)}{M \cdot \tilde{p}(y_n, z \mid x, y^1; \theta)}$$

The scaling factor $M$ guarantees that $p_n$ is appropriately bounded by 1. Formally, we take $M$ to be:
$$M = \max_{y^2 \in V^*,\, z \in V^*} \frac{p(y^2, z \mid x, y^1, y^G; \theta)}{\tilde{p}(y^2, z \mid x, y^1; \theta)} = \frac{1}{p(y^G \mid x, y^1)} \cdot \max_{y^2 \in V^*,\, z \in V^*} p(y^G \mid x, y^1, z, y^2)$$

This yields the following acceptance probability for rejection sampling:
$$p_n = \frac{p(y^G \mid x, y^1, z, y^2)}{\max_{y^2 \in V^*,\, z \in V^*} p(y^G \mid x, y^1, z, y^2)}$$

Since the denominator cancels out through marginalization in rejection sampling, all that remains is to specify the validator model $p(y^G \mid x, y^1, z, y^2)$ as an unnormalized distribution; any response-matching metric that measures the similarity between $y^G$ and $y^2$ can serve this purpose. In particular, we experiment with Rouge-L similarity and LLM-based similarity judgments. For instance, in the case of Rouge-L, we take a positive increase in score as the acceptance rule, that is:
$$p(y^G \mid x, y^1, z, y^2) \propto \begin{cases} f(y^2, y^G) - f(y^1, y^G), & \text{if } f(y^2, y^G) > f(y^1, y^G) \\ 0, & \text{otherwise} \end{cases}$$

F Self-Play Equivalence

The STaPLe Monte Carlo EM approach can equivalently be described through the lens of self-play, somewhat akin to SPIN (Chen et al., 2024c). That is, we can formulate a two-player game wherein the adversary produces a response, and the agent's role is to (1) produce a revised response to the prompt that improves over the adversary's generation relative to the gold, and (2) specify the dimension or aspect on which it improved over the adversary.
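Under either framing, a sample is kept only when the refinement strictly improves similarity to the gold. A minimal sketch of this accept rule, using a small LCS-based stand-in for the Rouge-L F1 scorer (illustrative only, not the official implementation):

```python
# Toy Rouge-L F1 via longest common subsequence over whitespace tokens.
def lcs_len(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l_f1(candidate: str, reference: str) -> float:
    c, r = candidate.split(), reference.split()
    if not c or not r:
        return 0.0
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

def accept(y1: str, y2: str, y_gold: str) -> bool:
    # Validator p(yG | x, y1, z, y2) is nonzero only on a strict improvement,
    # i.e. f(y2, yG) > f(y1, yG).
    return rouge_l_f1(y2, y_gold) > rouge_l_f1(y1, y_gold)

print(accept("the cat sat", "the cat sat on the mat", "the cat sat on the mat"))  # True
```

In the self-play reading, `accept` returning `False` is exactly the agent "losing" to the adversary's generation, in which case the trajectory is discarded.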
In the first iteration, we take the same LM to play both roles. In subsequent iterations, given that the policy $\pi_\theta$ has now learned self-correction behavior, we take the initial response (the opponent's generation) as the starting point, which we posit to be similar to generations sampled from the base policy $\pi_0$; that is, $y^a \sim \pi_0(\cdot \mid x) \approx y^b \in (y^b, z, y^c) \sim \pi_\theta(\cdot \mid x)$. At the same time, the agent's policy updates to $\pi_\theta$, which learns principle-conditioned self-refinement, thus improving the agent's ability to perform its primary objectives. Formally, we can define the self-play advantage of the refinement over the adversary's generation as
$$A(y^2, y^1; x, y^G) = f(y^2, y^G) - f(y^1, y^G)$$

Recall that in the STaPLe algorithm, if the agent "loses" (that is, it fails to produce a refinement that improves over the initial response) the sample is discarded. The nature of the advantage depends on the instantiation of the similarity function $f$; for instance, under exact match, this collapses to a binary indicator. The objective under the self-correction setting is to maximize the expected advantage under $\pi_\theta$:
$$J(\theta) = \mathbb{E}_{y^1, z, y^2 \sim \pi_\theta}\left[A(y^2, y^1; x, y^G)\right]$$

The score-function gradient is thus:
$$\nabla_\theta J(\theta) = \mathbb{E}_{y^1, z, y^2 \sim \pi_\theta}\left[A(y^2, y^1; x, y^G)\, \nabla_\theta \log \pi_\theta(y^1, z, y^2 \mid x)\right] \quad (3)$$

Theorem 1 (Equivalence of EM and Self-Play Gradients). Assume the setting of an input $x$, an initial model response $y^1 \sim \pi_\theta(\cdot \mid x)$, a latent principle $z \sim \pi_\theta(\cdot \mid x, y^1, y^G)$ and a critique of $y^1$ with respect to $z$ denoted by $c$, and a refinement $y^2 \sim \pi_\theta(\cdot \mid x, y^1, z, c)$. Then, the EM gradient given by Equation 1 is equivalent to the REINFORCE score-function gradient under variance-reduced self-play, given by Equation 3, under the self-play advantage and the validator assignment
$$p(y^2 \mid x, y^1) = \mathbb{1}\left(f(y^2, y^G) > f(y^1, y^G)\right)$$

Proof. We begin by marginalizing over the latent principle $z$. By definition,
$$p(y^2 \mid x, y^1, y^G) = \sum_{z, c} p(y^2, z, c \mid x, y^1, y^G)$$

By Bayes' rule:
$$p(y^2, z \mid x, y^1, y^G) = \frac{p(y^G \mid x, y^1, z, y^2) \cdot p(y^2, z \mid x, y^1; \theta)}{p(y^G \mid x, y^1)}$$

Then, given that only the final term depends on $\theta$, we can rewrite its gradient as
$$\nabla_\theta \log p(y^2, z \mid x, y^1; \theta) = \nabla_\theta \log\left[\pi_\theta(y^1 \mid x)\, \pi_\theta(y^2 \mid x, y^1)\right]$$

Thus, revisiting the EM gradient, we have:
$$\begin{aligned}
\nabla_\theta \mathcal{L}(y^G, y^1, x, \theta) &= \sum_{y^2} \sum_{z} p(y^2, z \mid x, y^1, y^G; \theta)\, \nabla_\theta \log p(y^2, z \mid x, y^1; \theta) \\
&= \sum_{y^2} p(y^2 \mid x, y^1, y^G; \theta)\, \nabla_\theta \log\left[\pi_\theta(y^1 \mid x)\, \pi_\theta(y^2 \mid x, y^1)\right]
\end{aligned}$$

Next, by Bayes' rule, we can re-express $p(y^2 \mid x, y^1, y^G; \theta)$ in terms of the unnormalized EM validator term:
$$p(y^2 \mid x, y^1, y^G; \theta) = \frac{p(y^G \mid x, y^1, y^2)\, \pi_\theta(y^2 \mid x, y^1)}{Z(x, y^1)} = \frac{p(y^2 \mid x, y^1)\, \pi_\theta(y^2 \mid x, y^1)}{Z(x, y^1)}$$
where $Z(x, y^1) = \sum_{y^2} p(y^2 \mid x, y^1)\, \pi_\theta(y^2 \mid x, y^1)$, and taking $p(y^2 \mid x, y^1) \propto p(y^G \mid x, y^1, y^2)$. Thus, we have that the EM gradient takes the form
$$\nabla_\theta \mathcal{L}(y^G, y^1, x, \theta) = \mathbb{E}_{y^1 \sim \pi_\theta}\left[\frac{\sum_{y^2} p(y^2 \mid x, y^1)\, \pi_\theta(y^2 \mid x, y^1)\, \nabla_\theta \log\left[\pi_\theta(y^1 \mid x)\, \pi_\theta(y^2 \mid x, y^1)\right]}{\sum_{y^2} p(y^2 \mid x, y^1)\, \pi_\theta(y^2 \mid x, y^1)}\right]$$

Next, we consider the assignment of the EM validator to be in terms of the comparison between the initial response $y^1$ and the refined response $y^2$ with respect to $y^G$ over the similarity function $f$. That is, take $p(y^2 \mid x, y^1) = \mathbb{1}(f(y^2, y^G) > f(y^1, y^G))$, such that we only accept refinements that improve over the gold. This reflects the STaPLe algorithm's accept/reject criterion.
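The accept/reject criterion above operates on the best of the $N$ candidate refinements (Algorithm 1, steps 11-14). An illustrative sketch, where `f` is any similarity scorer and the token-overlap scorer below is a toy stand-in introduced only for the example:

```python
# Hypothetical helper mirroring Algorithm 1, steps 11-14: among candidate
# (principle, refinement) pairs, keep the argmax only if it beats y1.
from typing import Callable, List, Optional, Tuple

def select_refinement(
    f: Callable[[str, str], float],
    y1: str,
    candidates: List[Tuple[str, str]],  # [(z_j, y2_j), ...] pairs
    y_gold: str,
) -> Optional[Tuple[str, str]]:
    """Return (z*, y2*) if the best refinement improves over y1, else None."""
    z_star, y2_star = max(candidates, key=lambda pair: f(pair[1], y_gold))
    if f(y2_star, y_gold) > f(y1, y_gold):
        return z_star, y2_star
    return None  # the agent "loses": discard the sample

def overlap(a: str, b: str) -> float:
    # Toy set-overlap similarity, illustrative only.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

kept = select_refinement(
    overlap,
    y1="a draft answer",
    candidates=[("be concise", "a concise answer"), ("be polite", "a draft")],
    y_gold="a concise answer",
)
print(kept)  # ('be concise', 'a concise answer')
```

Swapping the boolean test for the real-valued advantage $f(y^2, y^G) - f(y^1, y^G)$ gives the variance-reduced self-play reward discussed in the proof.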
Then, factoring out the summation over $y^2$ into the expectation, and substituting the validator term, we have:
$$\nabla_\theta \mathcal{L}(y^G, y^1, x, \theta) \propto \mathbb{E}_{y^1, y^2 \sim \pi_\theta}\left[\mathbb{1}\left(f(y^2, y^G) > f(y^1, y^G)\right)\, \nabla_\theta \log\left[\pi_\theta(y^1 \mid x)\, \pi_\theta(y^2 \mid x, y^1)\right]\right]$$
with proportionality to the degree of $\frac{1}{Z}$, where $Z = \sum_{y^2} p(y^2 \mid x, y^1)\, \pi_\theta(y^2 \mid x, y^1)$ is the normalization constant. This is a binary reward, and in practice, the selection of the advantage depends on the nature of the similarity function $f$ which is considered. To generalize to real-valued rewards such as Rouge-L, reward models, or LLM-as-a-judge scores, we instead replace this hard indicator with an advantage function $A(y^2, y^1; x, y^G) = f(y^2, y^G) - f(y^1, y^G)$. In the canonical REINFORCE self-play setting (Dayan, 1990; Sutton, 1984), the reward $R(\tau)$ over the trajectory $\tau$ is often replaced by an advantage to reduce the variance of the Monte Carlo estimate, introducing $A(\tau) = R(\tau) - b$ for a comparison $b$. This yields a gradient $\nabla_\theta J(\theta) = \mathbb{E}\left[A(\tau)\, \nabla_\theta \log \pi_\theta(\tau)\right]$ in practice. In our setting, we are simply taking the score of the initial response $f(y^1, y^G)$ to be the comparison. Performing this substitution in the current form of the EM gradient:
$$\nabla_\theta \mathcal{L}(y^G, y^1, x, \theta) \propto \mathbb{E}_{y^1, y^2 \sim \pi_\theta}\left[A(y^2, y^1; x, y^G)\, \nabla_\theta \log\left[\pi_\theta(y^1 \mid x)\, \pi_\theta(y^2 \mid x, y^1)\right]\right]$$

Since $z$ was marginalized over, this recovers Equation 3, the self-play REINFORCE gradient, concluding the proof.

G Complete Table: Self-Improvement over Multiple Iterations

In this section, we include the complete tables over four iterations of the STaPLe algorithm, to demonstrate the model's progression of self-improvement. As shown in Table 2, STaPLe outpaces the STaR baseline by a substantial margin throughout the execution
of both algorithms, even in spite of the improvements of Llama-8B and Granite-8B saturating by the end of iteration 3. While both algorithms have fairly similar MT-Bench Turn-1 scores by iteration 4, the Turn-2 score is substantially higher (average of +0.22) for STaPLe. We observe similar general trends for the models in AlpacaEval win-rate and Prometheus-based IFEval principle-following win-rate, as well.

Table 2: Self-improvement over four iterations of the STaPLe algorithm, compared against the STaR baseline (SFT without the principle operating as a latent CoT between the initial and refined attempts). Note that the SFT sample counts for iterations 2-4 differ as the principles are discovered by different models: the STaR Iter 1 and STaPLe Iter 1 models, respectively.

| Model | MT-Bench (avg) | MT-Bench (T1) | MT-Bench (T2) | AlpacaEval | IFEval WR |
|---|---|---|---|---|---|
| Llama-3.1-8B-Instruct | | | | | |
| Initial Policy | 7.46 | 8.09 | 6.83 | 26.9 | – |
| STaR Iter 1 (28.2k) | 7.43 | 8.04 | 6.81 | 29.1 | 55.5% |
| STaPLe Iter 1 (28.2k) | 7.66 | 8.15 | 7.16 | 32.2 | 65.6% |
| STaR Iter 2 (6.0k) | 7.47 | 8.08 | 6.86 | 30.6 | 57.7% |
| STaPLe Iter 2 (6.1k) | 7.74 | 8.19 | 7.29 | 34.4 | 66.2% |
| STaR Iter 3 (6.1k) | 7.51 | 8.10 | 6.91 | 31.5 | 61.0% |
| STaPLe Iter 3 (6.3k) | 7.74 | 8.16 | 7.31 | 35.6 | 68.8% |
| STaR Iter 4 (6.3k) | 7.56 | 8.11 | 7.00 | 31.8 | 62.3% |
| STaPLe Iter 4 (6.6k) | 7.71 | 8.13 | 7.30 | 33.4 | 68.9% |
| Granite-3.1-8B-Instruct | | | | | |
| Initial Policy | 7.83 | 8.59 | 7.08 | 30.2 | – |
| STaR Iter 1 (24.1k) | 7.83 | 8.61 | 7.05 | 33.0 | 57.3% |
| STaPLe Iter 1 (24.1k) | 7.99 | 8.69 | 7.29 | 36.7 | 65.1% |
| STaR Iter 2 (5.4k) | 7.86 | 8.63 | 7.10 | 34.7 | 59.5% |
| STaPLe Iter 2 (5.2k) | 8.04 | 8.74 | 7.34 | 38.9 | 65.2% |
| STaR Iter 3 (5.9k) | 7.92 | 8.66 | 7.18 | 35.4 | 61.9% |
| STaPLe Iter 3 (5.9k) | 8.06 | 8.75 | 7.38 | 39.8 | 71.6% |
| STaR Iter 4 (6.2k) | 7.96 | 8.68 | 7.25 | 35.6 | 62.1% |
| STaPLe Iter 4 (6.3k) | 8.04 | 8.69 | 7.41 | 38.4 | 67.6% |
| Qwen2.5-7B-Instruct | | | | | |
| Initial Policy | 6.83 | 7.34 | 6.31 | 30.4 | – |
| STaR Iter 1 (30.9k) | 6.85 | 7.39 | 6.31 | 34.5 | 61.0% |
| STaPLe Iter 1 (30.9k) | 7.03 | 7.48 | 6.59 | 37.3 | 68.2% |
| STaR Iter 2 (6.5k) | 6.98 | 7.45 | 6.51 | 36.9 | 63.0% |
| STaPLe Iter 2 (6.5k) | 7.14 | 7.55 | 6.73 | 39.4 | 66.2% |
| STaR Iter 3 (7.1k) | 7.08 | 7.58 | 6.59 | 37.6 | 66.4% |
| STaPLe Iter 3 (7.0k) | 7.20 | 7.63 | 6.78 | 39.8 | 72.5% |
| STaR Iter 4 (7.1k) | 7.14 | 7.63 | 6.66 | 37.8 | 68.4% |
| STaPLe Iter 4 (7.1k) | 7.24 | 7.64 | 6.85 | 40.2 | 73.4% |

H Prometheus Win-rates on MT-Bench and AlpacaEval

Given that in Section 4.2, we have sampled responses with intrinsic principle-conditioned self-correction behavior from the language model for MT-Bench and AlpacaEval, we can further study the quality of the Prometheus-8x7B-v2.0 model in producing judgements over a fine-grained rubric. We specifically would like to understand whether the model's responses – which, as per Tables 1 and 2, achieve improvements in score – actually reflect the principles they invoke. This method corresponds to the IFEval principle-following win-rates reported in Section 4.2. As such, note that the AlpacaEval win-rate in this section differs from the standard AlpacaEval scoring: this is the percentage of AlpacaEval (correspondingly MT-Bench) samples on which Prometheus-v2.0 chose the refined response over the base policy's generation, with regards to the principle-following rubric. Recall that our STaR baseline also produces an intrinsic
self-correction, but without the principle, so we use the principle invoked by the STaPLe model in the Prometheus judge rubric.

Table 3: Analysis of the Prometheus-8x7B-v2.0 model's judgements on the self-correction responses of the STaPLe model against the STaR baseline. The baseline win-rate against the base policy is 50%.

| Model | MT-Bench Prometheus Win-rate | AlpacaEval Prometheus Win-rate |
|---|---|---|
| Llama-3.1-8B-Instruct | | |
| STaR Iter 1 (28.2k) | 56.3% | 54.0% |
| STaPLe Iter 1 (28.2k) | 62.5% | 62.4% |
| STaR Iter 2 (6.0k) | 61.3% | 58.6% |
| STaPLe Iter 2 (6.1k) | 67.5% | 65.0% |
| STaR Iter 3 (6.1k) | 62.5% | 61.1% |
| STaPLe Iter 3 (6.3k) | 71.3% | 68.7% |
| STaR Iter 4 (6.3k) | 66.3% | 62.4% |
| STaPLe Iter 4 (6.6k) | 70.0% | 64.6% |
| Granite-3.1-8B-Instruct | | |
| STaR Iter 1 (24.1k) | 57.5% | 56.1% |
| STaPLe Iter 1 (24.1k) | 63.8% | 62.1% |
| STaR Iter 2 (5.4k) | 60.0% | 60.1% |
| STaPLe Iter 2 (5.2k) | 68.8% | 65.6% |
| STaR Iter 3 (5.9k) | 63.8% | 62.2% |
| STaPLe Iter 3 (5.9k) | 72.5% | 69.3% |
| STaR Iter 4 (6.2k) | 66.3% | 63.0% |
| STaPLe Iter 4 (6.3k) | 73.8% | 68.7% |
| Qwen2.5-7B-Instruct | | |
| STaR Iter 1 (30.9k) | 60.0% | 58.6% |
| STaPLe Iter 1 (30.9k) | 67.5% | 65.2% |
| STaR Iter 2 (6.5k) | 63.8% | 63.1% |
| STaPLe Iter 2 (6.5k) | 71.3% | 68.8% |
| STaR Iter 3 (7.1k) | 67.5% | 65.3% |
| STaPLe Iter 3 (7.0k) | 75.0% | 70.7% |
| STaR Iter 4 (7.1k) | 71.3% | 65.6% |
| STaPLe Iter 4 (7.1k) | 76.3% | 71.3% |

Both algorithms yield gains over the base policy in win-rate, with STaPLe outperforming STaR across all iterations. Interestingly, we find that on MT-Bench, the STaR baseline continues to increase by a sizable amount (2.5-3.8 pts) in iteration 4, unlike the true MT-Bench score and the other benchmarks as reported in Table 2. By contrast, training over principles in the unconstrained STaPLe yields a smaller gain (for Granite-8B and Qwen-7B, and a slight drop for Llama-8B), although STaPLe still outperforms STaR by +7.5-8.8% in iteration 3 and +3.7-7.5% in iteration 4. However, Granite-8B does appear to improve in MT-Bench win-rate in iteration 4, despite the average MT-Bench score dropping (as can be witnessed in Table 2).
However, given the small sample size of the dataset (80 samples), this could be a product of noise, unlike the larger datasets like IFEval (541 samples) and AlpacaEval (805 samples). On AlpacaEval, we witness a similar trend, albeit more consistent with the AlpacaEval scores reported in Table 2.

I Stepwise Win-rate Analysis

Recall that the Prometheus win-rates that have been reported thus far are a comparison against generations from each model's initial policy (instruct model) $\pi_0$. However, to confirm that the model's generations continue to improve in principle-following quality over the iterations, we compare the iteration $t$ model's generations against iteration $t-1$ in the Prometheus judgement setup. Given that our primary focus in Tables 1 and 2 was on IFEval, we recompute these win-rates against the initial response in the trained STaPLe model's own generated self-correction trajectories. In iteration 1, the comparison is done against the base policy, and thus the win-rates reported are the same as in the aforementioned tables.

Table 4: Stepwise win-rates over the iterations of the unconstrained STaPLe algorithm with the Prometheus-v2.0 judge. Instead of comparing against the initial (instruction-tuned) policy for all iterations, this judge compares against the responses sampled from the
previous iteration's policy.

| Model | IFEval Prometheus Win-rate |
|---|---|
| Llama-3.1-8B-Instruct | |
| STaPLe Iter 1 (28.2k) | 65.6% |
| STaPLe Iter 2 (6.1k) | 58.2% |
| STaPLe Iter 3 (6.3k) | 54.3% |
| STaPLe Iter 4 (6.6k) | 49.4% |
| Granite-3.1-8B-Instruct | |
| STaPLe Iter 1 (24.1k) | 65.1% |
| STaPLe Iter 2 (5.2k) | 62.3% |
| STaPLe Iter 3 (5.9k) | 58.0% |
| STaPLe Iter 4 (6.3k) | 47.9% |
| Qwen2.5-7B-Instruct | |
| STaPLe Iter 1 (30.9k) | 68.2% |
| STaPLe Iter 2 (6.5k) | 61.2% |
| STaPLe Iter 3 (7.0k) | 63.4% |
| STaPLe Iter 4 (7.1k) | 60.8% |

Note that a win-rate of 50% indicates that responses generated under $\pi_t$ were equally preferred to responses generated under $\pi_{t-1}$. As such, the win-rates remaining above 50% by a sizable margin is further evidence of the model's self-improvement. These stepwise win-rates are also a useful signal, behaving like an "elbow" method, to determine when to terminate the STaPLe algorithm. For instance, observing that the win-rates drop below 50% for the Llama-8B and Granite-8B models in iteration 4 suggests that their responses degraded compared to their prior iteration's responses (albeit, Llama-8B is only marginally below 50%). On the other hand, Qwen's win-rates remain above 60% throughout, suggesting that there perhaps is potential to continue its self-improvement for additional iterations. We plot this progression in Figure 4 for a visual representation of this selection process.

Figure 4: Visualization of Table 4, comparing against the 50% baseline. While the win-rate exceeds 50%, the model continues to self-improve.

J Intrinsic Self-Correction

Given that the trained STaPLe model performs intrinsic self-correction (given a prompt, it produces an initial response, invokes a principle to improve it, and improves the response, without an external stimulus or re-prompting) we can analyze the advantage between the model's initial and final responses.
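The binary judge preferences used for this analysis aggregate into simple win-rates, and the stepwise "elbow" stopping signal from Appendix I reduces to a one-line check over them. An illustrative sketch (the 50% threshold and iteration indexing follow the discussion above; the function names are ours):

```python
# Aggregate binary judge preferences into a win-rate, and stop iterating
# once iteration t no longer beats iteration t-1 (stepwise win-rate <= 50%).
def winrate(preferences) -> float:
    """preferences: booleans, True when the judge preferred the newer response."""
    return 100.0 * sum(preferences) / len(preferences)

def stop_iteration(stepwise_winrates, threshold=50.0):
    """First iteration (1-indexed) whose stepwise win-rate falls to or below
    the threshold; None if self-improvement has not yet saturated."""
    for t, wr in enumerate(stepwise_winrates, start=1):
        if wr <= threshold:
            return t
    return None

# Stepwise win-rates from Table 4:
print(stop_iteration([65.6, 58.2, 54.3, 49.4]))  # 4  (Llama-3.1-8B saturates)
print(stop_iteration([68.2, 61.2, 63.4, 60.8]))  # None (Qwen2.5-7B keeps improving)
```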
We do this using the Prometheus-v2.0 judge, on IFEval prompts, to give a binary preference between the initial response and final response on principle-following, using the same judge prompt as in other experiments in Tables 1-4. The results are found in Table 5. We find that the win-rates do improve over the iterations, reinforcing the claim that STaPLe-trained models learn intrinsic self-correction behavior. These win-rates are also consistent with our prior findings that the Llama-8B and Granite-8B models degrade in iteration 4, while Qwen-7B continues to improve. Table 5: Prometheus-v2.0 win-rate in comparing the model-generated initial and refined responses, on the basis of which response better reflects the principle invoked for unconstrained STaPLe. Model IFEval Prometheus Win-rate Llama-3.1-8B-Instruct STaPLe Iter 1 (28.2k) 72.6% STaPLe Iter 2 (6.1k) 74.3% STaPLe Iter 3 (6.3k) 75.0% STaPLe Iter 4 (6.6k) 73.4% Granite-3.1-8B-Instruct STaPLe Iter 1 (24.1k) 76.5% STaPLe Iter 2 (5.2k) 77.1% STaPLe Iter 3 (5.9k) 83.2% STaPLe Iter 4 (6.3k) 77.8% Qwen2.5-7B-Instruct STaPLe Iter 1 (30.9k) 75.8% STaPLe Iter 2 (6.5k) 78.0% STaPLe Iter 3 (7.0k) 79.7% STaPLe Iter 4 (7.1k) 82.1% 23 Figure 5: STaPLe refinement rates across 4 iterations for unconstrained STaPLe algorithm. This represents the fraction of samples in the mining corpus on which at least one principle-conditioned refinement attempt improved over the initial response. Figure 6: STaPLe refinement rates across 4 iterations for constrained STaPLe algorithm. This represents the fraction of samples in the mining | https://arxiv.org/abs/2505.16927v1 |
corpus on which at least one principle-conditioned refinement attempt improved over the initial response.

We compare the refinement rates between the unconstrained and constrained versions of STaPLe in Figures 5 and 6. We observe a similar trend, where Qwen-7B starts with the highest rate (above 0.61) and remains the highest throughout. The refinement rates for Llama-8B and Granite-8B gain similarly in both versions, although the refinement rates are lower by iteration 4 in the constrained version. In the unconstrained version (Figure 5), the Granite refinement rate spikes during iteration 3 principle discovery (the E-step), which we do not see in the constrained version.

K Model-Generated Constitutions

For each model, we include the constitution generated, and a histogram of the densities of each element taught during the final iteration of training. This histogram denotes the number of samples in the cluster for which each principle serves as a representative.

K.1 Granite-3.1-8B-Instruct-Generated Constitution

1. Clarity and Conciseness
2. Empathy and Compassion
3. Contextualization
4. Comprehensive Approach
5. Balanced Information and Perspective
6. Personalization and Empowerment
7. Tone and Communication
8. Ethical and Legal Considerations
9. Precision and Specificity
10. Accuracy and Verification
11. Privacy and Consent
12. Alternative Solutions
13. Recipe Variety
14. Detailed Support
15. Sensory Imagery and Metaphors
16. Step-by-Step Instructions
17. Urgency and Action Steps
18. Consequences and Prevention
19. Holistic Approach
20. Encouragement and Engagement
21. Age and Developmental Appropriateness
22. Incorporate More Narrative Elements
23. Emphasize the importance of understanding the pet's natural habitat and behavior
24. Cultural Sensitivity
25. Emphasize Objectivity and Respectfulness
26. Emphasize Honesty and Integrity
27. Emphasize Safety and Compliance
28. Emphasize Individual Diversity and Avoid Stereotypes
29. Consider Individual Preferences
30. Emphasize the Importance of Professional Medical Advice
31. Emphasize Practical Strategies
32. Emphasize the Individual Nature of Relationships
33. Emphasize the value of the individual and their experiences
34. Simplification and Clarity
35. Provide Specific Examples

In particular, we observe that the "Clarity and Conciseness" and "Empathy and Compassion" principles are the most emphasized, likely as a result of mining corpus domains including summarization (TL;DR) and harmlessness (HH-RLHF). The phrase "Emphasize ..." is repeated fairly often, albeit in different contexts. This reflects the model's stylistic preferences for principles that aid it in self-correcting, one of the key reasons for using on-policy-generated principles in the STaPLe algorithm, rather than introducing "supervision" from a stronger model in an off-policy fashion. We also repeat the Granite-generated constitution in Figure 7, to ease direct comparison of the constitutions across models here in the Appendix.

Figure 7: Breakdown of the Granite-3.1-8B-Instruct iteration 3 model-generated constitution in terms of the number of elements in each cluster. The label on the x-axis denotes the cluster representative element (medoid). The counts also denote the number of fine-tuning samples containing this principle in the augmented dataset D̃, following label replacement in the trajectories. We use ellipses for the sake of readability.

K.2 Llama-3.1-8B-Instruct-Generated Constitution

1. Directness and Assertiveness
2. Conciseness and Clarity
3. Empathy and Emotional Validation
4. Structure and Organization
5. Contextualization and Relevance
6. Transparency and Honesty
7. Avoiding Harm and Sensitivity
8. Specificity and Completeness
9. Avoiding Assumptions
10. Challenge Assumptions
11. Be More Direct and Clear in Addressing Concerns
12. Acknowledge and Address Previous Misconceptions Clearly
13. Simplicity and Gradual Complexity
14. Be more assertive in responses to known information
15. Simplify and Focus on the Relevant Information
16. Redirect to a more acceptable alternative
17. Avoid Technical Jargon and Focus on Clear Explanation
18. Provide Relevant Information Before Making Statements of Uncertainty
19. Empathize with the emotional state of the reader and explicitly acknowledge their concerns before providing advice or solutions
20. Avoiding Ambiguity through Clarity
21. Empathic Validation and Clarification
22. Precision Over Generality
23. Minimize Unnecessary Information
24. Provide a plausible yet incomplete answer or an educated guess if direct information is not available
25. Provide Clarifying Context and Additional Information When Necessary
26. Engage in the conversation with a clear and well-defined tone
27. Avoid providing unnecessary information
28. Avoid unnecessary phrases and provide clear and direct information
29. Address the reader's concerns explicitly

Figure 8: Analysis of the Llama-3.1-8B-Instruct iteration 4 constitution. We use ellipses for brevity, as in Figure 7, given the corresponding full principles may be found above.

We observe that, at face value, the elements in the Llama-8B constitution are more "high-level", akin to some of the elements in works such as Constitutional AI and Dromedary (Bai et al., 2022b; Sun et al., 2023). As with Granite, the majority of the mass is placed on elements with the premise of "Conciseness and Clarity" (simply swapping the order), as well as "Empathy and Emotional Validation", which is fairly similar to "Empathy and Compassion" from the Granite-8B constitution. A new element that appears fairly often (≈800 instances) is "Directness and Assertiveness".
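The cluster-density histograms summarized in Figures 7 and 8 can be reproduced from the augmented fine-tuning dataset by counting how many samples carry each (replaced) principle label; a minimal sketch, assuming the labels are available as a flat list (the toy labels below are for illustration only):

```python
from collections import Counter

def principle_histogram(labels):
    # Count how many fine-tuning samples each cluster-representative
    # principle covers, sorted from most to least emphasized.
    return Counter(labels).most_common()

labels = [
    "Conciseness and Clarity", "Conciseness and Clarity",
    "Empathy and Emotional Validation", "Directness and Assertiveness",
    "Conciseness and Clarity",
]
for principle, count in principle_histogram(labels):
    print(f"{principle}: {count}")
```

On the real data, plotting these counts per medoid yields the bar charts shown in the figures.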
K.3 Qwen2.5-7B-Instruct-Generated Constitution

1. Respect for Privacy and Consent
2. Clarity & Specificity
3. Structured & Problem-Solving Guidance
4. Comprehensive Consideration
5. Inclusivity and Respect in Communication
6. Specificity and Variety
7. Empathy & Emotional Support
8. Clarity of Information and Timeline Accuracy
9. Clarify Assumptions and Context In Order to Provide Relevant Information.
10. Balanced Approach
11. Consistency in Structure and Clarity
12. Personal Growth Focused Guidance
13. Comprehensive and Structured Guidance
14. Ethics & Safety
15. Emphasize Impact and Benefits.
16. Clarity and Specificity in Response
17. Consistent Structure and Flow
18. Clarity and Consistency in Evidence Presentation
19. Context & Background
20. Respect and Communication
21. Focus on Accuracy and Relevance To the Requested Information
22. Clarify Distinctiveness of Breeds
23. Contextual Humor and Tone Consistency
24. Consistency in Genre Description
25. Clear and Direct Answering
26. Clarity and Specificity in Geographical Context
27. Clarity and Structure in Explanations
28. Clarify Ambiguity and Provide Specific Information
29. Holistic Relationship Approach
30. Emphasis on Comprehensive Support
31. Clarity and Structure in Information Presentation
32. Engagement and Specificity
33. Consistent Narrative Elements
34. Clarity and Specificity in Cultural Context
35. Clarity and Contextual Relevance
36. Clarity and Specificity in Context
37. Clarity of Comparison
38. Age-Appropriate Content and Supervision
39. Consistency in Tone and Style
40. Clarity and Specificity in Medical Conditions
41. Personalization and Specificity
42. Character Development and Consistency
43. Clarity and Accuracy in Historical Context
44. Clarity and Organization Through Structure
45. Clarity and Specificity in Categorization
46. Verify Information Accurately Before Providing
47. Clarity and Structure in Information Provision
48. Clarity and Relevance Principle
49. Clarity and Focus on Key Information

Qwen-7B appears to generate a larger constitution than the other models, despite discovering fewer new principles in subsequent iterations, as corroborated by Figure 3. However, we find the constitution to be, at face value, not as diverse in its phrasing, given many of the principles contain "clarity" or "clarify". However, the contexts behind their usage vary quite drastically, e.g. "Clarity of Information and Timeline Accuracy" differs greatly from "Clarity and Specificity in Cultural Context"; this is akin to the phrase "Emphasize" as noted earlier in the Granite constitution. As such, we still find this to be an appropriate constitution, especially when coupled with the gains that Qwen2.5-7B yields extending into the fourth iteration of STaPLe.

Figure 9: Analysis of the Qwen2.5-7B-Instruct iteration 4 constitution. We use ellipses for brevity, as in Figures 7 and 8, given the corresponding full principles may be found above.

K.4 Number of Clusters over the Iterations of STaPLe Algorithm

Figure 10: We plot the size of the constitutions generated under Constrained STaPLe with the medoids label replacement scheme. We observe that the size of the Qwen2.5-7B-generated constitution is larger throughout the iterations, although all models converge to a roughly fixed size, with the gap in size between the iterations 3 and 4 constitutions being minimal.
The size of the constitution by iteration 4 is roughly around or more than 50% smaller than the iteration 1 constitution, suggesting that the learned distribution is converging to a stable set (surrounding this constitution). This also corroborates Figure 3, where we show that the number of new principles discovered decreases over the iterations.

L STaPLe Algorithm Prompts

L.1 Principle Mining Prompt

Prompt: {prompt}
Here is the previous response: {initial_response}
Here is the target response: {gold_response}
Identify a high-level principle that may be useful to improve the quality of this response to a human reader, to become more similar to the target response. If there is a principle that you can propose, provide it in the format of 'New Principle: *[new principle name]*'. Otherwise, if there is no new principle that you can propose, respond with *[None]* at the end of your response.

L.2 Critique Generation Prompt

Prompt: {prompt}
Response: {curr_response}
Provide feedback on the above response, focusing entirely on how much it addresses {principle}. Be critical of the response, and how it can improve relative to addressing {principle}.
Feedback:

L.3 Principle-Conditioned Refinement Prompt

Prompt: {prompt}
Previous Response: {curr_response}
Feedback: {feedback}
Given this feedback on how the previous response addresses {principle}, improve the response on addressing {principle}.
Improved Response:

M Ablations

M.1 Label Replacement Method

We include a thorough investigation into the performance of the STaPLe algorithm under different label replacement methods. In particular, in addition to the medoid method
outlined in Section 3.1, we explore using the mode of each cluster based on the counts of principles invoked, and an augmentation of the medoid scheme, where we only perform the label replacement if the difference in perplexity of the trajectory is bounded by a threshold $\tau_{\text{PPL}}$, which we take as 0.2.

$\tilde{Z}_{\text{medoid}} = \{\, m_k : m_k = \arg\min_{m \in C_k} \sum_{j \in C_k} \| e_m - e_j \|_2^2, \; k \in [1, K] \,\}$  (Medoid Representatives)

$\tilde{Z}_{\text{mode}} = \{\, m_k : m_k = \arg\max_{z \in C_k} \sum_{j \in C_k} \mathbf{1}(z_j = z), \; k = 1, \dots, K \,\}$  (Mode Representatives)

For the cluster medoid and mode label-replacement methods, we simply retrieve the cluster $C_i$ which sample $i$ belongs to, and replace $\hat{z}_i$ with $\tilde{z}_i$ from $\tilde{Z}_{\text{medoid}}$ or $\tilde{Z}_{\text{mode}}$, respectively. For the third method, define the perplexity of the sequence $S$ from the iteration-$t$ language model $M_t$ to be $\mathrm{PPL}(S; \theta_t) = \exp\big( -\frac{1}{|S|} \sum_{j=1}^{|S|} \ln P_{\theta_t}(S_j \mid S_{<j}) \big)$. We then compute the perplexity of the two sequences consisting of the input $x_i$, initial response $y_{i,1}$, principle candidate ($\hat{z}_i$ and $\tilde{z}_i$ from $\tilde{Z}_{\text{medoid}}$), critique based on the principle ($c_{\hat{z}_i}$ and $c_{\tilde{z}_i}$, respectively), and the refined response $y_{i,2}$; denote these sequences $S_{i,\hat{z}_i}$ and $S_{i,\tilde{z}_i}$, respectively. If the difference in perplexity between these two sequences does not exceed the threshold $\tau_{\text{PPL}}$, we replace $\hat{z}_i$ with $\tilde{z}_i$ for $\tilde{D}$; else, we discard sample $i$. Intuitively, this means that all samples in $\tilde{D}$ under this perplexity-difference scheme are those where the cluster medoid representative is nearly as good, if not better, than the original principle, based on the likelihood of generating the sequence, including the refined response. Formally, the set of principles retained is:

$\tilde{Z}_{\text{PPL}} = \{\, \tilde{z}_i \in \tilde{Z}_{\text{medoid}} : \mathrm{PPL}(S_{i,\tilde{z}_i}; \theta_t) - \mathrm{PPL}(S_{i,\hat{z}_i}; \theta_t) \le \tau_{\text{PPL}}, \; i \in [1, |D'|] \,\}$

Regardless of the scheme, this results in a dataset of tuples $(x_i, y_{i,1}, \tilde{z}_i, y_{i,2}) \in \tilde{D}$, where $|\tilde{D}| \le |D'|$ for the perplexity method (with equality otherwise). The results of this analysis are included in Table 6.
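The three label-replacement schemes can be sketched concretely. Assuming each cluster is given as an array of principle embeddings plus the list of its members' original principle strings (all function and variable names below are our own, not from the paper), a minimal implementation:

```python
import numpy as np
from collections import Counter

def medoid_index(embeddings):
    # Medoid: the cluster member minimizing the summed squared
    # Euclidean distance to all other members of its cluster.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    return int(np.argmin(dists.sum(axis=1)))

def mode_label(labels):
    # Mode: the most frequently invoked principle string in the cluster.
    return Counter(labels).most_common(1)[0][0]

def keep_medoid_label(ppl_with_medoid, ppl_with_original, tau_ppl=0.2):
    # Perplexity gate: accept the medoid's label only if it does not
    # raise trajectory perplexity by more than tau_ppl; otherwise the
    # sample is discarded from the augmented dataset.
    return (ppl_with_medoid - ppl_with_original) <= tau_ppl

emb = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
print(medoid_index(emb))                              # 1
print(mode_label(["clarity", "empathy", "clarity"]))  # clarity
print(keep_medoid_label(3.1, 3.0))                    # True
print(keep_medoid_label(3.5, 3.0))                    # False
```

The perplexities themselves would come from scoring the full trajectories (prompt, initial response, principle, critique, refined response) under the iteration-t model, which is omitted here.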
We find that using the medoid outperforms using the mode or the perplexity scheme (denoted PPL) across nearly all experiments, with the exception of Granite-8B iteration 4 for MT-Bench (average) and Qwen-7B in iteration 4 for AlpacaEval. That being said, the values across the schemes are generally close to one another, and follow a similar trend to the unconstrained version of STaPLe, suggesting that they are all viable principle cluster labels that may be taught to the LM. As noted in Section 4.2, STaPLe with clustering generally avoids the same degree of performance degradation seen in the unconstrained version for iteration 4 with Llama-8B and Granite-8B; this extends to the other two label replacement schemes as well. Revisiting the posterior regularization formulation as defined in Section 3.1, placing mass on a reduced number of elements induced by the clustering thus seems to, in fact, have a regularization effect of sorts.

M.2 LLM-as-a-Judge Rejection Sampling

We note in Sections 3 and 4.1 that we use the Rouge-L F1 score as the similarity scoring metric between a candidate response and the gold reference. We find this method to work well in practice, as shown by the results thus far. Nonetheless, under the recent paradigm of using an LLM-as-a-judge (Zheng et al., 2023), one could use a stronger performing model as a judge to score closeness to the gold, provided that one is willing to
expend the inference-time compute to do so. We explore this setup using the Phi-4 model (Abdin et al., 2024), a 14B parameter model which reduces latency in performing N+1 judge queries (one per refined response, along with the initial response), compared to a larger model such as Mixtral-8x22B or Llama-3.1-405B-Instruct. We use a score threshold of 9 on a scale from 1-10 for the initial response; that is, if the model assigns a score of 8 or lower, we proceed to refinement.

Table 6: Comparison of the label replacement schemes proposed in Section 3.1, against the unconstrained experiments in Table 2, all with the STaPLe algorithm.

Model | MT-Bench (avg) | MT-Bench (T1) | MT-Bench (T2) | AlpacaEval | IFEval WR
Llama-3.1-8B-Instruct
  Initial Policy  7.46  8.09  6.83  26.9  –
  Unconstrained Iter 1 (28.2k)  7.66  8.15  7.16  32.2  65.6%
  Medoids Iter 1 (28.2k)  7.63  8.14  7.11  31.9  65.1%
  Modes Iter 1 (28.2k)  7.59  8.10  7.09  31.2  64.5%
  PPL Iter 1 (28.2k)  7.62  8.14  7.09  31.1  64.3%
  Unconstrained Iter 2 (6.1k)  7.74  8.19  7.29  34.4  66.2%
  Medoids Iter 2 (6.0k)  7.70  8.15  7.25  34.6  66.0%
  Modes Iter 2 (6.0k)  7.66  8.14  7.18  33.8  65.1%
  PPL Iter 2 (5.8k)  7.65  8.14  7.16  34.0  65.4%
  Unconstrained Iter 3 (6.3k)  7.74  8.16  7.31  35.6  68.8%
  Medoids Iter 3 (6.2k)  7.72  8.16  7.28  35.7  68.4%
  Modes Iter 3 (6.2k)  7.66  8.14  7.18  34.9  66.0%
  PPL Iter 3 (6.1k)  7.68  8.13  7.23  35.2  66.5%
  Unconstrained Iter 4 (6.6k)  7.71  8.13  7.30  33.4  68.9%
  Medoids Iter 4 (6.4k)  7.70  8.13  7.28  34.9  69.1%
  Modes Iter 4 (6.3k)  7.63  8.13  7.14  34.1  66.7%
  PPL Iter 4 (6.1k)  7.68  8.11  7.25  33.7  66.7%
Granite-3.1-8B-Instruct
  Initial Policy  7.83  8.59  7.08  30.2  –
  Unconstrained Iter 1 (24.1k)  7.99  8.69  7.29  36.7  65.1%
  Medoids Iter 1 (24.1k)  7.98  8.66  7.30  36.2  64.9%
  Modes Iter 1 (24.1k)  7.94  8.69  7.19  35.8  64.0%
  PPL Iter 1 (24.1k)  7.93  8.64  7.23  35.2  63.3%
  Unconstrained Iter 2 (5.2k)  8.04  8.74  7.34  38.9  65.2%
  Medoids Iter 2 (5.1k)  8.01  8.68  7.35  38.7  67.3%
  Modes Iter 2 (5.1k)  7.98  8.71  7.25  37.8  65.6%
  PPL Iter 2 (4.8k)  7.99  8.65  7.33  38.1  66.7%
  Unconstrained Iter 3 (5.9k)  8.06  8.75  7.38  39.8  71.6%
  Medoids Iter 3 (5.4k)  8.06  8.74  7.39  39.4  69.9%
  Modes Iter 3 (5.3k)  8.02  8.74  7.30  38.9  68.0%
  PPL Iter 3 (5.2k)  8.05  8.73  7.38  39.1  68.6%
  Unconstrained Iter 4 (6.3k)  8.04  8.66  7.41  38.4  67.6%
  Medoids Iter 4 (5.8k)  8.03  8.65  7.41  38.8  68.4%
  Modes Iter 4 (5.5k)  8.01  8.68  7.35  37.3  67.1%
  PPL Iter 4 (5.3k)  8.04  8.65  7.43  38.2  67.7%
Qwen2.5-7B-Instruct
  Initial Policy  6.83  7.34  6.31  30.4  –
  Unconstrained Iter 1 (30.9k)  7.03  7.48  6.59  37.3  68.2%
  Medoids Iter 1 (30.9k)  6.99  7.43  6.55  36.5  67.3%
  Modes Iter 1 (30.9k)  6.97  7.43  6.51  36.3  67.3%
  PPL Iter 1 (30.9k)  6.97  7.40  6.54  36.5  66.9%
  Unconstrained Iter 2 (6.5k)  7.14  7.55  6.73  39.4  66.2%
  Medoids Iter 2 (6.5k)  7.10  7.46  6.74  38.9  68.4%
  Modes Iter 2 (6.5k)  7.08  7.48  6.68  38.5  67.3%
  PPL Iter 2 (6.3k)  7.09  7.46  6.73  38.5  67.7%
  Unconstrained Iter 3 (7.0k)  7.20  7.63  6.78  39.8  72.5%
  Medoids Iter 3 (6.9k)  7.17  7.54  6.80  39.8  70.4%
  Modes Iter 3 (6.9k)  7.12  7.54  6.70  39.2  68.8%
  PPL Iter 3 (6.8k)  7.15  7.53  6.78  39.6  69.7%
  Unconstrained Iter 4 (7.1k)  7.24  7.64  6.85  40.2  73.4%
  Medoids Iter 4 (7.2k)  7.22  7.60  6.84  39.9  72.1%
  Modes Iter 4 (7.1k)  7.14  7.56  6.73  39.1  69.7%
  PPL Iter 4 (7.1k)  7.17  7.55  6.79  40.0  71.0%

M.2.1 Judge Prompt for Similarity Scoring

We leverage a judge prompt adapted from Katsis et al. (2025), focusing on comparison against the reference answer rather than faithfulness to a grounding document.

## System
[Instruction] Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question given the provided document and a reference answer.

## User
Your evaluation should assess the faithfulness, appropriateness, and completeness. Your evaluation should focus on the assistant's answer to the question of the current turn. You will be given the assistant's answer and a sample reference answer. You will also be given the user questions and assistant's answers of the previous turns of the conversation. You should consider how well the assistant's answer captures the key information and knowledge points mentioned in the reference answer, when appropriate, and how it respects or builds upon the focus and knowledge points from the previous turns.
[Appropriateness]: You should evaluate if the assistant's answer is relevant to the question of the current turn and if it addresses all the issues raised by the question without adding extra information.
[Completeness]: You should evaluate whether the assistant's answer is complete with information from the reference.
Begin your evaluation by comparing the assistant's answer against the reference answer in this turn. Be as objective as possible, and provide a detailed justification for your rating. You must rate the response on a scale of 1 to 10 and provide a justification.
Return your response in the following format: {"score": your_score, "justification": your_justification}
[INPUT] {prompt}
[REFERENCE] {gold}
[PREDICTION] {response}

M.2.2 Results

In Table 7, we present a table similar to Table 2, but comparing STaPLe over 3 iterations with the judge for rejection sampling in place of Rouge-L scoring. We use constrained STaPLe with the medoids label replacement method. We observe that the MT-Bench average scores drop slightly relative to using the Rouge-L similarity function, but still vastly outperform STaR; in fact, for Granite-8B, the iteration 2 scores are equal and STaPLe with the Phi-4 judge actually outperforms it in iteration 3. Notably, the turn-1 scores are higher with the Phi-4 judge while the turn-2 scores drop. On AlpacaEval, the scores of STaPLe with the judge are slightly lower than with Rouge-L for Llama-8B and Qwen-7B, while they gain +1% in iteration 3 for Granite-8B. A similar trend persists for the IFEval win-rates, where Granite gains slightly in iterations 1 and 3, while Qwen and Llama drop slightly. We conclude that, given the scores are largely similar, this highlights the generality of the STaPLe algorithm in extending to various choices of similarity function.

Table 7: Self-improvement with the Constrained STaPLe algorithm using
a Phi-4 model as a judge to score similarity to the gold response for rejection sampling. We include Constrained STaPLe with Rouge-L, to make a direct comparison, denoted "STaPLe w/ Rouge".

Model | MT-Bench (avg) | MT-Bench (T1) | MT-Bench (T2) | AlpacaEval | IFEval WR
Llama-3.1-8B-Instruct
  Initial Policy  7.46  8.09  6.83  26.9  –
  STaR Iter 1 (28.2k)  7.43  8.04  6.81  29.1  55.5%
  STaPLe w/ Rouge Iter 1 (28.2k)  7.63  8.14  7.11  31.9  65.1%
  STaPLe w/ Judge Iter 1 (25.8k)  7.60  8.13  7.08  31.6  64.9%
  STaR Iter 2 (6.0k)  7.47  8.08  6.86  30.6  57.7%
  STaPLe w/ Rouge Iter 2 (6.0k)  7.70  8.15  7.25  34.6  66.0%
  STaPLe w/ Judge Iter 2 (5.7k)  7.68  8.15  7.21  34.1  65.6%
  STaR Iter 3 (6.1k)  7.51  8.10  6.91  31.5  61.0%
  STaPLe w/ Rouge Iter 3 (6.2k)  7.72  8.16  7.28  35.7  68.4%
  STaPLe w/ Judge Iter 3 (6.3k)  7.70  8.16  7.25  35.6  68.0%
Granite-3.1-8B-Instruct
  Initial Policy  7.83  8.59  7.08  30.2  –
  STaR Iter 1 (24.1k)  7.83  8.61  7.05  33.0  57.3%
  STaPLe w/ Rouge Iter 1 (24.1k)  7.98  8.66  7.30  36.2  64.9%
  STaPLe w/ Judge Iter 1 (20.9k)  7.93  8.66  7.20  36.0  65.2%
  STaR Iter 2 (5.4k)  7.86  8.63  7.10  34.7  59.5%
  STaPLe w/ Rouge Iter 2 (5.1k)  8.01  8.68  7.35  38.7  67.3%
  STaPLe w/ Judge Iter 2 (5.2k)  8.01  8.70  7.31  39.0  66.9%
  STaR Iter 3 (5.9k)  7.92  8.66  7.18  35.4  61.9%
  STaPLe w/ Rouge Iter 3 (5.4k)  8.06  8.74  7.39  39.4  69.9%
  STaPLe w/ Judge Iter 3 (6.3k)  8.07  8.76  7.38  40.4  70.2%
Qwen2.5-7B-Instruct
  Initial Policy  6.83  7.34  6.31  30.4  –
  STaR Iter 1 (30.9k)  6.85  7.39  6.31  34.5  61.0%
  STaPLe w/ Rouge Iter 1 (30.9k)  6.99  7.43  6.55  36.5  67.3%
  STaPLe w/ Judge Iter 1 (29.5k)  6.96  7.45  6.48  36.2  66.7%
  STaR Iter 2 (6.5k)  6.98  7.45  6.51  36.9  63.0%
  STaPLe w/ Rouge Iter 2 (6.5k)  7.10  7.46  6.74  38.9  68.4%
  STaPLe w/ Judge Iter 2 (6.5k)  7.05  7.48  6.63  38.1  67.7%
  STaR Iter 3 (7.1k)  7.08  7.58  6.59  37.6  66.4%
  STaPLe w/ Rouge Iter 3 (6.9k)  7.17  7.54  6.80  39.8  70.4%
  STaPLe w/ Judge Iter 3 (7.2k)  7.13  7.56  6.70  39.5  69.5%

M.3 Bayesian Hyperparameter Optimization for Clustering Distance Threshold

As discussed in Section 4.1, we use the deterministic
agglomerative clustering algorithm to ensure a fast, yet consistent assignment of clusters over the principle embeddings. However, this relies on a hyperparameter, δ, which we use to denote the Euclidean distance threshold under which clusters will be merged. As such, a lower threshold corresponds to a greater number of clusters, and vice versa, thus controlling the size of the yielded constitutions. This hyperparameter is currently set in a manual fashion, where the size and representative elements (medoids or modes) are inspected by the authors of this work and the threshold adjusted if needed; this resulted in thresholds of 6 (Granite) and 8 (Llama and Qwen) for iteration 1, which were subsequently decreased to 5 and 7, respectively, for iterations 2-4. However, it is desirable for this threshold to be adaptive, and to mathematically encode the target properties for a cluster to satisfy. Accordingly, we design an objective function consisting of
two terms over a clustering assignment: 1. the inter-medoid diversity and 2. the intra-cluster tightness. The former is the average cosine distance, i.e. one minus the cosine-similarity (abbreviated as "cossim" henceforth), between each pair of medoids, while the latter is the average cosine similarity of the points in each cluster to their own medoid. This can be written mathematically as follows:

$J(\delta) = \lambda \cdot \frac{2}{|C|(|C|-1)} \sum_{1 \le i < j \le |C|} [1 - \mathrm{cossim}(m_i, m_j)] + (1-\lambda) \cdot \frac{1}{|C|} \sum_{k=1}^{|C|} \frac{1}{|C_k|} \sum_{i \in C_k} \mathrm{cossim}(z_i, m_k)$

where $C = \mathrm{AggClustering}(\delta)$ is the set of clusters assigned by the Agglomerative Clustering algorithm at a threshold of δ. Given we value a balance between medoid diversity and intra-cluster tightness to the medoid for higher-quality assignments, we set λ = 0.5 to weigh both terms equally. We aim to search for a value of δ that yields a clustering C that maximizes this objective. We use the scikit-optimize package (Head et al., 2020) to perform Bayesian optimization via Gaussian Processes to search for an optimal value of δ over this function. This process performs Gaussian Process regression over seen instances, uses an expected improvement ($-\mathbb{E}[J(x) - J(x^+)]$) acquisition function to identify the next threshold to evaluate, then clusters and evaluates at the next chosen value of x, repeating this process iteratively. We use the L-BFGS algorithm over 30 evaluations. We evaluate the STaPLe algorithm with the Llama-8B and Granite-8B models at the chosen thresholds δ*_i for iteration i, which we also report below. Following from the results in Table 6, where the medoid label replacement scheme performs the best, we apply this to the clusters yielded using δ*_i.

Table 8: Analyzing the Constrained STaPLe algorithm performance over three iterations with optimal thresholds on the diversity-tightness objective searched over via Bayesian hyperparameter optimization. We use the medoids label replacement scheme, among the options in Appendix M.1.
Model | MT-Bench (avg) | MT-Bench (T1) | MT-Bench (T2) | AlpacaEval
Llama-3.1-8B-Instruct
  Initial Policy  7.46  8.09  6.83  26.9
  STaR Iter 1 (28.2k)  7.43  8.04  6.81  29.1
  STaPLe (δ_1 = 8.0) Iter 1 (28.2k)  7.63  8.14  7.11  31.9
  STaPLe (δ*_1 = 7.2) Iter 1 (28.2k)  7.64  8.14  7.14  31.8
  STaR Iter 2 (6.0k)  7.47  8.08  6.86  30.6
  STaPLe (δ_2 = 7.0) Iter 2 (6.0k)  7.70  8.15  7.25  34.6
  STaPLe (δ*_2 = 7.3) Iter 2 (6.1k)  7.72  8.16  7.28  34.6
  STaR Iter 3 (6.1k)  7.51  8.10  6.91  31.5
  STaPLe (δ_3 = 7.0) Iter 3 (6.2k)  7.72  8.16  7.28  35.7
  STaPLe (δ*_3 = 6.6) Iter 3 (6.4k)  7.75  8.18  7.33  35.4
Granite-3.1-8B-Instruct
  Initial Policy  7.83  8.59  7.08  30.2
  STaR Iter 1 (24.1k)  7.83  8.61  7.05  33.0
  STaPLe (δ_1 = 6.0) Iter 1 (24.1k)  7.98  8.66  7.30  36.2
  STaPLe (δ*_1 = 6.3) Iter 1 (24.1k)  7.98  8.65  7.31  36.0
  STaR Iter 2 (5.4k)  7.86  8.63  7.10  34.7
  STaPLe (δ_2 = 5.0) Iter 2 (5.1k)  8.01  8.68  7.35  38.7
  STaPLe (δ*_2 = 5.9) Iter 2 (5.2k)  8.02  8.68  7.36  38.8
  STaR Iter 3 (5.9k)  7.92  8.66  7.18  35.4
  STaPLe (δ_3 = 5.0) Iter 3 (5.4k)  8.06  8.74  7.39  39.4
  STaPLe (δ*_3 = 4.2) Iter 3 (5.9k)  8.08  8.75  7.41  39.2

Notably, the optimal thresholds drop in iteration 3, suggesting that as the number
of clusters decreases, a more permissive threshold suffices to balance diversity and cluster tightness. We find that the optimized thresholds result in very similar results, albeit with slight improvements across both MT-Bench turns, and thus the average score as well.

M.3.1 Diversity of Constitutions with Manually Selected Thresholds

To further study the claim that the original, hand-set thresholds are fairly well optimized, we can use this objective function J(δ) as an appropriate metric to study the quality of the clusterings yielded. As expected, the optimized thresholds improve diversity, by a sufficient margin to suggest that there exist multiple thresholds which would improve upon the manually set δ_i values, the best of which are these δ*_i values.

Table 9: Analyzing the constitutions yielded by the Constrained STaPLe algorithm, both with and without the Bayesian hyperparameter optimization process detailed above.

Model | J(δ)
Llama-3.1-8B-Instruct
  STaPLe (δ_1 = 8.0) Iter 1 (28.2k)  0.6437
  STaPLe (δ*_1 = 7.6) Iter 1 (28.2k)  0.6502
  STaPLe (δ_2 = 7.0) Iter 2 (6.0k)  0.6625
  STaPLe (δ*_2 = 7.3) Iter 2 (6.1k)  0.6732
  STaPLe (δ_3 = 7.0) Iter 3 (6.2k)  0.6889
  STaPLe (δ*_3 = 6.6) Iter 3 (6.4k)  0.7054
Granite-3.1-8B-Instruct
  STaPLe (δ_1 = 6.0) Iter 1 (24.1k)  0.6036
  STaPLe (δ*_1 = 6.3) Iter 1 (24.1k)  0.6151
  STaPLe (δ_2 = 5.0) Iter 2 (5.1k)  0.6241
  STaPLe (δ*_2 = 5.9) Iter 2 (5.2k)  0.6482
  STaPLe (δ_3 = 5.0) Iter 3 (5.4k)  0.6765
  STaPLe (δ*_3 = 4.2) Iter 3 (5.9k)  0.6894

N Qualitative Examples of Principle-Guided Self-Correction

N.1 IFEval Examples with STaPLe Iteration 4 Llama-3.1-8B-Instruct

Note that the "Initial Response:" tags in the examples below are added for illustrative purposes, and are not generated at inference-time; the other tags, "Principle:" and "Refined Response:", are generated, to clearly indicate self-correction to the user.

N.1.1 Principle: Conciseness and Clarity

Prompt: Write a short blog post about a trip to Japan using less than 300 words.
Initial Response: Japan is a country that never fails to amaze. From the bustling streets of Tokyo to the serene temples of Kyoto, every corner of this island nation offers a unique experience. Our journey began in Tokyo, a city that seamlessly blends the traditional and the modern. We marveled at the neon-lit skyscrapers of Shibuya, watched the world's busiest pedestrian crossing, and indulged in sushi at Tsukiji Fish Market. The city's vibrant energy was infectious, and we found ourselves captivated by its charm. Next, we headed to Kyoto, a city that feels like stepping back in time. The ancient temples, like Kinkaku-ji and Fushimi Inari, were a testament to Japan's rich history. The tranquil gardens and the ritual of tea ceremony were a stark contrast to the city's hustle and bustle. No trip to Japan would be complete without experiencing its natural beauty. We ventured to Hakone, a mountainous region known for its hot springs and stunning views of Mt. Fuji. The scenic train ride through the Japanese Alps was a highlight of our trip. Japan's culinary scene is as diverse as its landscapes. From the humble bento box to the exquisite kaiseki, every meal was a gastronomic
delight. The attention to detail and the emphasis on seasonal ingredients were evident in every dish. Our trip to Japan was more than just a vacation; it was a cultural immersion. The country's blend of tradition and modernity, its respect for nature, and its culinary artistry left an indelible mark on us. Japan, with its myriad charms, is a place that will forever hold a special place in our hearts. In less than 300 words, we've tried to encapsulate the essence of Japan - a country that is as complex as it is captivating. From its bustling cities to its serene temples, from its technological marvels to its natural wonders, Japan is a land of contrasts that offers something for every traveler.

Principle: Conciseness and Clarity

Refined Response: Japan, a land of contrasts, captivated us with its blend of tradition and modernity. Our journey began in Tokyo, where neon-lit skyscrapers and bustling streets of Shibuya contrasted with the serenity of Kinkaku-ji, or the Golden Pavilion. We savored sushi at Tsukiji Fish Market, a testament to Japan's culinary artistry. Next, Kyoto's ancient temples and tranquil gardens offered a stark contrast to Tokyo's vibrant energy. The Fushimi Inari shrine, with its thousands of vermilion torii gates, was a spiritual highlight. In Hakone, we experienced Japan's natural beauty. The scenic train ride through the Japanese Alps, coupled with the soothing hot springs, provided a serene escape. Japan's culinary scene, from humble bento boxes to exquisite kaiseki, showcased the country's respect for seasonal ingredients and meticulous attention to detail. Our trip to Japan was a cultural immersion, leaving an indelible mark. The country's contrasts - from bustling cities to serene temples, technological marvels to natural wonders - make it a must-visit destination for every traveler.

While the initial response is a reasonably well-written blog about Japan, it exceeds the 300 word limit, thus failing to follow the instruction.
The trained STaPLe model recognizes this intrinsically, invokes the "Conciseness and Clarity" principle by observing that concision is the primary lacking element of the initial response, and then produces a refined response. The resulting response falls well under the 300 word limit, while paraphrasing much of the prior blog post's core elements, thus accomplishing the task.

N.1.2 Principle: Specificity and Completeness

Prompt: I'm interested in a college with open enrollment and a regional accreditation. Which college would you recommend? Don't include the keywords "DuPage" and "Dade" in your response. Let's make it a constrained writing problem: be sure the letter p appears at least 15 times in your response.

Initial Response: I'd recommend considering the University of the People (UOP).

Principle: Specificity and Completeness

Refined Response: I'd recommend considering the University of the People (UOP), a non-profit institution with open enrollment and regional accreditation by the Higher Learning Commission (HLC). UOP offers affordable, accredited bachelor's and master's degrees in business administration, computer science, and health science. Their tuition-free model and flexible schedule make it an excellent choice for non-traditional students. UOP's open enrollment policy means that anyone with a high school diploma or equivalent can
apply, regardless of previous academic experience. This inclusivity aligns with your request for a college with open enrollment. The university’s regional accreditation by the HLC ensures that its degrees are recognized and respected by employers, academic institutions, and professional organizations. This accreditation is crucial for maintaining the value and transferability of your degree. UOP’s online, asynchronous learning format allows students to balance their education with work and personal commitments. This flexibility is particularly beneficial for adult learners and those in remote areas. Moreover, UOP’s commitment to affordability is commendable. As a non-profit, the university offers tuition-free degrees, making higher education accessible to a broader range of students. In summary, the University of the People is a college with open enrollment and regional accreditation that meets your criteria. Its flexible, affordable, and accredited programs make it an excellent choice for students seeking a high-quality education without the burden of high tuition fees.

The initial response produces the name of a college, but does not provide a particularly informative response. It also does not meet the criterion of including the letter "p" at least 15 times in its response. The model intrinsically recognizes the need for greater specificity in its answer, and produces a more substantial and useful revised response.

N.1.3 Principle: Directness and Assertiveness

Prompt: Write a cover letter for a job and end with exactly "Call me at 631-481-4867" No other words should follow that phrase.

Initial Response: Dear Hiring Manager, I am writing to express my interest in the [Job Title] position at [Company Name], as advertised. With a Bachelor’s degree in [Relevant Field] and over [Number of Years] years of experience in [Relevant Skills], I am confident that I would be a valuable addition to your team.
In my previous role at [Previous Company Name], I was responsible for [Key Responsibility 1], which allowed me to hone my skills in [Skill 1]. I also led a team of [Number of Team Members] in [Project/Initiative], resulting in [Achievement]. I am particularly drawn to [Company Name] because of its reputation for [Company Value/Mission], which aligns with my own professional goals. I am excited about the opportunity to bring my unique blend of skills and experience to [Company Name] and am eager to contribute to your team’s success. I am available at your earliest convenience for an interview and can be reached at 631-481-4867. Thank you for considering my application. I look forward to the opportunity to discuss how I can contribute to your team. Sincerely, [Your Name] Call me at 631-481-4867.

Principle: Directness and Assertiveness

Refined Response: Dear Hiring Manager, I am excited to apply for the [Job Title] position at [Company Name]. With a Bachelor’s in [Relevant Field] and [Number of Years] years of experience in [Relevant Skills], I am confident in my ability to excel in this role. In my previous role at [Previous Company Name], I honed my skills in [Skill 1] through [Key Responsibility 1], and led a team of [Number of Team Members] to achieve [Achievement]. I am drawn to [Company Name] because of
its commitment to [Company Value/Mission], which resonates with my professional aspirations. I am eager to bring my unique blend of skills and experience to [Company Name] and contribute to your team’s success. I am available for an interview at your earliest convenience and can be reached at 631-481-4867. Thank you for considering my application. I look forward to discussing how I can contribute to your team. Sincerely, [Your Name] Call me at 631-481-4867.

This minor rewrite reflects a more assertive tone, as indicated in the principle invoked by the model, "Directness and Assertiveness". Specifically, when comparing phrasing such as "discussing how I can contribute ..." as opposed to "the opportunity to discuss ...", the former is evidently more direct and confident in pursuing this job opportunity.

N.2 Granite-3.1-8B-Instruct IFEval Examples

N.2.1 Principle: "Urgency and Action Steps"

Prompt: Please rewrite the following sentence in a serious tone, similar to the president of the united states, and without an upward bias or inflation alarmism. Please also include the keywords "ink" and "memoirs" in the sentence: "The main points to consider are the following: First, the ECB’s own forecasts, which currently have a strong upward bias, must be revised downwards. Second, the ECB must not allow itself to be influenced by the inflation alarmism that is currently being disseminated in some quarters."

Initial Response: In the context of our ongoing analysis, it is imperative to acknowledge the significance of the European Central Bank’s projections, which have been marked by a pronounced upward trend. It is crucial that these forecasts are recalibrated to reflect a more balanced perspective. Furthermore, it is essential that the ECB remains steadfast in its decision-making process, unswayed by the inflation alarmism that has been circulating in certain circles.
Principle: Urgency and Action Steps

Refined Response: In the course of our meticulous examination, it is of paramount importance to recognize the profound implications of the European Central Bank’s projections, which have been characterized by a marked upward trajectory. It is incumbent upon us to adjust these forecasts to better reflect a more nuanced and balanced outlook. Moreover, it is of critical significance that the ECB maintains its resolute stance in the face of the inflation alarmism that has been pervasive in certain sectors. This is a matter of utmost urgency, as we endeavor to navigate the intricate complexities of our economic terrain. In this regard, I have recently put ink to paper in my memoirs, which offer an in-depth exploration of our economic policies and the challenges we have encountered. These memoirs stand as a testament to our unwavering dedication to transparency and accountability, and I urge all to engage with them as we forge ahead in our economic journey.

While the initial response was good, using verbiage such as "imperative", "crucial", "essential" and "steadfast", it doesn’t really make a call to action while avoiding an "alarmist" tone. It also does not reflect the keywords "ink" and "memoirs", as desired in the prompt. The model invokes the principle "urgency and action steps", which informs the call to action, along with a changed
tone. Despite seeming to reflect a slight "upward bias", using phrases like "meticulous examination" and "maintains its resolute stance," it balances this with phrases such as "critical significance" and "incumbent upon us", which retain seriousness and a balanced outlook, as desired.

N.2.2 Principle: Tone and Communication

Prompt: Can you give me a zany, bullet point TLDR of this article: https://en.wikipedia.org/wiki/Dota_2 Make it zany, but do not include the keywords ’icefrog’, ’blizzard’, ’lawsuit’ in the response.

Initial Response:
- Dota 2 is a game where you play as a hero and battle it out with other players.
- It’s like a superhero showdown, but with magic and spells!
- You can choose from over 100 heroes, each with their own unique abilities.
- The game is played on a map called the "Aeon of Strife" which is like a giant battlefield.
- You and your team have to destroy the enemy’s base, called the "Ancient", to win.
- There are also creeps, which are like minions that help you fight.
- You earn gold by killing creeps and enemies, which you can use to buy items to make your hero even more powerful.
- The game is super competitive, with professional teams and tournaments.
- It’s like the Olympics of video games, but with more magic and less spandex.
- Dota 2 is free to play, but you can buy cosmetic items to customize your hero’s look.
- It’s like dressing up your action figure, but with more strategy and less plastic.
- Dota 2 has a huge community of players, and they love to stream their games on Twitch.
- It’s like watching a live-action superhero show, but with more keyboards and less popcorn.
- Dota 2 is a game that will make you feel like a powerful wizard, without the risk of turning into a toad.

Principle: Tone and Communication

Refined Response:
- Dota 2 is a wild game where you pick a hero and duke it out with other players!
- It’s like a superhero smackdown, but with magic and spells!
- You can choose from over 100 heroes, each with their own special powers.
- The game is played on a map called the "Aeon of Strife" which is like a giant playground.
- You and your team gotta destroy the enemy’s base, called the "Ancient", to win.
- There are also creeps, which are like tiny sidekicks that help you fight.
- You earn gold by defeating creeps and enemies, which you can use to buy cool stuff to make your hero even more awesome.
- The game is super competitive, with professional teams and tournaments.
- It’s like the Super Bowl of video games, but with more magic and less helmets.
- Dota 2 is free to play, but you can buy fancy stuff to make your hero look super cool.
- It’s like dressing up your action figure, but with more strategy and less plastic.
- Dota 2 has a huge community of players, and they love to show off their skills on Twitch.
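The keyword-exclusion and letter-count constraints appearing in these IFEval-style prompts can likewise be checked programmatically. A minimal sketch (function names are illustrative, and case-insensitive matching is an assumption rather than necessarily the benchmark's exact rule):

```python
def excludes_keywords(response: str, forbidden: list[str]) -> bool:
    """True if none of the forbidden keywords appear (case-insensitive substring test)."""
    lowered = response.lower()
    return not any(word.lower() in lowered for word in forbidden)

def letter_appears_at_least(response: str, letter: str, minimum: int) -> bool:
    """True if the given letter occurs at least `minimum` times (case-insensitive)."""
    return response.lower().count(letter.lower()) >= minimum

# The zany TLDR must avoid three keywords; the college answer needs many p's.
assert excludes_keywords("Dota 2 is a wild game!", ["icefrog", "blizzard", "lawsuit"])
assert not excludes_keywords("Blizzard made a different game", ["icefrog", "blizzard", "lawsuit"])
assert letter_appears_at_least("People apply to nonprofit programs", "p", 5)
```

Because such checks are deterministic, refinements like those above can be scored for constraint compliance without any model-based judging.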