Algebra, Prealgebra, Geometry, Counting & Probability, and Number Theory.

4.2 Experiment Setup

We choose the four models for our study from four popular LLM families, namely Gemini, GPT-4, Llama, and Mixtral. More details on the models used can be found in Section A.2.

4.3 Evaluation Method

To evaluate the reliability of the MAPLE score, we perform a thorough validation of the judge LLM predictions. We manually annotate a representative sample from the MATH dataset, consisting of incorrect mathematical answers generated by the various LLM families. A multi-label approach is employed to comprehensively capture the types of errors present in the responses. The error labels predicted by the judge LLM are then compared against these human-annotated labels. The resulting alignment accuracy can be found in Section A.3.

4.4 Results

4.4.1 Error Classification

Based on the clustering of the LLM self-reflections for incorrect answers, we obtained the following error labels:
1. Complete Misunderstanding. The model completely fails to understand the question and its requirements.
2. Partial Misunderstanding. The model partially fails to understand the question or its requirements.
3. Incorrect Method. The model applies a concept with a correct formula, but the concept is unrelated to the given question.
4. Incorrectly Applied Method. The model chooses the right concept with the correct formula, which is related to the given question and can be used to solve it, but applies it incorrectly.
5. Calculation Error. Errors in arithmetic calculations.
6. Incoherent Output. Junk text with repeated characters or phrases.
7. No Solution. Failure to reach a final answer.

The error labels are provided as prompts to the judge LLM, which identifies the errors present in the generated mathematical answers. These identified errors are subsequently used for computing the MAPLE score.

4.4.2 MAPLE Score Computation

Figure 3: Comparison of LLM performance across difficulty levels on the MATH dataset. Level 1 represents the easiest and Level 5 the toughest math problems. We observe a correlation between final answer accuracy and the degree of incorrectness represented by the MAPLE score.

We evaluated the mathematical answers generated by various LLMs on the MATH dataset using our proposed approach. The results, categorized by difficulty level, are presented in Figure 3. The left graph demonstrates that as the difficulty level increases, accuracy declines across all models. Conversely, the right graph shows that the MAPLE score rises with increased difficulty, with the highest MAPLE score observed for the Llama model. This suggests that the Llama model exhibits the most significant issues in mathematical reasoning. Additionally, we performed a topic-wise analysis of the LLM-generated answers, the results of which are provided in Section A.4.

5 Future Work

Future efforts will expand the evaluation framework to include a broader range of error types, such as topic-specific reasoning issues, and incorporate ranking of error labels for more nuanced scoring. Addressing hallucination in LLMs through fine-tuning for evaluation-specific tasks and exploring alternatives to LLMs as judges will enhance alignment with human judgment. Testing the framework on diverse models, datasets, and interdisciplinary reasoning tasks will validate its robustness. Additionally, refining methods to reduce redundancy
and improve logical coherence in reasoning steps will be critical for advancing LLMs' mathematical problem-solving capabilities.

References

[1] Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, and Pengfei Liu. Evaluating mathematical reasoning beyond accuracy. arXiv preprint arXiv:2404.05692, 2024.
[2] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024.
[3] Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798, 2023.
[4] Ryo Kamoi, Yusen Zhang, Nan Zhang, Jiawei Han, and Rui Zhang. When can LLMs actually correct their own mistakes? A critical survey of self-correction of LLMs. arXiv preprint arXiv:2406.01297, 2024.
[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[6] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. NeurIPS, 2021.
[7] Shuai Peng, Ke Yuan, Liangcai Gao, and Zhi Tang. MathBERT: A pre-trained model for mathematical formula understanding, 2021.

A Appendix

A.1 Error Label Penalty Weights

Table 1: Error Label Penalty Weights
Error Label                   Penalty Weight
Complete Misunderstanding     0.95
Partial Misunderstanding      0.75
Incorrectly Applied Method    0.40
Calculation Error             0.10
Incoherent Output             1.00
No Solution                   1.00

A.2 Experiment Setup

The four LLMs chosen for our study are Gemini-1.5-Flash from Gemini, GPT-4o from GPT-4, Llama-3-8B-Instruct (https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) from Llama, and Mixtral-8x22B (https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) from Mistral. All fine-tuned open-source models were taken from HuggingFace, while the respective APIs were used for GPT-4o and Gemini-1.5-Flash. Outputs were generated with the temperature set to 1 for Gemini-1.5-Flash and GPT-4o, 0.3 for Mixtral-8x22B, and 0.05 for Llama-3-8B-Instruct. To cluster the failing points and compile a set of error labels L, we use MathBERT [7] (https://huggingface.co/tbs17/MathBERT), a BERT model fine-tuned on math word problems. Similarly, we use Mathstral-7B (https://huggingface.co/mistralai/Mathstral-7B-v0.1), a fine-tuned Mistral model for mathematical problem solving from HuggingFace, as our judge LLM.

A.3 Evaluation of LLM as Judge

Figure 4: Comparison of accuracy of the LLM as a judge in predicting error labels for generated solutions. We observe that most predictions match human annotations for a representative sample of 105 examples evenly distributed across difficulty levels and topics.

A.4 Topic-wise Evaluation Scores

Figure 5: Comparison of LLM performance across math topics on the MATH dataset. We observe that most models perform better at easier topics such as geometry while underperforming at tougher topics such as calculus.
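The excerpt above does not spell out how the identified error labels and the penalty weights in Table 1 are aggregated into a single MAPLE score. The sketch below is therefore only one plausible reading, assuming each incorrect answer is penalized by the largest weight among its identified errors and that scores are averaged over a set of answers; the function and label names are ours, not the authors'.

```python
# Illustrative sketch of a MAPLE-style score from judge-predicted error labels
# and the penalty weights of Table 1. The aggregation rule is assumed, not
# taken from the paper. Table 1 lists no weight for "Incorrect Method", so it
# is omitted here.

PENALTY = {
    "complete_misunderstanding": 0.95,
    "partial_misunderstanding": 0.75,
    "incorrectly_applied_method": 0.40,
    "calculation_error": 0.10,
    "incoherent_output": 1.00,
    "no_solution": 1.00,
}

def answer_penalty(error_labels):
    """Penalty for one answer: the largest weight among its errors, 0 if none."""
    return max((PENALTY[e] for e in error_labels), default=0.0)

def maple_score(per_answer_labels):
    """Average penalty over a collection of answers (higher = worse reasoning)."""
    if not per_answer_labels:
        return 0.0
    return sum(answer_penalty(labels) for labels in per_answer_labels) / len(per_answer_labels)

# Example: one answer with a calculation error, one with no solution.
print(maple_score([["calculation_error"], ["no_solution"]]))  # 0.55
```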
arXiv:2505.15624v1 [cs.LG] 21 May 2025

Mechanistic Insights into Grokking from the Embedding Layer

H. V. AlquBoj* (MBZUAI, Abu Dhabi, UAE)
Hilal AlQuabeh* (MBZUAI, Abu Dhabi, UAE), hilal.alquabeh@mbzuai.ac.ae
Velibor Bojković* (MBZUAI, Abu Dhabi, UAE), velibor.bojkovic@mbzuai.ac.ae
Munachiso Nwadike (MBZUAI, Abu Dhabi, UAE), munachiso.nwadike@mbzuai.ac.ae
Kentaro Inui (MBZUAI, RIKEN, Abu Dhabi, UAE), kentaro.inui@mbzuai.ac.ae
*Equal contribution. Amalgamation of first authors' names. Preprint.

Abstract

Grokking, a delayed generalization in neural networks after perfect training performance, has been observed in Transformers and MLPs, but the components driving it remain underexplored. We show that embeddings are central to grokking: introducing them into MLPs induces delayed generalization in modular arithmetic tasks, whereas MLPs without embeddings can generalize immediately. Our analysis identifies two key mechanisms: (1) embedding update dynamics, where rare tokens stagnate due to sparse gradient updates and weight decay, and (2) bilinear coupling, where the interaction between embeddings and downstream weights introduces saddle points and increases sensitivity to initialization. To confirm these mechanisms, we investigate frequency-aware sampling, which balances token updates by minimizing gradient variance, and embedding-specific learning rates, derived from the asymmetric curvature of the bilinear loss landscape. We prove that an adaptive learning rate ratio, $\frac{\eta_E}{\eta_W} \propto \frac{\sigma_{\max}(E)}{\sigma_{\max}(W)} \cdot \frac{f_W}{f_E}$, mitigates bilinear coupling effects, accelerating convergence. Our methods not only improve grokking dynamics but also extend to broader challenges in Transformer optimization, where bilinear interactions hinder efficient training.

1 Introduction

The phenomenon of grokking, in which a neural network exhibits delayed generalization after achieving close to or perfect training performance, has emerged as a compelling topic in deep learning. Initially observed in Transformer architectures by [19], grokking presents a puzzling challenge where models that seem to overfit to training data eventually demonstrate remarkable generalization capabilities after extensive training. Subsequent research has identified this phenomenon across various architectures, including convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) [13, 12]. Despite growing interest, the underlying mechanisms of grokking remain elusive.

Existing studies have sought to unravel grokking by exploring its connection to delayed robustness, local complexity, and model architecture [3, 6]. For instance, [6] suggest that grokking coincides with a phase transition in the linear regions of a model's input space, leading to robust partitions that enable generalization after extended training. Others have attributed grokking to emergent circuit behaviors or optimization dynamics [17, 21]. However, these studies often focus on high-level phenomena, overlooking the role of specific components, such as embedding layers, in shaping the dynamics of grokking.

In this work, we argue that embedding layers are central to understanding the grokking phenomenon. By introducing embedding layers into MLP architectures, we observe clear grokking patterns even in simple modular arithmetic tasks, such as modular addition. Interestingly, MLPs without embedding layers can often generalize without grokking, suggesting that embeddings introduce unique dynamics that delay generalization.
Our analysis identifies two critical factors that influence these dynamics: 1. Embedding update dynamics: Embedding parameters are updated through gradient descent and weight decay. However, embeddings corresponding to tokens not present in a given batch are updated solely via weight decay or residual effects from previous gradients in optimizers like Adam. This imbalance delays stabilization and can hinder
training, particularly for low-probability tokens. 2.Coupling with the first-layer weights: When embeddings are multiplied with the weights of the first layer, they form a bilinear interaction. This coupling introduces structural complexity into the optimization landscape, making the process more susceptible to saddle points and increasing the sensitivity to initialization. Building on these insights, we propose two strategies to address and prove the hypotheses introduced for embedding layers. First : A refined sampling methodology that ensures more uniform updates across all embeddings, mitigating frequency imbalance. Second : A learning rate adjustment for embeddings, setting it higher than that of the rest of the model. This adjustment counteracts the coupling effect with the first-layer weights, enabling faster stabilization and reducing the risk of optimization stagnation. Our experiments demonstrate that these strategies not only accelerate the grokking process but also enable generalization in scenarios where traditional approaches fail. Additionally, the bilinear coupling observed in embedding-based MLPs highlights broader challenges in optimizing Transformer architectures. Transformers, which rely on multiplicative interactions in attention mechanisms, exhibit similar issues due to the bilinearity of query, key, and value projections. While softmax attention and scaling by the dimensionality dhelp smooth the optimization landscape, these mechanisms may still struggle with increased saddle points in certain layers [ 5]. In summary, this work contributes to the understanding of grokking and its broader implications for deep learning by: •Highlighting the unique role of embedding layers in delaying generalization and their coupling with the first layer in MLPs. •Proposing strategies to accelerate grokking, including refined sampling and embedding- specific learning rates. •Connecting the challenges in embedding-based optimization to broader issues in Transformer training, such as bilinearity, saddle points, and the effectiveness of adaptive optimizers like Adam. By bridging insights from grokking and Transformer optimization, we provide a unified perspective on the interplay between embedding dynamics, optimization challenges, and generalization. 2 Related Work The phenomenon of grokking, where generalization emerges abruptly after prolonged overfitting, was first observed in transformers [ 19] and later extended to CNNs and ResNets [ 13,12], indicating it is architecture-agnostic. Various explanations have been proposed. [ 7] attribute it to phase transitions in local complexity (“delayed robustness”), while others link it to circuit efficiency [ 17,21,11]. Though insightful, these perspectives don’t fully explain the delayed generalization. Connections to double descent have also been explored [1, 16], but grokking’s dynamics remain distinct. The closest work to ours studies modular addition using permutation-equivariant models [ 15], where one-hot inputs interact with the first layer as a fixed embedding. Their analysis, however, is limited to modular tasks and specific activations. In contrast, we generalize across datasets and highlight how embedding layers, especially when trainable, interact bilinearly with downstream weights, affecting optimization dynamics. 2 Related studies like Tensor Programs IV [ 24] prescribe per-layer scaling based on width, assuming independent layer evolution. 
Our setup differs: the embedding layer's updates depend on both its own width and the spectrum of the coupled layer. Prieto et al. [20] connect delayed generalization to numerical instability (Softmax Collapse), proposing solutions that complement our focus on structural coupling and
gradient imbalance. Unlike works that focus on final representations [4], we analyze the embedding layer's evolving role during training. Even with one-hot inputs, its interaction with the first linear layer forms a learnable embedding mechanism. Concurrent work shows that transferring embeddings from small to large models can accelerate grokking [23]; while we share this motivation, we also observe in preliminary trials that transferring other MLP layers may offer similar benefits.

Finally, the bilinear coupling we analyze in MLPs parallels challenges in Transformer architectures, where attention mechanisms introduce similar multiplicative dynamics. Prior work highlights how adaptive optimizers like Adam outperform SGD due to gradient noise and curvature heterogeneity [25, 10, 26]. Our findings help bridge these perspectives by showing how embedding-layer coupling shapes optimization and generalization.

3 Preliminaries

3.1 Embedding Layers

Figure 1: Heatmaps for (a) the additive group (mod 6) and (b) the multiplicative group (mod 7). The two groups are isomorphic despite their differing appearances.

The Transformer model [22] utilizes a self-attention mechanism to capture dependencies between tokens. In this framework, embeddings map input tokens to high-dimensional vectors, which are processed through attention layers. These embeddings help the model capture contextualized representations. In contrast, MLPs rely on fully connected layers without attention mechanisms. We investigate the role of embeddings in MLPs, specifically how they improve model generalization. The core contribution of this work is to examine the role of embedding layers in MLPs. These layers map discrete tokens to dense, high-dimensional vectors, enabling models to handle non-linear tasks like modular arithmetic. Even with one-hot inputs, as studied in theoretical settings [2, 15], the first weight matrix effectively functions as a learned embedding. Thus, embeddings, whether explicit or implicit, play a central role in shaping model dynamics. While commonly associated with Transformers, we focus on MLPs as a simpler and more interpretable setting. MLPs avoid the added complexity of self-attention while still exhibiting phenomena like grokking. Importantly, the bilinear coupling between embeddings and downstream weights, central to our analysis, also arises in Transformers but is further complicated by attention. Studying MLPs allows us to isolate and understand this coupling in a clean, controlled environment.

3.2 Algorithmic Datasets and Modular Arithmetic

Algorithmic datasets are synthetic datasets carefully constructed with controlled mathematical properties, typically involving operations over finite sets such as modular addition or multiplication. One well-known example is the modular arithmetic dataset studied by [19], where the goal is to uncover relationships between binary inputs and produce consistent outputs based on these operations. For instance, given inputs a and b, the model is tasked to compute $(a+b) \bmod P$ or $(a \times b) \bmod P$, where $P$ is a prime number, and both inputs and outputs are constrained within $\{0, 1, \ldots, P-1\}$ (refer to Figure 1).

This dataset highlights the challenging nature of generalization in grokking: the relationship between inputs is defined purely by a deterministic operation, not by a probabilistic distribution. Unlike typical machine learning datasets, where examples are drawn from an underlying (often unknown) data distribution, algorithmic datasets consist of a finite and complete set of all possible input-output combinations. In such cases, there is no statistical "distribution" in the conventional sense; instead, the generalization task relies on uncovering the underlying relationship between inputs, which demands that a model internalize the algorithm itself. Moreover, any hypothesis consistent with the training examples can initially seem plausible from a statistical perspective, as no known distribution governs the data. The difficulty of generalization thus lies not in interpolating unseen samples but in discovering the underlying relation, making it a fundamentally different task.

We note that there is an equivalence between modular addition and modular multiplication in certain settings. Namely, given a prime number $p$, the group of modular addition $(\{0, 1, \ldots, p-2\}, +)$, where addition is performed modulo $p-1$, and the group of modular multiplication $(\{1, \ldots, p-1\}, *)$, where multiplication is performed modulo $p$, are isomorphic. Both groups have the same number of elements (namely $p-1$) and are cyclic: there exists an element $g$, called a generator, such that every other element can be written as $g * \cdots * g$, where $*$ is the group operation. In the additive group the generators are the elements coprime to $p-1$, while in the multiplicative group they are the primitive roots modulo $p$ (see Figure 1). The embedding layer strips the input group elements of their numerical meanings and assigns a general, abstract vector to each element. In this way, training on modular addition or multiplication presents no difference for an MLP (or other architectures) with an embedding layer. In contrast, the MLP without the embedding layer is able to fit and generalize on modular addition, while it completely fails on modular multiplication.

3.3 Problem Setup and Motivations

Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ represent an algorithmic dataset, where each $x_i$ is an input token sequence (e.g., a, b, operation, =), and $y_i$ is the output derived from an operation modulo a positive integer $P$. The task is to learn a mapping $f_\theta : \mathcal{X} \to \mathcal{Y}$ parameterized by $\theta$, capable of generalizing to unseen samples from $\mathcal{D}_{\text{test}}$. To process inputs effectively, we tokenize them as sequences of their digit representations, as the model does not inherently interpret numerical values. Each operand a and b is assigned a token in the range 0 to $P-1$, while the operation and equality symbols are represented by tokens $P$ and $P+1$, respectively. For instance, the modular arithmetic expression $(3 + 2) \bmod 5 = 0$ is tokenized as $[3, 5, 2, 6, 0]$.

Embedding layers in models provide a dense representation of tokens. However, delayed updates to embeddings for infrequent tokens can significantly impact convergence and generalization. Our work explores these dynamics, with a focus on the impact of $p_i$, the $i$-th token sampling probability, and proposes adjustments to improve convergence.

We investigate the use of embeddings in MLPs for algorithmic tasks. We started by training an MLP on modular addition and multiplication datasets, comparing setups with and without embedding layers; a minimal sketch of the dataset construction is given below.
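To make the setup concrete, the following is a small sketch of how such a dataset can be constructed and tokenized, following the token conventions above (operands 0..P-1, operation token P, equality token P+1); the helper names are ours.

```python
# Minimal sketch (our own helper names): build the full (a op b) mod P dataset
# and tokenize each example as [a, op_token, b, eq_token] with the result as label.
import itertools

def build_modular_dataset(P=97, op="add"):
    op_token, eq_token = P, P + 1            # operation and '=' symbols
    data = []
    for a, b in itertools.product(range(P), repeat=2):
        if op == "add":
            y = (a + b) % P
        elif op == "mul":
            y = (a * b) % P
        else:
            raise ValueError(op)
        tokens = [a, op_token, b, eq_token]  # model input (4 tokens)
        data.append((tokens, y))             # y is the class label in {0,...,P-1}
    return data

dataset = build_modular_dataset(P=5, op="add")
print(dataset[17])  # ([3, 5, 2, 6], 0) -> matches the (3 + 2) mod 5 = 0 example
```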
MLP Without Embeddings. In this setup, input tokens (a, b, the operation (P), and the equality sign (P+1)) are encoded directly into a 4-dimensional input vector. The MLP processes these inputs as:

$$h_1 = \sigma(W_1 x + b_1), \quad h_2 = W_2 h_1 + b_2, \quad \hat{y} = \mathrm{Softmax}(h_2), \qquad (1)$$

where $x \in \mathbb{R}^4$ is the encoded input vector (with first and third entries a and b, respectively), $W_1, W_2$ are weight matrices, $b_1, b_2$ are biases, $\sigma$ is the ReLU activation function, and $\hat{y}$ represents the predicted output. This configuration demonstrates that the MLP can fit the addition task with ease, but struggles to generalize multiplication. This difficulty arises because multiplication modulo $P$ is not linearly separable, as evident in the non-trivial patterns in Figure 1.

MLP With Embeddings. To overcome the challenges of non-linear separability, we introduced an embedding layer. Each token $x$ is mapped to a dense vector $e_x$ through an embedding matrix $E \in \mathbb{R}^{V \times d}$, where $d$ is the embedding dimension. Our input consists of 4 token embeddings of the form $\hat{e} = [e_i, e_{'*'}, e_k, e_{'='}]^\top$, and the modified forward pass is:

$$h_1 = \sigma(W \hat{e} + b_1), \quad h_2 = W_2 h_1 + b_2, \quad \hat{y} = \mathrm{Softmax}(h_2). \qquad (2)$$

Adding embeddings allows the model to capture more expressive input representations. With this setup, we observed that the model generalized well to both addition and multiplication tasks, but with a delayed generalization for multiplication. This delay corresponds to the grokking phenomenon, which appears as a "trapezoid pattern" in performance plots: a phase of memorization followed by a sudden leap in test accuracy, as illustrated in Figure 2. These observations motivate a deeper analysis of embedding dynamics during training. In particular, we investigated gradient heatmaps to understand the role of embeddings in delaying generalization. By visualizing gradient magnitudes across training epochs, we point out that embeddings receive smaller updates compared to the other weights of the model, potentially causing grokking. This investigation helps establish a connection between embedding behavior and the observed generalization delays.

Figure 2: Training and validation accuracies of the MLP model on modular arithmetic tasks, trained with Adam. Left two: addition task, without (first) and with (second) embeddings. Right two: multiplication task, without (third) and with (fourth) embeddings. In the embedding-free cases, training and validation accuracies increase together only for addition; multiplication fails to generalize. In contrast, models with embeddings reach 100% training accuracy in both tasks, but only begin generalizing after a delay, exhibiting the grokking phenomenon.
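To make Equations (1)-(2) concrete, here is a minimal PyTorch sketch of the two setups, using the hyperparameters reported in Section 5 (embedding dimension 128, hidden width four times the embedding dimension, output dimension P = 97). The class names are ours, and the hidden width of the embedding-free model is an assumption chosen to mirror the embedding case.

```python
# Minimal PyTorch sketch of the two setups in Eqs. (1)-(2); class names are ours.
import torch
import torch.nn as nn

P, d = 97, 128          # modulus / embedding dimension (Section 5)
V = P + 2               # vocabulary: operands 0..P-1, operation token P, '=' token P+1

class MLPNoEmbedding(nn.Module):
    """Eq. (1): the 4 token ids are fed directly as a 4-dimensional vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 4 * d), nn.ReLU(), nn.Linear(4 * d, P))

    def forward(self, tokens):              # tokens: (batch, 4) integer ids
        return self.net(tokens.float())     # logits; Softmax is folded into the loss

class MLPWithEmbedding(nn.Module):
    """Eq. (2): each token is mapped to a d-dim vector; the 4 vectors are concatenated."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, d)                        # E in R^{V x d}
        self.net = nn.Sequential(nn.Linear(4 * d, 4 * d), nn.ReLU(), nn.Linear(4 * d, P))

    def forward(self, tokens):              # tokens: (batch, 4) integer ids
        e_hat = self.emb(tokens).flatten(1)                  # (batch, 4d)
        return self.net(e_hat)

model = MLPWithEmbedding()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)  # Section 5 settings
```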
4 Main Results

Our methodology investigates the dynamics of embedding layers within MLPs to address challenges in generalization, particularly in the context of algorithmic tasks. The key contributions include: (1) exploring the novel role of embedding layers attached to MLP architectures, (2) examining the impact of the embedding sampling probability $p_i$ on training dynamics, and (3) understanding how initialization and the coupling of embedding and weight matrices affect learning efficiency. These factors contribute to the grokking phenomenon, where generalization is delayed during training.

4.1 Embedding Dynamics

Let the loss function of the model be $\mathcal{L}(\theta, E)$, where $\theta$ denotes the model parameters other than the embedding weights. Let $e_{i,t}$ denote the embedding vector for token $i$ at step $t$. Under stochastic gradient descent (SGD) with weight decay $\lambda$ (assuming the SGD update rule without momentum), the embedding update rule is:

$$e_{i,t+1} - e_{i,t} = -\eta \lambda e_{i,t} - \eta \nabla_{e_{i,t}} \mathcal{L}, \qquad (3)$$

where $\eta$ is the learning rate and $\nabla_{e_i} \mathcal{L}$ is the gradient. Token embeddings are updated using the corresponding gradients only when the associated tokens appear in a batch. Assume that token $i$ is sampled in a batch with probability $p_i$. Consequently, taking into account the randomness of batch sampling, the expected update can be expressed as:

$$\mathbb{E}[e_{i,t+1} - e_{i,t}] = -\eta \lambda e_{i,t} - \eta p_i \nabla_{e_{i,t}} \mathcal{L}. \qquad (4)$$

Figure 3: Gradient heat maps of the MLP model at random optimization steps. Sparse columns in the embedding gradients reflect the absence of certain tokens in sampled batches, leading to uneven learning dynamics and contributing to delayed generalization.

To summarize, the sampling probability $p_i$ directly influences the gradient dynamics of the embedding layer. While gradients contribute to tokens only probabilistically, weight decay affects all embeddings uniformly, leading to imbalances in parameter updates. This dynamic, visualized in Figure 3, highlights the need for a deeper understanding of how $p_i$ affects convergence.

To analyze the reduction of the loss, we assume that the model's overall loss function $\mathcal{L}(\theta, \{e_i\})$ is $\beta$-smooth. This means it satisfies the following inequality for all updates:

$$\mathcal{L}(\theta_{t+1}, \{e_{i,t+1}\}) \le \mathcal{L}(\theta_t, \{e_{i,t}\}) + \langle \nabla \mathcal{L}, \Delta \rangle + \frac{\beta}{2} \|\Delta\|^2,$$

where $\Delta = (\theta_{t+1} - \theta_t,\; e_{i,t+1} - e_{i,t})$. Denote $\mathcal{L}_t := \mathcal{L}(\theta_t, \{e_{i,t}\})$; taking expectations over the randomness of batch sampling leads to the following expected update:

$$\mathbb{E}[\mathcal{L}_{t+1} - \mathcal{L}_t] \le \nabla_{\theta_t} \mathcal{L}^\top (\theta_{t+1} - \theta_t) + \sum_{i=1}^{V} \nabla_{e_{i,t}} \mathcal{L}^\top \, \mathbb{E}[e_{i,t+1} - e_{i,t}] + \frac{\beta}{2} \|\Delta\|^2. \qquad (5)$$

Substituting the embedding update from Equation (4) into the smoothness inequality gives

$$\mathbb{E}[\mathcal{L}_{t+1} - \mathcal{L}_t] \le \nabla_{\theta_t} \mathcal{L}^\top (\theta_{t+1} - \theta_t) - \eta \sum_{i=1}^{V} \left( p_i \|\nabla_{e_{i,t}} \mathcal{L}\|^2 + \lambda\, e_{i,t}^\top \nabla_{e_{i,t}} \mathcal{L} \right) + \frac{\beta}{2} \|\Delta\|^2, \qquad (6)$$

and the right-hand side shows that $p_i$ plays an important role in the reduction of the expected loss. However, the dependence on $p_i$ is coupled with weight decay, which explains why these two parameters must be studied more deeply before drawing a conclusion about grokking.
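To illustrate the imbalance described by Equation (4), the following is a small self-contained simulation (our own toy construction, not from the paper) that tracks two embeddings, one for a frequently sampled token and one for a rare token, under a fixed gradient and weight decay.

```python
# Toy illustration of Eq. (4): a rarely sampled token's embedding is driven
# mostly by weight decay, while a frequent token's embedding follows its gradient.
# Numbers are arbitrary and chosen only for illustration.
import numpy as np

eta, lam, steps = 0.1, 0.01, 2000
g = np.array([1.0, -0.5])          # assume a (locally) constant gradient g
rng = np.random.default_rng(0)

def simulate(p_i, e0=np.array([2.0, 2.0])):
    e = e0.copy()
    for _ in range(steps):
        sampled = rng.random() < p_i               # token appears in this batch?
        grad = g if sampled else np.zeros_like(g)  # gradient only when sampled
        e = e - eta * lam * e - eta * grad         # Eq. (3), applied per batch
    return e

frequent = simulate(p_i=0.9)
rare = simulate(p_i=0.01)
print("frequent token embedding:", frequent)  # large displacement, gradient-driven
print("rare token embedding:    ", rare)      # small; dominated by weight decay shrinkage
```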
4.2 Dataset Splitting Strategies

To further explore the role of $p_i$, we investigate how train-test splitting strategies affect its value and, consequently, the grokking process. The train-test split determines the probability of token $i$ appearing in a batch. We begin by assuming that the weight decay parameter $\lambda$ is zero and that the learning rate $\eta$ is uniform across all parameters. This reduces the optimization problem to focusing on $p_i$, under the constraints $\sum_{i=1}^{V} p_i = 1$ and $p_i \ge 0$ for all $i$. Specifically, the optimal $p_i$ can be found by solving:

$$\min_{p_i \,|\, p_i \ge 0,\, \sum_i p_i = 1} \; -\eta \sum_{i=1}^{V} p_i \|\nabla_{e_{i,t}} \mathcal{L}\|^2. \qquad (7)$$

However, solving this exactly is challenging in practice due to the need to estimate all embedding gradient norms. Instead, we adopt approximate strategies for splitting the training data, guided by various assumptions about the gradient structure (see Appendix A for details):

1. Uniform Sampling: distribute all combinations of a and b evenly across training and test sets.
2. Skewed Sampling: introduce a bias in the combinations of a that are distributed across training and test sets.
3. Random Sampling: randomly distribute the examples across training and test sets.

These splits enable us to regulate token sampling probabilities, offering a direct assessment of the impact of $p_i$ on embedding convergence and grokking. Furthermore, Section 5.1 provides detailed experiments conducted on the algorithmic datasets.

Figure 4: Training and validation accuracies for the modular datasets with a learning rate of 0.001, comparing different sampling strategies (random, uniform, skewed) across batch sizes (64, 128, 512). Each row corresponds to a batch size, and each column represents a dataset ((a+b) mod p, (a÷b) mod p, (a×b) mod p, (a^2+b^2) mod p). The x-axis is logarithmic to emphasize convergence trends. Uniform sampling promotes faster generalization and convergence compared to random sampling, but the performance gap diminishes for batch sizes exceeding 512. Skewed sampling, while fitting the training data well, fails to generalize, highlighting the detrimental impact of imbalanced token updates.

4.3 Embedding Convergence and Initialization

While the frequency of embedding updates plays a crucial role in training dynamics, as demonstrated in our experiments, it alone cannot fully explain phenomena such as grokking after fitting, its relationship to initialization, weight decay, or the structure of the loss landscape. Stabilization (or convergence) occurs when the embedding $e_i$ reaches a steady state where the updates become negligibly small, i.e., when the change $\|e_{i,t+1} - e_{i,t}\|$ is approximately zero. From Equation (4), this condition implies that $\eta \lambda\, e_{i,t} \approx -\eta p_i \nabla_{e_i} \mathcal{L}$. For small learning rates ($\eta \ll 1$), the embedding updates behave like a continuous system, and we can model this as a differential equation (along every dimension):

$$\frac{d e_i}{dt} = -\lambda e_i - p_i \nabla_{e_i} \mathcal{L}, \qquad (8)$$

where $\nabla_{e_i} \mathcal{L}$ is the gradient of the loss function with respect to embedding $i$. Assuming that the gradient $\nabla_{e_i} \mathcal{L}$ stabilizes to a constant value $g$, the solution to this equation is:

$$e_i(t) = C e^{-\lambda t} - \frac{p_i g}{\lambda}, \qquad (9)$$

where $C$ is an integration constant determined by the initial conditions. As time $t$ increases, the embedding $e_i(t)$ converges to the equilibrium value $e_i(t) \to -\frac{p_i g}{\lambda}$. Thus, convergence is achieved when $e_i(t)$ stabilizes around this equilibrium point. The time $T$ to reach convergence is bounded as $T \ge \frac{1}{\lambda} \ln\!\left(\frac{C}{\epsilon}\right)$, where $\epsilon$ is a small threshold. In summary, convergence time is governed by the embedding gradient $g$, the weight decay $\lambda$, and the initialization magnitude $C$: stronger gradients and larger $\lambda$ accelerate convergence, while larger initial values $C$ slow it down.

In bilinear models such as MLPs and Transformers, embedding gradients are tightly coupled with those of downstream weights (e.g., $W$), forming a feedback loop: poor updates to $E$ degrade $W$, and vice versa. To study the role of initialization in this dynamic, we tested two setups: frozen embeddings, which led to slow convergence due to limited representational flexibility; and small initial embeddings, which improved convergence by allowing stronger early gradients, an effect also observed in prior work [26, 12], though without analyzing embedding-weight coupling.

Figure 5: Sensitivity of test accuracy to the learning rate ratio $c = \eta_E / \eta_W$ across four tasks. Small $c$ leads to under-updating, large $c$ causes instability, and $c = 10$ consistently balances convergence and stability.

Motivated by these observations, we propose the Adam-LR optimizer, which adjusts the embedding learning rate to balance update magnitudes between $E$ and $W$. This coupling-aware scaling is formalized below.

Proposition 4.1. Let $E$ and $W$ be the embedding matrix and first-layer weights. To equalize update scales under cross-entropy loss, the learning rate ratio $c = \frac{\eta_E}{\eta_W}$ should satisfy

$$c \propto \frac{\sigma_{\max}(E)}{\sigma_{\max}(W)} \cdot \frac{f_W}{f_E},$$

where $\sigma_{\max}(\cdot)$ denotes the largest singular value and $f_E, f_W$ are the respective update frequencies (see Appendix B for details).

In practice, we set $c = 10$, guided by empirical singular value trends and supported by a sensitivity analysis (see Figure 5 and Section 5.2). This adjustment improves convergence and stability, especially under the sparse embedding updates common with skewed token distributions.

5 Experiments and Discussions

We begin our exploration with an MLP model. The architecture consists of two layers, where the hidden dimension of the first layer is set to four times the embedding dimension (where four is the sequence length), and the embedding dimension is set to 128, as in prior work on grokking. The second layer has a dimension of $P = 97$. The activation function used throughout is ReLU, and optimization is performed using the Adam optimizer with a weight decay of 0.001.
5.1 The Effect of Embedding Probability

The first set of experiments investigates various strategies for splitting the training and testing datasets. Specifically, we explore three approaches: uniform sampling, skewed sampling, and random sampling. The expression (a+b) mod p represents the sum of a and b modulo p. For our experiments, we randomly set aside 20% of the data as a test set, ensuring that evaluation is performed on unseen samples. From the remaining data, 30/80 (i.e., 30% of the total set) is sampled as the training set according to each sampling strategy.

Figure 4 compares the performance of the sampling methods (random, uniform, skewed) across different splits of the dataset (see Appendix C.4 for further datasets and settings). Rows compare batch sizes and columns compare datasets; the x-axis is logarithmic to emphasize the convergence trends. Uniform sampling generally promotes faster generalization and convergence compared to random sampling. However, its benefits diminish at larger batch sizes (e.g., beyond 512), where random sampling becomes nearly as effective due to broader token coverage. Crucially, our results show that skewed sampling, despite fitting the training data and preserving the overall train-test ratio, consistently leads to suboptimal generalization. This suggests that models can converge to lower accuracy plateaus when token probabilities are heavily imbalanced. Importantly, even uniform sampling does not guarantee optimality: unless the batch size is sufficiently large, some tokens may be consistently omitted from updates. These findings underscore that token probability, both in expectation and in per-batch coverage, plays a central role in embedding dynamics and grokking behavior.

5.2 Comparison of Optimizers

To evaluate the effectiveness of our proposed optimizer, Adam-LR, which incorporates a simple yet effective strategy for treating the embedding layer differently to avoid stagnation or saddle points, we conducted experiments on four datasets. The results are shown in Figure 6, where we compare the performance of the two optimizers, Adam-LR and the standard Adam optimizer, under identical training settings (lr = 0.01, batch size = 512). Using our proposed optimizer, Adam-LR, which scales the embedding learning rate by a factor of 10, the results demonstrate a significant acceleration in the grokking process compared to the baseline Adam optimizer across all datasets. A minimal sketch of this learning-rate split is given below.

Figure 6: Performance comparison of the Adam-LR and Adam optimizers on four algorithmic datasets ((a+b) mod p, (a÷b) mod p, (a×b) mod p, (a^2+b^2) mod p). Adam-LR scales the embedding learning rate based on the singular values of the embedding matrix. This adaptive adjustment accelerates convergence and enhances generalization across all datasets. The results demonstrate that Adam-LR significantly speeds up the grokking process compared to the standard Adam optimizer under identical training settings (lr = 0.01, batch size = 512).
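The paper does not include an implementation of Adam-LR; the following is a minimal sketch of one natural realization in PyTorch, using parameter groups so that the embedding matrix receives a learning rate c times larger than the rest of the model (c = 10, as suggested by Proposition 4.1 and Figure 5). This is our reading of the method, not the authors' reference implementation.

```python
# Minimal sketch of Adam-LR as two Adam parameter groups: the embedding matrix E
# gets a learning rate c times larger than the remaining weights (c = 10 here).
import torch

def make_adam_lr(model, base_lr=0.01, c=10.0, weight_decay=1e-3):
    emb_params = list(model.emb.parameters())            # embedding matrix E
    emb_ids = {id(p) for p in emb_params}
    other_params = [p for p in model.parameters() if id(p) not in emb_ids]
    return torch.optim.Adam(
        [
            {"params": emb_params, "lr": c * base_lr},    # eta_E = c * eta_W
            {"params": other_params, "lr": base_lr},      # eta_W
        ],
        weight_decay=weight_decay,
    )

# Usage with the MLPWithEmbedding sketch shown earlier:
# optimizer = make_adam_lr(MLPWithEmbedding(), base_lr=0.01, c=10.0)
```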
5.3 Analysis of Singular Values of the Embedding Layer

Prior work attributes Adam's superiority over SGD in Transformers to factors like gradient noise, descent direction, and Hessian block heterogeneity [25, 10, 18, 26]. However, these studies largely overlook the role of embeddings and their bilinear interactions. Our analysis supports the view that such bilinear structure, especially in embeddings, contributes significantly to the observed curvature differences (see Appendix C.1 for more discussion). To analyze the curvature of the loss landscape, we compute the maximum eigenvalue of the Hessian matrix using the power method with Hessian-vector products (HVPs). Figure 7 shows the maximum eigenvalues of the Hessian with respect to $E$ and $W$ during training. The results highlight distinct curvature properties for $E$ and $W$, reflecting their roles in the bilinear interaction.

Figure 7: Maximum eigenvalues of the Hessian with respect to embedding weights ($E$) and downstream weights ($W$) during training. The left plot corresponds to the Adam optimizer, while the right plot uses the Adam-LR optimizer (ours). With Adam (left), the eigenvalues for $E$ are significantly smaller than those for $W$, reflecting differences in dimensionality and update frequency. In contrast, with Adam-LR (right), the eigenvalues of $W$ are notably reduced and become closer to those of $E$, suggesting a more balanced optimization dynamic. Training accuracy reaches 100% when the eigenvalues of $W$ begin to decrease, while validation accuracy improves as the eigenvalues of $E$ decrease. This suggests that $W$ drives early optimization progress, while $E$ fine-tunes generalization. The Adam-LR optimizer (ours) appears to regularize $W$, leading to a more stable training process.

6 Discussions

In this study, we explored the interplay between embedding layers and downstream weights in neural networks, highlighting how their bilinear coupling influences optimization and drives the grokking phenomenon. We demonstrated that embedding layers play a central role in delayed generalization and introduced the Adam-LR optimizer to address the imbalance in update dynamics, scaling the embedding learning rate based on singular values and update frequencies. A key limitation of this work is its focus on MLPs, which provide a simplified setting for analyzing embedding-weight coupling. While this enables controlled analysis, it leaves open how these insights transfer to more complex architectures such as Transformers, where similar bilinear interactions appear in attention mechanisms but with added structural complexity. Extending our framework to the Transformer setting is a promising direction for future work.

References

[1] X. Davies, L. Langosco, and D. Krueger. Unifying grokking and double descent. arXiv preprint arXiv:2303.06173, 2023.
[2] D. Doshi, T. He, A. Das, and A. Gromov. Grokking modular polynomials. arXiv preprint arXiv:2406.03495, 2024.
[3] S. Fan, R. Pascanu, and M. Jaggi. Deep grokking: Would deep neural networks generalize better? arXiv preprint arXiv:2405.19454, 2024.
[4] A. Gromov. Grokking modular arithmetic. arXiv preprint arXiv:2301.02679, 2023.
[5] X. S. Huang, F. Perez, J. Ba, and M. Volkovs. Improving transformer optimization through better initialization. In International Conference on Machine Learning, pages 4475–4483. PMLR, 2020.
[6] A. I. Humayun, R. Balestriero, and R. Baraniuk.
Deep networks always grok and here is why. arXiv preprint arXiv:2402.15555, 2024.
[7] A. Jeffares, A. Curth, and M. van der
Schaar. Deep learning through a telescoping lens: A simple model provides empirical insights on grokking, gradient boosting & beyond. Advances in Neural Information Processing Systems, 37:123498–123533, 2024.
[8] S. Kobayashi, Y. Akram, and J. Von Oswald. Weight decay induces low-rank attention layers. Advances in Neural Information Processing Systems, 37:4481–4510, 2024.
[9] T. Kumar. Grokking as the transition from lazy to rich training dynamics. PhD thesis, 2024.
[10] F. Kunstner, J. Chen, J. W. Lavington, and M. Schmidt. Noise is not the main factor behind the gap between SGD and Adam on transformers, but sign descent might be. arXiv preprint arXiv:2304.13960, 2023.
[11] J. Lee, B. G. Kang, K. Kim, and K. M. Lee. Grokfast: Accelerated grokking by amplifying slow gradients. arXiv preprint arXiv:2405.20233, 2024.
[12] Z. Liu, O. Kitouni, N. S. Nolte, E. Michaud, M. Tegmark, and M. Williams. Towards understanding grokking: An effective theory of representation learning. Advances in Neural Information Processing Systems, 35:34651–34663, 2022.
[13] Z. Liu, E. J. Michaud, and M. Tegmark. Omnigrok: Grokking beyond algorithmic data. In The Eleventh International Conference on Learning Representations, 2022.
[14] K. Lyu, J. Jin, Z. Li, S. S. Du, J. D. Lee, and W. Hu. Dichotomy of early and late phase implicit biases can provably induce grokking. arXiv preprint arXiv:2311.18817, 2023.
[15] M. A. Mohamadi, Z. Li, L. Wu, and D. J. Sutherland. Why do you grok? A theoretical analysis of grokking modular addition. arXiv preprint arXiv:2407.12332, 2024.
[16] P. Nakkiran, G. Kaplun, Y. Bansal, T. Yang, B. Barak, and I. Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.
[17] N. Nanda, L. Chan, T. Lieberum, J. Smith, and J. Steinhardt. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217, 2023.
[18] Y. Pan and Y. Li. Toward understanding why Adam converges faster than SGD for transformers. arXiv preprint arXiv:2306.00204, 2023.
[19] A. Power, Y. Burda, H. Edwards, I. Babuschkin, and V. Misra. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177, 2022.
[20] L. Prieto, M. Barsbey, P. A. Mediano, and T. Birdal. Grokking at the edge of numerical stability. arXiv preprint arXiv:2501.04697, 2025.
[21] V. Varma, R. Shah, Z. Kenton, J. Kramár, and R. Kumar. Explaining grokking through circuit efficiency. arXiv preprint arXiv:2309.02390, 2023.
[22] A. Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017.
[23] Z. Xu, Z. Ni, Y. Wang, and W. Hu. Let me grok for you: Accelerating grokking via embedding transfer from a weaker model. arXiv preprint arXiv:2504.13292, 2025.
[24] G. Yang and E. J. Hu. Tensor programs IV: Feature learning in infinite-width neural networks. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 11727–11737. PMLR, 18–24 Jul 2021.
[25] J. Zhang, S. P. Karimireddy, A. Veit, S. Kim, S. Reddi, S.
Kumar, and S. Sra. Why are adaptive methods good for attention models? Advances in Neural Information Processing Systems, 33:15383–15393, 2020.
[26] Y. Zhang, C. Chen, T. Ding, Z. Li, R. Sun, and Z.-Q. Luo. Why transformers need Adam: A hessian perspective. arXiv preprint arXiv:2402.16788, 2024.

Appendix

A Optimizing for Sampling Probability

Uniform Importance Assumption. If we assume that all gradients are equally important, i.e., $\|\nabla_{E_{i,t}} \mathcal{L}\|^2$ is uniform across all embeddings,

$$\|\nabla_{E_{i,t}} \mathcal{L}\|^2 = c, \quad \forall i,$$

where $c$ is a constant, then the optimization of $-\sum_{i=1}^{V} p_i \|\nabla_{E_{i,t}} \mathcal{L}\|^2$ becomes independent of $p_i$. To satisfy the normalization constraint $\sum_{i=1}^{V} p_i = 1$, the optimal solution is:

$$p_i = \frac{1}{V}, \quad \forall i. \qquad (10)$$

This corresponds to a uniform distribution, where all embeddings are treated equally (see Figure 8). While computationally efficient, this approach may lead to suboptimal convergence if some embeddings contribute disproportionately to the loss reduction.

Gradient Norm Bounded by $L_i$. Now, let us assume that the gradient norm for each embedding is bounded,

$$\|\nabla_{E_{i,t}} \mathcal{L}\| \le L_i, \quad \forall i, \qquad (11)$$

where $L_i$ is a known upper bound for embedding $i$. Using this bound, we approximate

$$-\sum_{i=1}^{V} p_i \|\nabla_{E_{i,t}} \mathcal{L}\|^2 \ge -\sum_{i=1}^{V} p_i L_i^2. \qquad (12)$$

To maximize $\sum_{i=1}^{V} p_i L_i^2$ subject to the constraint $\sum_{i=1}^{V} p_i = 1$, we note that the objective function is linear in $p$. Therefore, the maximum is attained at a vertex of the probability simplex, meaning the optimal solution is:

$$p_k = 1, \quad \text{where } k = \arg\max_i L_i^2, \qquad \text{and} \quad p_i = 0, \quad \forall i \ne k. \qquad (13)$$

This result indicates that the optimal probability distribution assigns all weight to the embedding with the highest gradient bound, ignoring all others. Therefore, to obtain a smooth probability distribution, we introduce an entropy regularization term as follows:

$$H(p) = -\sum_{i=1}^{V} p_i \log p_i. \qquad (14)$$

We now optimize the modified objective

$$\sum_{i=1}^{V} p_i L_i^2 + \gamma H(p), \qquad (15)$$

subject to the constraint $\sum_{i=1}^{V} p_i = 1$, where $\gamma > 0$ controls the strength of the regularization. The corresponding Lagrangian is

$$\mathcal{L}_p = \sum_{i=1}^{V} p_i L_i^2 + \gamma \left( -\sum_{i=1}^{V} p_i \log p_i \right) + \mu \left( \sum_{i=1}^{V} p_i - 1 \right). \qquad (16)$$

Taking the derivative with respect to $p_i$ and setting it to zero, we get

$$L_i^2 - \gamma (1 + \log p_i) + \mu = 0. \qquad (17)$$

Solving for $p_i$ gives:

$$\log p_i = \frac{L_i^2 + \mu - \gamma}{\gamma} \;\implies\; p_i = \exp\!\left( \frac{L_i^2 + \mu - \gamma}{\gamma} \right). \qquad (18)$$

Applying the constraint $\sum_{i=1}^{V} p_i = 1$ results in the following solution:

$$p_i^* = \frac{\exp\!\left( L_i^2 / \gamma \right)}{\sum_{j=1}^{V} \exp\!\left( L_j^2 / \gamma \right)}. \qquad (19)$$

This result smoothly distributes probabilities based on the gradient bounds, assigning higher probability to embeddings with larger $L_i^2$ while ensuring a non-degenerate distribution.

Figure 8: Token probabilities in the training and test sets under different sampling strategies (panels: (a) random sampling, (b) skewed sampling, (c) uniform sampling). Imbalanced sampling leads to uneven token occurrences in mini-batches, causing some tokens to be absent in multiple updates while others appear frequently. This results in highly variable gradient updates, where frequently seen tokens converge faster while rare tokens stagnate due to sparse updates, affecting overall model generalization.
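As a quick check of Equation (19), the closed-form distribution is simply a softmax of the squared gradient bounds at temperature gamma. A minimal sketch follows; the gradient bounds used are made up purely for illustration.

```python
# Eq. (19): the entropy-regularized optimum is a softmax over L_i^2 / gamma.
# The per-token gradient bounds L below are hypothetical.
import numpy as np

def entropy_regularized_probs(L, gamma):
    """Return p_i* proportional to exp(L_i^2 / gamma), computed stably."""
    scores = np.asarray(L, dtype=float) ** 2 / gamma
    scores -= scores.max()                 # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

L = [0.5, 1.0, 2.0, 2.1]                   # hypothetical per-token gradient bounds
print(entropy_regularized_probs(L, gamma=10.0))  # nearly uniform (strong regularization)
print(entropy_regularized_probs(L, gamma=0.1))   # concentrates on the largest L_i
```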
B Dynamics of Updates in Bilinear Systems with Initialization Effects

We analyze the interaction between embeddings $E \in \mathbb{R}^{p \times d}$ and weight matrix $W \in \mathbb{R}^{4d \times d}$ in a bilinear term:

$$z(EW), \qquad (20)$$

where $z$ is an activation function applied elementwise. The gradients of $E$ and $W$ are given as:

$$\nabla_E \propto W^\top \nabla_{\text{loss}}, \qquad \nabla_W \propto E^\top \nabla_{\text{loss}}. \qquad (21)$$

The gradient norms are influenced by the dominant singular values of $W$ and $E$. Specifically:

$$\|\nabla_E\| \propto \sigma_{\max}(W), \qquad \|\nabla_W\| \propto \sigma_{\max}(E). \qquad (22)$$

At initialization, $E$ and $W$ are often drawn from distributions with variances that depend on their dimensions (e.g., PyTorch initializes weights with $\mathcal{N}(0, \sqrt{2/d})$ scaling). This initialization typically ensures $\sigma_{\max}(E) \gg \sigma_{\max}(W)$, as $W$ is higher-dimensional, amplifying the difference in gradient magnitudes. The embedding matrix $E$ is updated less frequently than $W$ because not all tokens appear in every batch. Let $f_E$ and $f_W$ represent the update frequencies of $E$ and $W$, respectively. Typically, $f_W > f_E$, exacerbating the update disparity. To balance the effective updates of $E$ and $W$, the learning rates $\eta_E$ and $\eta_W$ must be scaled to account for both their singular values and update frequencies. The effective update ratio is:

$$\frac{\|\Delta E\|}{\|\Delta W\|} \propto \frac{\eta_E \cdot \sigma_{\max}(W) \cdot f_E}{\eta_W \cdot \sigma_{\max}(E) \cdot f_W}. \qquad (23)$$

For proportional updates ($\|\Delta E\| \sim \|\Delta W\|$), the ratio $c = \frac{\eta_E}{\eta_W}$ must satisfy:

$$c \propto \frac{\sigma_{\max}(E)}{\sigma_{\max}(W)} \cdot \frac{f_W}{f_E}. \qquad (24)$$

The term $\frac{\sigma_{\max}(E)}{\sigma_{\max}(W)}$ reflects the imbalance in singular values due to initialization and structural properties. The term $\frac{f_W}{f_E}$ accounts for the frequency imbalance in updates between $E$ and $W$, driven by sparse token appearances in batches. PyTorch initialization, which scales weights by $O(\sqrt{2/d})$, ensures that $\sigma_{\max}(W)$ and $\sigma_{\max}(E)$ are initially proportional to the dimension $d$. This contributes to the observed imbalance in their singular values at the start of training.

Figure 9: Maximum eigenvalues of the Hessian with respect to embedding weights ($E$) and downstream weights ($W$) during training. The left plot corresponds to the Adam optimizer, while the right plot uses the Adam-LR optimizer (ours). With Adam (left), the eigenvalues for $E$ are significantly smaller than those for $W$, reflecting differences in dimensionality and update frequency. In contrast, with Adam-LR (right), the eigenvalues of $W$ are notably reduced and become closer to those of $E$, suggesting a more balanced optimization dynamic. Training accuracy reaches 100% when the eigenvalues of $W$ begin to decrease, while validation accuracy improves as the eigenvalues of $E$ decrease. This suggests that $W$ drives early optimization progress, while $E$ fine-tunes generalization. The Adam-LR optimizer (ours) appears to regularize $W$, leading to a more stable training process.

C More Experiments

C.1 Analysis of Singular Values of the Embedding Layer

Previous studies (e.g., [25], [10], [18], [26]) have explored the gap between SGD and Adam in optimizing Transformer models, but the specific role of embeddings and their bilinearity with downstream weights remains underexplored. For example, [25] attributes SGD's suboptimal performance to the heavy-tailed distribution of stochastic gradient noise. This observation aligns with our findings regarding the randomness in embedding updates for low-$p$ tokens. On the other hand, [10] argues that gradient noise alone cannot explain Adam's superiority. Their experiments demonstrate that, even with full-batch training to eliminate stochastic noise, SGD underperforms compared to Adam.
They suggest that the sign of the gradient might be a more reliable descent direction than its magnitude, and since Adam optimally balances both, it outperforms
SGD, particularly in small-batch settings. Furthermore, [26] provides a novel explanation for Adam's advantage over SGD in Transformers by analyzing the blockwise Hessian spectrum, introducing the concept of "block heterogeneity." This refers to significant variations in the Hessian spectra across parameter blocks, a phenomenon observed in Transformers but not in CNNs. However, the underlying source of this heterogeneity is not explicitly discussed. We hypothesize that it stems from the bilinear nature of the weights, particularly in the embedding and attention mechanisms. To support this hypothesis, we analyze the Hessian of the embedding weights compared to the other weights below.

To analyze the curvature of the loss landscape, we compute the maximum eigenvalue of the Hessian matrix using the power method with Hessian-vector products (HVPs). This approach avoids explicitly constructing the Hessian, making it computationally efficient for large-scale systems. The power method iteratively approximates the maximum eigenvalue of the Hessian $H$ as follows:

1. Initialize a random vector $v_0$ with the same dimensionality as the parameters $[E, W]$.
2. Compute the Hessian-vector product $H v_k$ using automatic differentiation: $H v_k = \nabla_\theta (\nabla_\theta \mathcal{L} \cdot v_k)$, where $\theta = [E, W]$.
3. Normalize the vector and update the eigenvalue estimate: $v_{k+1} = \frac{H v_k}{\|H v_k\|}$, $\sigma_{\max} \approx v_k^\top H v_k$.

Figure 9 shows the maximum eigenvalues of the Hessian with respect to $E$ and $W$ during training. The results highlight distinct curvature properties for $E$ and $W$, reflecting their roles in the bilinear interaction. Extending these insights to attention mechanisms highlights further challenges in bilinear optimization and demonstrates how adaptive learning rates (e.g., Adam) help escape saddle points. This suggests a deeper connection between the bilinearity of weight interactions and the optimization challenges unique to Transformer models.
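A minimal PyTorch sketch of the power iteration described in steps 1-3 above (our own implementation of this standard recipe; the parameter block and loss closure are placeholders):

```python
# Sketch of the power method with Hessian-vector products (HVPs), per steps 1-3.
# `params` selects a parameter block (e.g., the embedding weights or W);
# `loss_fn` is a placeholder closure returning the training loss.
import torch

def max_hessian_eigenvalue(loss_fn, params, iters=50):
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)   # first-order grads
    v = [torch.randn_like(p) for p in params]                      # step 1: random v0
    eig = 0.0
    for _ in range(iters):
        # step 2: Hv = d/dtheta (grad . v), via a second backward pass
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # step 3: Rayleigh-quotient estimate, then normalize
        eig = sum((vi * hi).sum() for vi, hi in zip(v, hv)).item()
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return eig

# Example usage (hypothetical names):
# lambda_E = max_hessian_eigenvalue(lambda: loss(model(x), y), [model.emb.weight])
```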
These findings reinforce that implicit regularization in bilinear systems depends not just on decay strength, but also on the interplay between initialization, update frequency, and | https://arxiv.org/abs/2505.15624v1 |
curvature. 15 0 20 40 60 80 100 Input Dimension (Vocab Index)01020304050607080L2 NormAddition Modular P 0 20 40 60 80 100 Input Dimension (Vocab Index)051015202530L2 NormDivision Modular P 0 20 40 60 80 100 Input Dimension (Vocab Index)01020304050607080L2 NormAddition Modular P with Adam_LRFigure 11: Discrete Fourier analysis of learned embedding representations across tasks. For each embedding matrix, we compute the DFT across the input dimension and the ℓ2-norm across the embedding dimension. Peaks indicate frequency localization that naturally aligns with the periodic structure of the task (e.g., modular addition), while tasks like modular division show more diffuse spectra. C.3 Fourier Analysis of Embedding Representations Fourier features offer a structured way to encode modular arithmetic directly into the input space. By encoding periodicity into the representation, such features can bypass the need for learned embeddings and mitigate challenges like sparse updates for rare tokens. However, this approach requires prior knowledge of the task’s structure—e.g., periodicity—which may not apply in more complex tasks such as modular division or nonlinear compositions. To investigate whether embedding layers naturally learn such structure, we analyze their frequency characteristics. Following the approach in [ 12], we apply the Discrete Fourier Transform (DFT) along the input dimension of the embedding matrix and compute the ℓ2-norm across the embedding dimension. We then plot the first P/2components, leveraging the symmetry of the DFT. The results for different tasks are shown in Figure 11. Clear frequency peaks indicate that the model internally captures task-specific periodic structure. Notably, such structure emerges even without explicit Fourier features, especially for modular addition and multiplication. However, in more complex tasks, such as modular division, this frequency localization diminishes—suggesting the limits of periodic encoding and the growing need for learned representations. C.4 Additional Datasets and Learning Rate Sensitivity We emphasize that our experimental design is not centered on hyperparameter optimization. While aggressive tuning of learning rates and batch sizes can suppress or delay grokking, our goal is to study it where it naturally occurs. To that end, we identify configurations where grokking persists and focus our analysis there. This approach aligns with prior work on mechanistic understanding of grokking [ 9,14], which likewise prioritize clarity of dynamics over benchmark performance. For illustration, Figures 12 and 13 show learning rate sensitivity on four datasets, confirming the robustness of our findings across reasonable settings (skewed distribution of embedding update delay the generalization). Compute Resources All experiments were conducted using an NVIDIA A6000 GPU. Training runs were performed using PyTorch, with each configuration fitting comfortably within the GPU’s 48 GB memory. No distributed training or multi-GPU setups were used. 16 Optimization steps0.00.20.40.60.81.0AccuracyBatch Size = 64 random - Train random - T est skew - Train skew - T est uniform - Train uniform - T est Optimization steps0.00.20.40.60.81.0AccuracyBatch Size = 128 random - Train random - T est skew - Train skew - T est uniform - Train uniform - T est 101 102 103 104 105 Optimization steps0.00.20.40.60.81.0AccuracyBatch Size = 512 random - Train random - T est skew - Train skew - T est uniform - Train uniform - | https://arxiv.org/abs/2505.15624v1 |
[Figure 12 panels omitted: training and test accuracy vs. optimization steps for random, skewed, and uniform data distributions at several batch sizes.] Figure 12: Training and validation accuracies for the four modular arithmetic datasets, (a) (a+b) mod p, (b) (a÷b) mod p, (c) (a×b) mod p, and (d) (a²+b²) mod p, for learning rate 0.01 across batch sizes (256, 512, 1024). | https://arxiv.org/abs/2505.15624v1 |
[Figure 13 panels omitted: training and test accuracy vs. optimization steps for random, skewed, and uniform data distributions at several batch sizes.] Figure 13: Training and validation accuracies for the four modular arithmetic datasets, (a) (a+b) mod p, (b) (a÷b) mod p, (c) (a×b) mod p, and (d) (a²+b²) mod p, for learning rate 0.005 across batch sizes (256, 512, 1024). | https://arxiv.org/abs/2505.15624v1 |
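For reference, here is a minimal sketch of one way to enumerate the four modular arithmetic datasets referenced in Figures 12 and 13. The modulus p, the 50/50 split, and the definition of modular division via the modular inverse are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Hedged sketch: one plausible way to enumerate the four modular arithmetic tasks
# shown in Figures 12 and 13. The modulus p, the train/test split ratio, and the
# treatment of modular division are assumptions for illustration.
import random

def make_dataset(task: str, p: int = 97, train_frac: float = 0.5, seed: int = 0):
    """Return (train, test) lists of ((a, b), label) pairs for one modular task."""
    pairs = []
    for a in range(p):
        for b in range(p):
            if task == "add":            # (a + b) mod p
                y = (a + b) % p
            elif task == "mul":          # (a * b) mod p
                y = (a * b) % p
            elif task == "div":          # (a / b) mod p, defined for b != 0
                if b == 0:
                    continue
                y = (a * pow(b, -1, p)) % p   # modular inverse exists since p is prime
            elif task == "square_sum":   # (a^2 + b^2) mod p
                y = (a * a + b * b) % p
            else:
                raise ValueError(f"unknown task: {task}")
            pairs.append(((a, b), y))
    random.Random(seed).shuffle(pairs)
    cut = int(train_frac * len(pairs))
    return pairs[:cut], pairs[cut:]

# Example: build the modular division dataset used in panel (b).
train_pairs, test_pairs = make_dataset("div")
```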
arXiv:2505.15633v1 [cs.CL] 21 May 2025Listen to the Context: Towards Faithful Large Language Models for Retrieval Augmented Generation on Climate Questions David Thulke1,2Jakob Kemmler1,2Christian Dugast2Hermann Ney1,2 1Machine Learning and Human Language Technology, RWTH Aachen University, Germany 2AppTek GmbH, Aachen, Germany {thulke,jakob.kemmler,ney}@hltpr.rwth-aachen.de, cdugast@apptek.com Abstract Large language models that use retrieval aug- mented generation have the potential to unlock valuable knowledge for researchers, policymak- ers, and the public by making long and techni- cal climate-related documents more accessible. While this approach can help alleviate factual hallucinations by relying on retrieved passages as additional context, its effectiveness depends on whether the model’s output remains faithful to these passages. To address this, we explore the automatic assessment of faithfulness of dif- ferent models in this setting. We then focus on ClimateGPT, a large language model spe- cialised in climate science, to examine which factors in its instruction fine-tuning impact the model’s faithfulness. By excluding unfaithful subsets of the model’s training data, we develop ClimateGPT Faithful+, which achieves an im- provement in faithfulness from 30% to 57% in supported atomic claims according to our automatic metric. 1 Introduction As the urgency of climate action intensifies (Calvin et al., 2023), researchers, policymakers, and the public require efficient access to reliable climate information. Large language models (LLMs) have emerged as powerful tools for processing and sum- marizing vast amounts of climate-related litera- ture. However, these models often suffer from factual hallucinations, where generated responses contain incorrect or misleading information. One promising approach to mitigating this issue is Re- trieval Augmented Generation (RAG), which en- ables models to ground their responses in retrieved passages from trusted sources. Several climate- related RAG-based systems have been developed, including ChatClimate (Vaghefi et al., 2023), Cli- mateGPT (Thulke et al., 2024), and ClimateQ&A (Calzada et al., 2024). Despite these advancements, challenges remain in ensuring that generated out- puts remain faithful to the retrieved content.Prompt What is the purpose of the Global Stocktake? Context [0] Recalling Article 14 of the Paris Agreement, the objective of the Global Stocktake is to assess the collective progress towards achieving [...] [1]·Paragraph 34 of 19/CMA.1 decides that the outputs of this component of the global stocktake should: a) Identify opportunities for and [...] [2] The SBSTA and the SBI invited submissions on views on the approach to the consideration of outputs component of the first global [...] ClimateGPT 7B The Global Stocktake is a key component of the Paris Agreement, scheduled to begin in 2023. It involves a comprehensive assessment of global progress in addressing climate change, considering scientific evidence, economic implications, and societal impacts. The Stocktake aims to inform the updates to countries’ climate commitments, known as “nationally determined contributions” (NDCs), to ensure they are in line with the agreement’s objectives and targets. [...] 
ClimateGPT 7B Faithful+ The Global Stocktake's purpose, as outlined in Article 14 of the Paris Agreement, is to assess the collective progress made towards achieving the Agreement's purpose and long-term goals, which encompass mitigation, adaptation, and means of implementation. This assessment is conducted considering equity and the best available science. The Global Stocktake | https://arxiv.org/abs/2505.15633v1 |
process spans two years, beginning with information collection and preparation and culminating in a consideration of [...]

Figure 1: Shortened example from the original ClimateGPT and the Faithful+ variant on one example from the Climate Policy Radar task. Text marked in red is not faithful, i.e. it is not supported by the given context. The full example is shown in Figure 3.

Faithfulness, in this context, refers to the extent to which a model's response accurately reflects the information contained in the retrieved passages without introducing extraneous or misleading details. Importantly, factuality does not imply faithfulness. A response may be factually correct with respect to general world knowledge but still unfaithful if the information is not supported by the retrieved passages, as shown in Figure 1. A lack of faithfulness undermines trust in these models, particularly in the climate domain, where misinformation has significant real-world consequences. Moreover, we argue that faithfulness is even more important than general factuality in this setting, as large language models may inevitably hallucinate when faced with long-tail or rare knowledge. By requiring that all factual information in a response originates from the provided context, we can mitigate the risk of such hallucinations and ensure that model outputs are transparent, verifiable, and aligned with the available evidence. Thus, evaluating and improving faithfulness is a crucial step in enhancing the reliability of climate-focused LLMs.

In this work, we investigate methods for automatically assessing the faithfulness of RAG-based models in climate-related applications. We then focus on ClimateGPT (Thulke et al., 2024), a specialised open-weight LLM trained on climate-related texts, to examine how different instruction fine-tuning (IFT) datasets influence faithfulness. By excluding parts of the training data with low faithfulness, we propose a new model, ClimateGPT Faithful+, that on our main benchmark increases the percentage of supported claims from 30% to 57%.

2 Faithfulness and Factuality

Our definition of faithfulness and factuality follows the work of Dziri et al. (2022) and Huang et al. (2025). Given a question q, a set of N retrieved passages K = (k_1, k_2, ..., k_N) from a knowledge base KB, and a response r, we define the faithfulness of r with respect to K as the requirement that r is supported by the information in K, i.e., r should not contain any information that contradicts the information in K or is not present in K. Factuality, on the other hand, refers to the correctness of the information in r with respect to general world knowledge. In our context, we assume that the relevant world knowledge is contained in KB. Thus, we consider a response r to be factual if it is faithful to KB.

2.1 Evaluation

To assess both the faithfulness and factuality of long-form responses, we build upon existing automated evaluation approaches, particularly RAGAs (Es et al., 2024) for faithfulness and FActScore (Min et al., 2023) and VeriScore (Song et al., 2024) for factuality. These methods share a common three-step pipeline: (1) claim decomposition, (2) evidence retrieval, and (3) claim verification. The main differences in evaluating for faithfulness versus factuality lie in | https://arxiv.org/abs/2505.15633v1 |
the evidence retrieval step, as we describe below.

Claim Decomposition. As long-form responses are typically composed of multiple claims, we first decompose the response into smaller and independent claims to simplify the subsequent steps. Given a response r, we decompose it into a set of claims C = {c_1, ..., c_I}. The definition of a claim and the granularity of the decomposition differ between variants and use cases. In this work, we use the claim decomposition method from RAGAs (Es et al., 2024), which prompts a large language model to decompose the full response into smaller claims in one step.

Evidence Retrieval. The key distinction between evaluating faithfulness and factuality lies in this step. For faithfulness evaluation, we directly use the retrieved passages K = (k_1, k_2, ..., k_N) from the RAG process as evidence. In contrast, for factuality evaluation, relevant evidence for each claim c_i is retrieved from a knowledge base KB. In this work, we use the retrieval mechanism that is also used for RAG.

Claim Verification. Finally, for each claim, we verify whether it is supported by the retrieved evidence. Therefore, we use an LLM to classify each claim c_i given the retrieved evidence (multiple retrieved evidence passages are concatenated into a single evidence). Similar to other work (Song et al., 2024), we do not differentiate between refuting and unrelated evidence. The overall faithfulness and factuality scores of r are then aggregated from these individual claim verifications by reporting the percentage of supported claims.

Implementation Details. The exact prompts we used for each step are reported in Appendix B. GPT-4o (version gpt-4o-2024-08-06) is used as the large language model.

| Model | #Tokens (Trillion) | #Parameters (Billion) | RAG | Avg. #Claims | Claim Support wrt. Ref. [%] | Claim Support wrt. KB [%] |
|---|---|---|---|---|---|---|
| LLama 3.1 Instruct | 15 | 8 | - | 22.7 | - | 59 |
| LLama 3.1 Instruct | 15 | 8 | ✓ | 17.3 | 67 | 72 |
| LLama 2 Chat | 2 | 7 | - | 23.3 | - | 60 |
| LLama 2 Chat | 2 | 7 | ✓ | 21.2 | 48 | 65 |
| ClimateGPT | 2 | 7 | - | 21.6 | - | 59 |
| ClimateGPT | 2 | 7 | ✓ | 21.1 | 30 | 61 |
| ClimateGPT Faithful+ (ours) | 2 | 7 | - | 20.2 | - | 57 |
| ClimateGPT Faithful+ (ours) | 2 | 7 | ✓ | 19.2 | 57 | 69 |

Table 1: Results for claim support wrt. the reference, as a metric of faithfulness, and wrt. the knowledge base (KB), as a metric of factuality, for different large language models with and without RAG.

3 ClimateGPT IFT Evaluation Task

We use the same evaluation dataset and RAG setup as Thulke et al. (2024) to evaluate the faithfulness and factuality of the generated responses. The test set is a held-out portion of the IFT data curated to train ClimateGPT. It was created in cooperation with domain experts and contains different open-ended tasks like QA, text generation, classification, chat, and brainstorming, as well as closed-ended tasks like summarisation, extraction, or rewrite. Our evaluation focuses on the subset of open-ended prompts of the held-out data (334 out of the 400 samples).

3.1 Information Retrieval

We use the dataset and retrieval pipeline as described by Thulke et al. (2024) for retrieving relevant contexts in our faithfulness evaluation. The dataset consists of climate-related documents from various sources, including IPCC | https://arxiv.org/abs/2505.15633v1 |
reports and climate science related papers (see Appendix C for detailed statistics). For retrieval, we employ thebge-large-en-v1.5 embedding model (Xiao et al., 2024) and a hierarchical retrieval strategy where we first retrieve the most relevant pages based on the query, selecting the top 5 ranked pages. Then, within these, we retrieve the top 5 most rele- vant 115-token snippets. 3.2 Large Language Models We experiment with several language models in addition to ClimateGPT. As baselines, we include the 7B parameter variants of Llama 2 Chat (which shares the same foundation model as ClimateGPT) and Llama 3.1 Instruct. Further, we report resultson the 70B parameters variants as well as on GPT- 4o in Table 4 in the appendix. For all baseline mod- els, we use a standardized RAG prompt that explic- itly instructs the model to base its response solely on the provided references1. Both the user ques- tion and retrieved references are included within the user message to ensure a consistent evaluation setup. For ClimateGPT, we leverage its dedicated context role, which was introduced during train- ing to optimize reference usage. We also use the model’s default system prompt to align with its intended deployment configuration. 3.3 Results We report the results with our faithfulness and fac- tuality metrics for the small models in Table 1. Results of all models are reported in Appendix A. Overall, we observe that the more recent Llama 3.1 has significantly higher faithfulness than the predecessor Llama 2. For ClimateGPT, we observe that the faithfulness, as measured by claim sup- port, is very low. Further, in contrast to the other models, using RAG with ClimateGPT does only slightly improve the claim support wrt. to the KB, i.e. the factuality. This is a strong indicator that the model does not make effective use of the provided paragraphs. Factuality, i.e. claim support in the knowledge base might be underestimated. By looking at claims that are not supported by the knowledge base, we identify multiple instances of claims that are factual but where we fail to retrieve the rele- vant evidence. This either occurs due to the limited size of our knowledge base or due to a failure on 1Full prompt in Appendix D. Avg. Claim Support wrt. Source Subset Size #Claims Ref. [%] Senior Expert Grounded 74 8.6 93 Expert Grounded 403 13.1 52 Non-Expert Open-Ended 8,503 19.1 - Closed-Ended 1,160 10.0 90 (Open-Ended) Grounded 2,368 19.0 43 (Closed-Ended) Grounded 1,024 9.6 91 Table 2: Climate-specific subsets of the ClimateGPT IFT data. For the closed-ended examples, claim support wrt. reference refers to the context given in the prompt and for grounded examples it refers to the given paragraphs. retrieval. For an assessment of factuality, we there- fore note that the reported metric should just be considered as a lower bound and more accurate results could be achieved. 4 Ablation of the IFT Data Motivated by the suboptimal faithfulness of Cli- mateGPT, especially compared to Llama 2 Chat, we want to study the post-training of the model. We focus on the IFT step as we do not expect that the | https://arxiv.org/abs/2505.15633v1 |
continued pre-training step has a significant impact on the faithfulness of the model. The IFT data of ClimateGPT consists of a general domain partition and a climate-specific partition that was specifically curated to train the model. The different subsets of the latter are listed in Table 2. A small portion of the data was generated in close cooperation with domain experts (Exp.), and the larger set generated by non-experts (Non-Exp.). In closed-ended ques- tions, the model is given a reference text to perform its task, such as creating a summary of that text or extracting specific information from it. In contrast, foropen-ended questions, no additional explicit references are given in the prompt, and the model is expected to use its parametric knowledge or to retrieve additional sources via RAG. Grounded refers to examples where additional context is provided to the model as it would be the case when RAG is used during inference. For the expert and senior expert subsets, these references were directly provided during annotation. In the case of the non-expert subset, annotators only pro- vided one or multiple URLs to sources the answer is based on. For a subset of the dataset, these URLs were crawled, chunked and Thulke et al. (2024) used a heuristic2to select the most relevant chunk as context for the response. Additionally, for each 2See Section 4.3 in Thulke et al. (2024) for more details.example up to four distractor paragraphs from other documents were selected to make the model more robust to noisy retrieval results. For closed-ended questions, only distractors were added as all the relevant content is already provided in the prompt. We start our investigation by analysing the faith- fulness of the gold responses in the IFT data with respect to their context. For closed-ended ques- tions, we use the full prompt as context and for the grounded questions, the selected context para- graphs. The percentage of supported claims for each subset as well as the average number of claims per response are reported in Table 2. We notice that the Grounded Senior Expert and Closed-Ended Non-Expert are faithful to their context with 93% and 90% of claims being supported. The faithful- ness of the Grounded Expert data is already much lower with only 52% claim support. Upon closer inspection, we found that the annotators only pro- vided grounding passages for crucial claims in the response. Finally, we observe the lowest level of faithfulness for the Open-Ended Grounded Non- Expert data with only 43% claim support. Next, we repeated the IFT step on different sub- sets of the data to observe the effect on the faithful- ness on the final model. The results are reported in Table 3. As anticipated from our previous anal- ysis of the IFT subsets, excluding the grounded non-expert data significantly increases the claim support from 30% to 57%. Furthermore, excluding the closed-ended but not grounded non-expert data reduces the claim support again to 49%. This indi- cates that closed-ended examples with high faithful- ness seem to improve the faithfulness of the model despite the context being provided directly | https://arxiv.org/abs/2505.15633v1 |
in the prompt, rather than via retrieval. Finally, removing the open-ended non-expert data without grounding does not have a significant effect on the claim sup- port. For the final model, ClimateGPT Faithful+ Other Open-End. Closed-End. Grounded Avg. Claim Support Size 65,000 8,503 1,160 3,328 #Claims wrt. Ref [%] ClimateGPT 7B ✓ ✓ ✓ ✓ 21.1 30 ✓ ✓ ✓ - 19.2 57 ✓ ✓ - - 18.9 49 ✓ - ✓ - 20.1 58 ✓ - - - 20.4 53 Table 3: Ablation study results showing test-time claim support for different training data combinations. we still include the open-ended data as we expect it to improve other aspects. 4.1 Validation on Alternate Metrics and Tasks To validate the generalizability and robustness of our improvements, we conducted additional experi- ments on a RAG dataset from Climate Policy Radar (Juhasz et al., 2024) focussing on questions on cli- mate policy documents. For ClimateGPT Faithful+, we observe a similar improvement in faithfulness with an improvement in claim support from 44% to 58%. More details are discussed in Appendix F. Further, we confirm the results on the Cli- mateGPT IFT Task by using an additional faith- fulness metric (LettuceDetect, Ádám Kovács and Recski (2025)) and observe an improvement from 6% to 34% completely faithful responses with the Faithful+. Details are discussed in Appendix G. 5 Related Work Similar to our work, Schimanski et al. (2024) study the faithfulness of a RAG system on climate ques- tions. They restrict the output of the model so that one sentence always corresponds to exactly one reference passage and verify the faithfulness using an NLI model. This way they avoid the claim de- composition step. They also fine-tune the model on a synthetic dataset following these constraints to improve faithfulness. Our work focuses on improv- ing faithfulness by fine-tuning on more complex human written responses. In addition to the faithfulness evaluation ap- proaches discussed in this work, there are other approaches to evaluate faithfulness of text gener- ation. Early work on document-grounded dialog used simple overlap based metrics like unigram F1 scores between the response and retrieved pas- sages as a proxy for faithfulness (Dinan et al., 2019; Thulke et al., 2023). Fadeeva et al. (2024) makeuse of uncertainty quantification to evaluate the fac- tuality of generated responses. Other work does not consider the claim decomposition step and di- rectly verify the full response against the reference (Honovich et al., 2022; Juhasz et al., 2024; Ádám Kovács and Recski, 2025). 6 Conclusion Ensuring faithfulness of LLM outputs is crucial for improving the reliability of climate-related RAG se- tups. Our study evaluates automated faithfulness as- sessment methods. According to our metric, recent LLMs like Llama 3.1 Instruct and GPT-4o provide much higher faithfulness than Llama 2 Chat or the climate-specific ClimateGPT model. Based on our experiments, we assume that the main difference comes from the instruction fine-tuning and other post-training steps and not from the pre-training. For ClimateGPT, we then do a detailed analy- sis, which subsets of the IFT data are most impor- tant for faithfulness. We show that faithful | https://arxiv.org/abs/2505.15633v1 |
closed- ended prompts in training also improve the faith- fulness in the context of RAG and that it is crucial to avoid unfaithful training examples in the IFT data. With these insights, we develop ClimateGPT Faithful+ which improves ClimateGPT’s faithful- ness from 30% to 57% according to our automatic metric. These results are confirmed by additional experiments on an additional task as well as by using an additional metric to measure faithfulness. These initial findings point to promising direc- tions for future work. Rather than discarding un- faithful training examples, one potential approach is to enrich them by retrieving supporting passages for each claim and using those passages as context during training. For cases where no suitable evi- dence can be retrieved, synthetic context could be generated using a LLM. This would keep a larger portion of the data while still encouraging faithful model behaviour. Limitations In this work, we discuss results from our ongoing work towards more faithful LLMs for RAG on cli- mate questions. While our preliminary results are promising, there are still many open questions and limitations. While RAGAs (Es et al., 2024) is a popular approach to evaluate faithfulness with RAG, we did not perform a systematic evaluation of its performance in the context of the task at hand. Spot-checking of results during the development progress indicated that the metric is reliable enough for our purposes. We tested the approach on two relevant climate datasets from the literature. The results are reported in Appendices F and I, but the results are inconclusive. Thus, a more thorough human evaluation is needed to fully verify the ade- quacy of the metric for the task and to validate the improvements reported in this work. The claim verification step in our pipeline cur- rently focuses on verifying direct support via a given evidence passage. This approach works well if the claim is directly expressed in the given pas- sage and we can consider its content as truth. In the context of evaluating faithfulness one can argue that this is a valid assumption. But as soon as we want to also apply these methods to evaluate the factuality of more complex claims, this does not hold any more. Often claims are not directly stated in a retrieved passage and more complex reasoning is required to identify the support. Claims might express opinions or more holistic statements that require support from multiple sources to be consid- ered as supported. Also, a binary decision between supporting and not supporting might not be ade- quate in many cases, or more nuance is needed. More complex claim verification approaches as proposed by Leippold et al. (2025) partially ad- dress many of these points but are also much more complex and computationally expensive than the approach we use. The behavior of a RAG system is highly depen- dent on the relevancy and adequacy of the retrieved passages. In this work, we adopted the same knowl- edge base and retrieval method as used by Thulke et al. (2024). Thus, our results are also limited to this specific setting and | https://arxiv.org/abs/2505.15633v1 |
generalization to other settings needs to be studied. Furthermore, the size of the knowledge base and the accuracy of the re- trieval method limit the accuracy of the factuality evaluation during the evidence retrieval step. Addi-tional analysis would be needed to study the impact of these factors on the claim support wrt. the knowl- edge base, we consider as a proxy for factuality. The ablation experiments on the IFT data fo- cus on the climate-specific subsets. We did not study the impact of the general domain IFT datasets included in IFT training, such as Open Assistant (Köpf et al., 2023), Dolly3and FLAN v2 (Long- pre et al., 2023). Further, ClimateGPT is based on Llama 2. In our experiments, we observed higher faithfulness for Llama 3.1 Instruct than for Llama 2 Chat. The impact of the pre-training compared to different post-training steps on the faithfulness of the model remains unclear. Finally, during our evaluation we only focused on claim support wrt. the reference and knowledge base which we consider as a proxy for faithful- ness and factuality. We do not consider additional quality factors like the helpfulness or adequacy of generated responses. In some cases, a less faithful output can actually be more helpful or relevant. For example the unfaithful parts in Figure 1 like the information on the year might actually make the response more helpful for some users. References Katherine Calvin, Dipak Dasgupta, Gerhard Krinner, Aditi Mukherji, Peter W. Thorne, Christopher Trisos, José Romero, Paulina Aldunce, Ko Barrett, Gabriel Blanco, William W.L. Cheung, Sarah Connors, Fa- tima Denton, Aïda Diongue-Niang, David Dodman, Matthias Garschagen, Oliver Geden, Bronwyn Hay- ward, Christopher Jones, Frank Jotzo, Thelma Krug, Rodel Lasco, Yune-Yi Lee, Valérie Masson-Delmotte, Malte Meinshausen, Katja Mintenbeck, Abdalah Mokssit, Friederike E.L. Otto, Minal Pathak, Anna Pirani, Elvira Poloczanska, Hans-Otto Pörtner, Aro- mar Revi, Debra C. Roberts, Joyashree Roy, Alex C. Ruane, Jim Skea, Priyadarshi R. Shukla, Raphael Slade, Aimée Slangen, Youba Sokona, Anna A. Sörensson, Melinda Tignor, Detlef van Vuuren, Yi- Ming Wei, Harald Winkler, Panmao Zhai, Zinta Zom- mers, Jean-Charles Hourcade, Francis X. Johnson, Shonali Pachauri, Nicholas P. Simpson, Chandni Singh, Adelle Thomas, Edmond Totin, Andrés Ale- gría, Kyle Armour, Birgit Bednar-Friedl, Kornelis Blok, Guéladio Cissé, Frank Dentener, Siri Erik- sen, Erich Fischer, Gregory Garner, Céline Guivarch, Marjolijn Haasnoot, Gerrit Hansen, Mathias Hauser, Ed Hawkins, Tim Hermans, Robert Kopp, Noëmie Leprince-Ringuet, Jared Lewis, Debora Ley, Chloé Ludden, Leila Niamir, Zebedee Nicholls, Shreya Some, Sophie Szopa, Blair Trewin, Kaj-Ivar van der Wijst, Gundula Winter, Maximilian Witting, Arlene 3https://huggingface.co/datasets/databricks/ databricks-dolly-15k Birt, and Meeyoung Ha. 2023. Ipcc, 2023: Climate change 2023: Synthesis report. contribution of work- ing groups i, ii and iii to the sixth assessment report of the intergovernmental panel on climate change. Natalia De La Calzada, Théo Alves Da Costa, Annabelle Blangero, and Nicolas Chesneau. 2024. Climateq&a: Bridging the gap between climate scientists and the general public. Preprint , arXiv:2403.14709. Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bu- lian, Massimiliano Ciaramita, and Markus Leip- pold. 2021. Climate-fever: A dataset for veri- fication of real-world climate claims. Preprint , arXiv:2012.00614. Emily Dinan, Stephen Roller, Kurt Shuster, | https://arxiv.org/abs/2505.15633v1 |
Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations . Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Os- mar Zaiane, Mo Yu, Edoardo M. Ponti, and Siva Reddy. 2022. FaithDial: A faithful benchmark for information-seeking dialogue. Transactions of the Association for Computational Linguistics , 10:1473– 1490. Shahul Es, Jithin James, Luis Espinosa Anke, and Steven Schockaert. 2024. RAGAs: Automated evalu- ation of retrieval augmented generation. In Proceed- ings of the 18th Conference of the European Chap- ter of the Association for Computational Linguistics: System Demonstrations , pages 150–158, St. Julians, Malta. Association for Computational Linguistics. Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, Hamdy Mubarak, Evgenii Tsymbalov, Gleb Kuzmin, Alexan- der Panchenko, Timothy Baldwin, Preslav Nakov, and Maxim Panov. 2024. Fact-checking the output of large language models via token-level uncertainty quantification. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 9367– 9385, Bangkok, Thailand. Association for Computa- tional Linguistics. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 3905–3920, Seattle, United States. Association for Computational Lin- guistics. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A survey on hallucination in large lan- guage models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. , 43(2).Matyas Juhasz, Kalyan Dutia, Henry Franks, Conor Delahunty, Patrick Fawbert Mills, and Harrison Pim. 2024. Responsible retrieval augmented gen- eration for climate decision making from documents. Preprint , arXiv:2410.23902. Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Minh Nguyen, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Alexandrovich Glushkov, Arnav Varma Dan- tuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Julian Mattick. 2023. Ope- nassistant conversations - democratizing large lan- guage model alignment. In Thirty-seventh Con- ference on Neural Information Processing Systems Datasets and Benchmarks Track . Markus Leippold, Saeid Ashraf Vaghefi, Dominik Stammbach, Veruska Muccione, Julia Bingler, Jing- wei Ni, Chiara Colesanti Senni, Tobias Wekhof, To- bias Schimanski, Glen Gostlow, Tingyu Yu, Juerg Luterbacher, and Christian Huggel. 2025. Automated fact-checking of climate claims with large language models. npj Climate Action , 4(1). Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning , volume 202 of Proceedings of Machine Learning Research , pages 22631–22648. PMLR. 
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettle- moyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods | https://arxiv.org/abs/2505.15633v1 |
in Natural Language Processing , pages 12076–12100, Singa- pore. Association for Computational Linguistics. Cheng Niu, Yuanhao Wu, Juno Zhu, Siliang Xu, KaShun Shum, Randy Zhong, Juntong Song, and Tong Zhang. 2024. RAGTruth: A hallucination cor- pus for developing trustworthy retrieval-augmented language models. In Proceedings of the 62nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 10862– 10878, Bangkok, Thailand. Association for Compu- tational Linguistics. Tobias Schimanski, Jingwei Ni, Mathias Kraus, Elliott Ash, and Markus Leippold. 2024. Towards faith- ful and robust LLM specialists for evidence-based question-answering. In Proceedings of the 62nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1913– 1931, Bangkok, Thailand. Association for Computa- tional Linguistics. Yixiao Song, Yekyung Kim, and Mohit Iyyer. 2024. VeriScore: Evaluating the factuality of verifiable claims in long-form text generation. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 9447–9474, Miami, Florida, USA. Association for Computational Linguistics. David Thulke, Nico Daheim, Christian Dugast, and Her- mann Ney. 2023. Task-oriented document-grounded dialog systems by hltpr@rwth for dstc9 and dstc10. IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing , 32:733–741. David Thulke, Yingbo Gao, Petrus Pelser, Rein Brune, Rricha Jalota, Floris Fok, Michael Ramos, Ian van Wyk, Abdallah Nasir, Hayden Goldstein, Taylor Tragemann, Katie Nguyen, Ariana Fowler, Andrew Stanco, Jon Gabriel, Jordan Taylor, Dean Moro, Ev- genii Tsymbalov, Juliette de Waal, Evgeny Matusov, Mudar Yaghi, Mohammad Shihadah, Hermann Ney, Christian Dugast, Jonathan Dotan, and Daniel Eras- mus. 2024. Climategpt: Towards ai synthesizing in- terdisciplinary research on climate change. Preprint , arXiv:2401.09646. Saeid Ashraf Vaghefi, Dominik Stammbach, Veruska Muccione, Julia Bingler, Jingwei Ni, Mathias Kraus, Simon Allen, Chiara Colesanti-Senni, To- bias Wekhof, Tobias Schimanski, Glen Gostlow, Tingyu Yu, Qian Wang, Nicolas Webersinke, Chris- tian Huggel, and Markus Leippold. 2023. Chatcli- mate: Grounding conversational ai in climate science. Communications Earth & Environment , 4(1). Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, and Iacopo Poli. 2024. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. Preprint , arXiv:2412.13663. Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muen- nighoff, Defu Lian, and Jian-Yun Nie. 2024. C-pack: Packed resources for general chinese embeddings. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval , SIGIR ’24, page 641–649, New York, NY , USA. Association for Computing Machin- ery. Ádám Kovács and Gábor Recski. 2025. Lettucedetect: A hallucination detection framework for rag applica- tions. Preprint , arXiv:2502.17125. A Full Results on the ClimateGPT IFT Task Table 4 shows the claim support of all models that we tested on the ClimateGPT IFT task. In contrast to the table in the main part of the paper, here we also report the claim support wrt. the reference for the case that no RAG was used. 
As the reference is not given to the model as additional input, we do expect low claim support. The | https://arxiv.org/abs/2505.15633v1 |
value is interest- ing as an indication for the percentage of claims that are faithful to the reference by chance. Inter- estingly, we observe that the claim support of the original ClimateGPT models is close to this value. This further supports the interpretation that these models do not make effective use of the provided context. We omitted these results in the main part of the paper for better clarity as they are not directly relevant to the main claims of the paper. B Evaluation Prompts Listing 1 and Listing 2 show the prompts that were used for the claim extraction and verification steps in the evaluation pipeline. Both prompts are based on the implementation of RAGAs4(Es et al., 2024). C Knowledge Base Details Table 5 shows the statistics of the ClimateGPT knowledge base. D RAG Prompts Listing 3 shows the prompt used in RAG for infer- ence for all models except ClimateGPT. Listing 3: Prompt used in RAG for inference for all models except ClimateGPT. You're a helpful assistant supporting users with their questions on climate change . Answer the question based on the given contexts . Make sure to only use information that is fully grounded in the contexts . Context : [[0]] "{ passage [0]. title }", { passage [0]. year } { passage [0]. content } {...} [[4]] "{ passage [4]. title }", { passage [4]. year } { passage [4]. content } Question : { question } E Training Details In our training pipeline, we follow the setup from Thulke et al. (2024). The models are trained using 4https://github.com/explodinggradients/ragasMegatron-LLM5a fork of NVIDIA’s Megatron- LM6by the EPFL LLM team. A cosine learning rate schedule with a peak LR of 10−5and 100 warmup steps are used. The batch size is 64 and the sequence length is 4096. Additionally, a weight decay of 10−2and dropout are used. All 7B parameter models are trained with full pa- rameter fine-tuning on 4xA100 80GB GPUs. One training run takes approximately 4 hours, so in to- tal 64 GPU hours were needed to train the models reported in this paper. F CPR’s RAG Dataset Evaluation To further evaluate the generalization of Cli- mateGPT 7B Faithful+ to other datasets, we tested it on a set of question–passage pairs published by the Climate Policy Radar team (Juhasz et al., 2024). This dataset contains 1,013 examples, with the re- trieved passages taken from Climate Policy Radar’s internal database. We generated responses using both ClimateGPT 7B and ClimateGPT 7B Faith- ful+, and evaluated their faithfulness to the pro- vided reference passages using our RAGAs-based metric. On this dataset, ClimateGPT 7B Faithful+ achieved a claim support of 58%, substantially out- performing the base ClimateGPT 7B model, which achieved 44%. These results demonstrate that the improvements made in the refined model general- ize effectively to other climate-domain datasets. In addition, Juhasz et al. (2024) also collected expert annotations for model outputs from GPT- 4o, GPT-3.5, Gemini 1.0 and 1.5, and Mistral 7B v0.2. Each response was evaluated for faithfulness using a definition closely aligned with ours. Ex- pert annotators labeled responses | https://arxiv.org/abs/2505.15633v1 |
as either faithful (58.9%), not faithful (9.6%), not applicable (28%), or don’t know (3.5%). We used this data to evalu- ate how well our RAGAs-based metric aligns with human judgments. For the analysis, we focused only on examples that were labeled as either faith- ful or not faithful, excluding cases where the model refused to answer. This resulted in a total of 1,367 samples. To convert the claim support from our metric into a binary label for each example, we classify an output as faithful if the claim support exceeds 50%. On this test set, our metric achieved an overall agreement of 86.7% with the human an- notations. However, accuracy varied between label categories: it reached 93.7% for human-labeled 5https://github.com/epfLLM/Megatron-LLM 6https://github.com/nvidia/megatron-lm #Tokens #Parameters Avg. Claim Support wrt. Model in Trillion in Billion RAG #Claims Ref. [%] KB [%] GPT-4o n/a n/a - 17.4 33 68 ✓ 16.2 72 74 LLama 3.1 Instruct 15 8 - 22.7 24 59 ✓ 17.3 67 72 70 - 21.8 25 60 ✓ 16.1 70 74 LLama 2 Chat 2 7 - 23.3 25 60 ✓ 21.2 48 65 70 - 25.1 24 60 ✓ 21.6 54 68 ClimateGPT 2 7 - 21.6 25 59 ✓ 21.1 30 61 70 - 21.8 27 61 ✓ 22.2 30 62 ClimateGPT Faithful+ (ours) 2 7 - 20.2 27 57 ✓ 19.2 57 69 Table 4: Results of all tested models for claim support wrt. the reference, as a metric of faithfulness, and wrt. the knowledge base (KB) as a metric for factuality for different large language models with and without RAG. The best values are in bold and the second best values underlined. Listing 1: Prompt template used for Claim Extraction adapted from RAGAs. Given a question , an answer , and sentences from the answer , analyze the complexity of each sentence and break it down into one or more fully understandable statements . Ensure that no pronouns are used in each statement and that every claim is explicit and self - contained . Format the output as a structured JSON response . EXAMPLE Question : Who was Albert Einstein and what is he best known for ? Answer : He was a German - born theoretical physicist , widely acknowledged to be one of the greatest and most influential physicists of all time . He was best known for developing the theory of relativity . He also made important contributions to the development of quantum mechanics . Statements : { " statements ": [ " Albert Einstein was a German - born theoretical physicist .", " Albert Einstein is recognized as one of the greatest and most influential physicists of all time .", " Albert Einstein was best known for developing the theory of relativity .", " Albert Einstein also made important contributions to the development of quantum mechanics ." ] } YOUR TURN Question : {{ question }} Answer : {{ sentences }} Statements : Listing 2: Prompt template used for Claim Verification adapted from RAGAs. Your task is to judge the faithfulness of a series | https://arxiv.org/abs/2505.15633v1 |
of claims based on a given context . For each claim you must return verdict as 1 if the claim can be directly inferred based on the context or 0 if the claim can not be directly inferred based on the context . EXAMPLE 1: Context : John is a student at XYZ University . He is pursuing a degree in Computer Science . He is enrolled in several courses this semester , including Data Structures , Algorithms , and Database Management . John is a diligent student and spends a significant amount of time studying and completing assignments . He often stays late in the library to work on his projects . Claims : 1. John is majoring in Biology . 2. John is taking a course on Artificial Intelligence . 3. John is a dedicated student . 4. John has a part - time job . Analysis : {" analysis ": [ { " claim ": " John is majoring in Biology .", " reason ": "John 's major is explicitly mentioned as Computer Science . There is no information suggesting he is majoring in Biology .", " verdict ": 0 }, { " claim ": " John is taking a course on Artificial Intelligence .", " reason ": " The context mentions the courses John is currently enrolled in , and Artificial Intelligence is not mentioned . Therefore , it cannot be deduced that John is taking a course on AI .", " verdict ": 0 }, { " claim ": " John is a dedicated student .", " reason ": " The context states that he spends a significant amount of time studying and completing assignments . Additionally , it mentions that he often stays late in the library to work on his projects , which implies dedication .", " verdict ": 1 }, { " claim ": " John has a part - time job .", " reason ": " There is no information given in the context about John having a part - time job .", " verdict ": 0 } ]} EXAMPLE 2: Context : Photosynthesis is a process used by plants , algae , and certain bacteria to convert light energy into chemical energy . Claims : 1. Albert Einstein was a genius . Analysis : {" analysis ": [ { " claim ": " Albert Einstein was a genius .", " reason ": " The context and claim are unrelated .", " verdict ": 0 } ]} YOUR TURN : Context : {{ context }} Claims : {{ claims }} Analysis : Source # Docs # 512 Chunks IPCC Reports 16 17,897 Potsdam Papers 390 8,539 Earth4All 14 235 Other 336 8,648 Total 756 35,319 Table 5: Statistics of the different data sources of the ClimateGPT knowledge base. faithful responses, but only 29.5% for not faithful ones. Notably, Juhasz et al. (2024) themself ac- knowledged that their annotations were sometimes “too noisy along the faithfulness dimension”. In addition, limited spot-checking on our part more frequently agreed with our metric’s assessments than with the | https://arxiv.org/abs/2505.15633v1 |
human annotations. G Evaluation with LettuceDetect In addition to our primary faithfulness evaluation using RAGAs, we include results using LettuceDe- tect (Ádám Kovács and Recski, 2025), a recent hallucination detection framework designed for RAG systems. LettuceDetect is a token-level clas- sifier based on ModernBERT (Warner et al., 2024), trained on the RAGTruth dataset (Niu et al., 2024) to identify hallucinated spans in LLM responses given the input question and context. As LettuceDe- tect’s definition of hallucination closely aligns with our notion of faithfulness, we use it to validate the results obtained with RAGAs. For our evaluation, we convert LettuceDetect’s span-level predictions into a binary faithfulness score by marking a generation as faithful if no hallucinated spans are detected. Results, using the lettucedetect-large-v1 variant of the model, are reported in Table 6. The LettuceDetect results support the conclu- sions drawn from our RAGAs-based evaluation. Without RAG, both ClimateGPT and ClimateGPT Faithful+ achieve low scores (6% and 2% respec- tively), providing a baseline for how often gener- ations align with the reference context by chance. With RAG, ClimateGPT Faithful+ shows a substan- tial improvement, reaching 34% hallucination-free responses compared to only 6% for the original Cli- mateGPT. This underpins the claim support results obtained with RAGAs (57% vs. 30%), reinforcing the effectiveness of our instruction fine-tuning strat- egy in improving the model’s ability to ground itsListing 4: Climate-FEVER example where our claim verifier disagrees with the gold label. Claim : Global warming is driving polar bears toward extinction Evidence : [ Global Warming ] Environmental impacts include the extinction or relocation of many species as their ecosystems change , most immediately the environments of coral reefs , mountains , and the Arctic . Gold label : supported Predicted label : not_supported generations in the retrieved context. H ClimateGPT IFT Example Output Figure 2 shows an example output of ClimateGPT 7B and the ClimateGPT 7B Faithful+ model pro- posed in this work. While both responses address the question in the prompt and are generally cor- rect, the refined model’s response demonstrates a significantly closer alignment with the reference passages. I Climate FEVER Claim Verification To evaluate our claim verification method, we ap- plied it to the Climate-FEVER dataset (Diggel- mann et al., 2021). The dataset consists of 1,535 claims, each paired with 5 corresponding evidence passages. Each claim-evidence pair is annotated by at least two annotators as either supported, refuted, disputed, or lacking sufficient information. For our analysis, we classify refuted and lacking suf- ficient information as not supported, and exclude all pairs labelled as disputed. In addition, we only include examples where all annotators agree on the label, resulting in a total of 1,146 claims and 3,348 claim-evidence pairs. On this subset, the RAGAs-based claim verifier achieves an overall accuracy of 67.1%. For pairs with the gold label “not supported”, the accuracy is 99.7%, while for supported pairs, it drops to 20.6%. Qualitatively, we observe that our claim verifier is relatively strict, requiring the claim to be explicitly stated in the ev- idence. An example of this is given in Listing 4. In this | https://arxiv.org/abs/2505.15633v1 |
instance, the evidence does not explicitly state that polar bears are one of the affected species. However, it could be argued that it is plausible to infer that polar bears are included among the “many species” mentioned in the evidence. #Tokens #Parameters Hallucination-Free Responses [%] Model in Trillion in Billion w/o RAG w/ RAG GPT-4o n/a n/a 9 73 LLaMA 3.1 Instruct 15 8 4 65 70 2 65 LLaMA 2 Chat 2 7 2 18 70 2 29 ClimateGPT 2 7 4 6 70 4 4 ClimateGPT Faithful+ (ours) 2 7 2 34 Table 6: Percentage of hallucination-free responses according to lettucedetect-large-v1 (Ádám Kovács and Recski, 2025) for various models with and without RAG. The best values are in bold and the second best values underlined. Prompt How do ocean ecosystems, ranging from coral reefs to the deep sea, support biodiversity and provide essential services? Answer in a single detailed paragraph. Context [0] "Understanding the Effectiveness of Coastal Nature-based Solutions: Practitioner-based Learning", 2023 despite the high cost and failure rate. oyster reefs are effective for wave attenuation, and research suggests that focusing on positive species interactions can provide a framework for restoration. biodiversity enhancement supports a shellfish reef’s ability to provide ecosystem services. ecosystem services research highlights that healthy ecosystems provide high - quality services, while stressed ecosystems produce degraded services and may harm human well - being. most ecosystem services are supported by biodiversity. biodiversity in coastal systems contributes to providing ecosystem services such as fish habitat, nutrient cycling, and various cultural services. additionally, biodiversity may play a role in the [1] "The Role of Blue Carbon in Climate Change Mitigation and Carbon Stock Conservation", 2021 is an essential to sustainably manage and develop marine resources to their maximum potential. environmental measures should tackle both terrestrial and marine ecosystems, with one as a continuum of the other. coral reef restoration can increase coastal resilience to sea level rise and flooding and provide valuable environmental services for local populations. water pollution in rivers contributes to ocean ecosystem degradation, via eutrophication and the formation of [2] "The Ocean and Cryosphere in a Changing Climate", 2020 level of risk than the high emission scenario. changing marine ecosystem services and human well - being ecosystem services are environmental processes and functions that provide benefits to people and support human well - being. marine ecosystem services are generated throughout the ocean, from shallow water to the deep sea. these services can be divided into provisioning services, regulating services, supporting services, and cultural services. they are also described as’nature’s contribution to people ’. changes in key ecosystem [3] "Climate Change 2022: Impacts,Adaptation and Vulnerability", 2022 logical shifts and trophic mismatches, changes in communities and biodiversity, time of emergence of climate - impact signals in ecological systems from background natural variability, and changes in biomass, primary productivity, and carbon export. coastal ecosystems and seas warm - water coral reefs warm - water coral reef ecosystems, which house a significant portion of marine biodiversity, provide food, income, and shoreline protection to coastal communities. these ecosystems are threatened by | https://arxiv.org/abs/2505.15633v1 |
climate - induced and non - climate drivers, including ocean warming, marine heatwaves, ocean acidification, sea - level rise, tropical cyclones [4] "Understanding the Effectiveness of Coastal Nature-based Solutions: Practitioner-based Learning", 2023 research indicates that biodiversity plays a crucial role in the functioning of ecosystem services provided by habitats. recent studies have focused on the climate mitigation capacity of forests in the context of biodiversity and nature - based solutions ( nbs ). challenges arise in prioritizing monoculture plantations for carbon storage over intact and diverse ecosystems. the identity and richness of species, as well as the concept of’niche complementarity ’, contribute to boosting ecosystem function and their services. significant restoration efforts are underway for shellfish reefs, despite the high cost and failure rate. oyster reefs" ClimateGPT 7B (Faithfulness: 19%, Factuality: 67%) Ocean ecosystems, encompassing a diverse range from coral reefs to the deep sea, demonstrate unparalleled biodiversity and provide a multitude of essential services. In the coral reef ecosystem, vibrant arrays of colorful fish and intricate coral structures serve as vital breeding grounds and habitats for countless marine organisms. These ecosystems act as natural breakwaters, reducing coastal erosion and protecting shorelines. Additionally, they store vast amounts of carbon, contributing to climate change mitigation. Moving out from the coast, the deep sea presents a different world, characterized by complete darkness, extreme pressure, and cold temperatures. Despite the harsh conditions, it sustains unique species adapted to these environments, many of which are still undiscovered. The deep sea plays a crucial role in regulating climate and weather patterns, driven by its immense biological pump, which captures carbon from the surface and transports it to the deep oceans. In essence, ocean ecosystems are not only a source of aesthetic beauty and recreation but also linchpins of the Earth’s ecological balance and a treasure trove of biodiversity. ClimateGPT 7B Faithful+ (Faithfulness: 92%, Factuality: 96%) Ocean ecosystems, including coral reefs and the deep sea, play a vital role in biodiversity support and providing essential services. They provide provisional services, regulating services, supporting services, and cultural services. Coral reefs contribute to coastal resilience to sea level rise and flooding and provide valuable environmental services for local populations. However, they are threatened by climate-induced and non-climate drivers, impacting coastal ecosystems and warm-water coral reefs. These ecosystems provide food, income, and shoreline protection to coastal communities. Restoration efforts, such as oyster reefs, are essential for wave attenuation, but their failure and high cost are challenges. Recent research has focused on the climate mitigation capacity of forests in the context of biodiversity and nature-based solutions. Prioritizing monoculture plantations for carbon storage over intact and diverse ecosystems is a challenge. The identity and richness of species and the concept of niche complementarity contribute to boosting ecosystem function and services. Figure 2: Example comparing the outputs of ClimateGPT and ClimateGPT 7B Faithful+ on one example of the held-out test set. Parts marked in red correspond to claim that are not supported in the given context according to our automatic evaluation. 
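To make the claim-support numbers in Tables 1 and 4 and the faithfulness percentages in the figure captions concrete, here is a minimal sketch of the evaluation loop from Section 2.1 and Appendix F: decompose a response into claims, verify each claim against the retrieved context, and report the fraction of supported claims, with the >50% binarization used for the agreement analysis. The `call_llm` client and the simple `str.format` templating are assumptions for illustration; the actual pipeline uses the RAGAs-derived prompts in Listings 1 and 2 with GPT-4o.

```python
# Hedged sketch of the claim-support metric: claim decomposition (Listing 1),
# claim verification against the retrieved context (Listing 2), and aggregation
# as the percentage of supported claims. `call_llm` is a placeholder for any
# chat-completion client; prompt templating is simplified for illustration.
import json
from typing import Callable, List

def claim_support(question: str, response: str, context: str,
                  call_llm: Callable[[str], str],
                  extraction_prompt: str, verification_prompt: str) -> float:
    # Step 1: decompose the response into self-contained claims.
    raw = call_llm(extraction_prompt.format(question=question, sentences=response))
    claims: List[str] = json.loads(raw)["statements"]
    if not claims:
        return 0.0
    # Steps 2-3: verify every claim against the concatenated evidence passages.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(claims))
    analysis = json.loads(call_llm(
        verification_prompt.format(context=context, claims=numbered)))["analysis"]
    supported = sum(item["verdict"] for item in analysis)   # verdicts are 0 or 1
    return supported / len(claims)

def is_faithful(support: float, threshold: float = 0.5) -> bool:
    # Appendix F: a response counts as faithful if claim support exceeds 50%.
    return support > threshold
```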
Prompt What is the purpose of the Global Stocktake? Context [0] Recalling Article | https://arxiv.org/abs/2505.15633v1 |
14 of the Paris Agreement, the objective of the Global Stocktake is to assess the collective progress towards achieving the purpose of the Agreement and its long-term goals, considering mitigation, adaptation and the means of implementation and support, in light of equity and the best available science. The outcome of the global stocktake shall inform Parties in updating and enhancing, in a nationally determined manner, their actions and support in accordance with the relevant provisions of this Agreement, as well as in enhancing international cooperation for climate action. The GST two-year process, begins with an information collection and preparation component, followed by a technical assessment component and culminates in a consideration of outputs component, which includes the outcomes of the GST. The SBSTA and the SBI invited submissions on views on the approach to the consideration of outputs component of the first global stocktake. This submission responds to that invitation and sets out to identify important characteristics for the outputs phase and the shaping of the outcomes of the GST. [1]·Paragraph 34 of 19/CMA.1 decides that the outputs of this component of the global stocktake should: a) Identify opportunities for and challenges in enhancing action and support for collective progress in relation to the thematic areas of the global stocktake referred to in paragraph 6(b) above, as well as possible measures and good practices and international cooperation and related good practices; [2] The SBSTA and the SBI invited submissions on views on the approach to the consideration of outputs component of the first global stocktake. This submission responds to that invitation and sets out to identify important characteristics for the outputs phase and the shaping of the outcomes of the GST. Approach to the Consideration of Outputs: ClimateGPT 7B The Global Stocktake is a key component of the Paris Agreement, scheduled to begin in 2023. It involves a comprehensive assessment of global progress in addressing climate change, considering scientific evidence, economic implications, and societal impacts. The Stocktake aims to inform the updates to countries’ climate commitments, known as “nationally determined contributions” (NDCs), to ensure they are in line with the agreement’s objectives and targets. This process seeks to enhance international cooperation and commitment to mitigating climate change on a global scale. ClimateGPT 7B Faithful+ The Global Stocktake’s purpose, as outlined in Article 14 of the Paris Agreement, is to assess the collective progress made towards achieving the Agreement’s purpose and long-term goals, which encompass mitigation, adaptation, and means of implementation. This assessment is conducted considering equity and the best available science. The Global Stocktake process spans two years, beginning with information collection and preparation and culminating in a consideration of outputs. The outputs of the latter component are decided upon in paragraph 34 of 19/CMA.1, which states that they should identify opportunities for enhancing action and support for collective progress in relation to thematic areas of the global stocktake and possible measures and good practices. Figure 3: Example comparing the outputs of ClimateGPT and ClimateGPT 7B Faithful+ on one example from the Climate Policy Radar data. Parts marked in red correspond to | https://arxiv.org/abs/2505.15633v1 |
arXiv:2505.15634v2 [cs.CL] 24 May 2025Feature Extraction and Steering for Enhanced Chain-of-Thought Reasoning in Language Models Zihao Li1∗Xu Wang1Yuzhe Yang2Ziyu Yao3Haoyi Xiong4Mengnan Du1 1New Jersey Institute of Technology2University of California, Santa Barbara 3George Mason University4Baidu Inc. lizihao9885@gmail.com, mengnan.du@njit.edu Abstract Large Language Models (LLMs) demonstrate the ability to solve reasoning and mathematical problems using the Chain-of-Thought (CoT) technique. Expanding CoT length, as seen in models such as DeepSeek-R1, significantly en- hances this reasoning for complex problems, but requires costly and high-quality long CoT data and fine-tuning. This work, inspired by the deep thinking paradigm of DeepSeek-R1, uti- lizes a steering technique to enhance the reason- ing ability of an LLM without external datasets. Our method first employs Sparse Autoencoders (SAEs) to extract interpretable features from vanilla CoT. These features are then used to steer the LLM’s internal states during gener- ation. Recognizing that many LLMs do not have corresponding pre-trained SAEs, we fur- ther introduce a novel SAE-free steering algo- rithm, which directly computes steering direc- tions from the residual activations of an LLM, obviating the need for an explicit SAE. Ex- perimental results demonstrate that both our SAE-based and subsequent SAE-free steering algorithms significantly enhance the reasoning capabilities of LLMs. 1 Introduction Large Language Models (LLMs), such as GPT- 4 (Achiam et al., 2023) and Claude3, have achieved remarkable success in natural language process- ing due to their large parameter size. They tackle logical reasoning and mathematical problems by employing CoT prompting (Wei et al., 2023) to enable step-by-step reasoning. Recent models such as DeepSeek-R1 (Guo et al., 2025), OpenAI’s o1 series (Jaech et al., 2024), Qwen2.5 (Yang et al., 2024), and Kimi1.5 (Team et al., 2025) amplify this by expanding CoT length, incorporating more reasoning steps, self-correction, and backtracking compared to vanilla CoT. This enhancement is typi- cally achieved through distillation (e.g., DeepSeek- ∗The first two authors are interns at NJIT.R1-distill-LlaMa3-8B (Guo et al., 2025)) or post- training (Zeng et al., 2025; Song et al., 2025). How- ever, these methods for generating long CoT suffer from significant limitations. They need expensive, high-quality long CoT datasets which are difficult to obtain, and the process often overlooks the nu- anced internal states of LLMs. Moreover, the ver- bosity of long CoT can introduce noisy features that do not contribute to, and may even detract from, the core reasoning process. Concurrently, modulating LLM hidden states has proven effective for tasks such as knowledge editing (Meng et al., 2022) and improving truthful- ness (Marks and Tegmark, 2023; Li et al., 2023). Prior to our work, studies by Tang et al. (2025), Sun et al. (2025), and Højer et al. (2025) have demon- strated that steering activations or editing weights can enhance the reasoning ability in smaller LLMs. For instance, Tang et al. (2025) relies on contrast- ing representations (from long vs. short CoT) to derive steering vectors. While promising, these ex- isting steering approaches may still depend on the generation of contrastive data or may not fully ex- ploit the fine-grained feature distinctions within a single, standard reasoning trace. Furthermore, they typically do not offer a | https://arxiv.org/abs/2505.15634v2 |
systematic way to disentangle task-relevant reasoning signals from stylistic or superficial features present in the CoT.
This paper introduces a novel framework that addresses these limitations by enabling precise steering of LLM residual activations using features extracted exclusively from readily available vanilla CoT, thereby bypassing the need for expensive long CoT datasets, contrastive examples, or model retraining. Inspired by the "deep thinking" mode of reasoning models such as DeepSeek-R1, which suggests that crucial reasoning components are latently present even in standard CoT, our techniques aim to identify and amplify these capabilities.
First, the core of our proposal is a Sparse Autoencoder (SAE)-based steering mechanism that uniquely targets reasoning-specific features.
[Figure 1: Main framework of the SAE-based steering and SAE-free algorithms: a problem is decomposed into a verbal CoT and a symbolic CoT, their layer activations are passed through an SAE encoder/decoder, and a subtraction step finds a direction u that maximizes the absolute projection.]
We employ SAEs (Cunningham et al., 2023; Shu et al., 2025) to decompose the residual activations from vanilla CoT into interpretable features. To enhance feature selectivity and mitigate the influence of noise, we introduce a distinctive approach named VS Decomposition: the CoT is decomposed into a Verbal process (natural-language articulation) and a Symbolic process (formal, rule-based logic), drawing inspiration from existing work such as Pan et al. (2023). By computing the absolute difference between SAE feature activations from these paired processes, we isolate features strongly indicative of either the verbal or the symbolic reasoning facet while effectively suppressing common, non-informative features (e.g., punctuation, sentence connectors). These distilled features then guide the steering of the LLM's internal representations, promoting more focused and robust reasoning.
Additionally, although SAE-based steering performs remarkably well at enhancing LLMs' reasoning, many LLMs still do not have associated SAEs, such as the Qwen series (Yang et al., 2025). To address this issue, we design an SAE-free steering algorithm that modulates the residual activations even without an associated SAE, and it achieves significant improvements on four mathematical reasoning datasets.
The contributions of this paper are the following:
•We propose a framework named VS Decomposition that uses an SAE to extract features from vanilla CoT. It decomposes the reasoning process into two sub-processes, a verbal process and a symbolic process, enabling us to extract reasoning features while suppressing noise features.
•We propose to steer the model by modulating its hidden activations with the features obtained in the step above.
•We propose an algorithm to steer LLMs without relying on an SAE, which makes our steering method more widely applicable and achieves significant improvements on four mathematical reasoning datasets.
2 SAE-based Steering
2.1 Preliminary on SAE
A Sparse Autoencoder (SAE) provides a method to interpret the hidden states of an LLM.
It is trained to reconstruct hidden representations: given a hidden activation $h \in \mathbb{R}^{d_{\mathrm{model}}}$, the SAE encodes it into a sparse, high-dimensional representation $\alpha \in \mathbb{R}^{d_{\mathrm{SAE}}}$ and then decodes it to reconstruct $h$. The activation values in $\alpha$ are regarded as features. These features can be explained by methods such as vocabulary projection (nostalgebraist, 2020) and visualization (McDougall, 2024), with the help of libraries such as neuronpedia (https://www.neuronpedia.org/). | https://arxiv.org/abs/2505.15634v2 |
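To make the encode/decode step concrete, here is a minimal sketch of a ReLU SAE of the kind described above; the class name and layer sizes are illustrative assumptions, and released SAEs (e.g., Gemma Scope, Llama Scope) use additional details such as tied biases and normalization that are omitted here.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal ReLU SAE: h -> alpha (sparse features) -> reconstruction of h."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Linear(d_model, d_sae)  # encoder
        self.W_dec = nn.Linear(d_sae, d_model)  # decoder; column t of its weight is the feature direction z_t

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # alpha holds the per-feature activation values referred to in the text
        return torch.relu(self.W_enc(h))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.W_dec(self.encode(h))  # h_hat, trained to match h

# usage sketch: sae = SparseAutoencoder(d_model=4096, d_sae=65536); alpha = sae.encode(hidden_state)
```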
2.2 Our Motivation
Previous methods often obtained useful features or directions from the representation differences between long and short CoT (Tang et al., 2025; Sun et al., 2025). However, this approach has flaws that we cannot ignore. First, since long CoT reasoning is costly, high-quality long CoT datasets such as OpenThoughts (Team, 2025) are difficult to obtain. Second, compared with vanilla CoT, long CoT contains more reasoning features but also brings a large number of noisy features such as "punctuation marks" or "sentence breaks". For these reasons, we want a method that extracts as many useful features as possible while avoiding noisy features.
[Figure 2: Top 10 activations for DeepSeek-Llama3-8B and Gemma-2-9b-it.]
Recent work (Pan et al., 2023; Quan et al., 2025) argues that CoT may merely imitate human reasoning because it lacks a mechanism that guarantees the faithfulness of the reasoning. In contrast, symbolic reasoning ensures inherent interpretability and verifiability through its reliance on well-defined logical principles that adhere rigorously to formal deductive systems; this highlights the risk of unfaithful reasoning when relying on common CoT alone. Inspired by this claim, we propose VS Decomposition, which decomposes the CoT of a reasoning problem into two sub-processes: a Verbal Process and a Symbolic Process. The former provides reasoning features, while the latter provides formal features. By integrating features from both, we can extract useful features while suppressing noisy ones.
2.3 VS Decomposition Framework
We propose a novel framework for extracting features without relying on external datasets. It aims to maximize the extraction of reasoning-relevant features while simultaneously minimizing the effect of noisy features.
Separating Verbal and Symbolic Process. For reasoning tasks like GSM8K (Cobbe et al., 2021), the first step is to generate the two sub-processes: a Verbal Reasoning Process and a Symbolic Reasoning Process. For example, the problem "Express $5^5 \div 5^4 - 5^3 + 5^2 \cdot 5^1$ as an integer" can be divided as follows. Verbal Reasoning Process: "First, simplify each term with the same base, starting with the division and multiplication. Next, perform the subtraction and addition operations to get the final integer result." Symbolic Reasoning Process: "$5^5 \div 5^4 = 5^1 = 5$, $5^3 = 125$, $5^2 \cdot 5^1 = 5^{2+1} = 5^3 = 125$, $5 - 125 + 125 = 5$. The integer is 5." | https://arxiv.org/abs/2505.15634v2 |
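As an illustration of this decomposition step, a hedged sketch is shown below; `call_llm` is a hypothetical stand-in for whatever chat-completion client is used (the paper queries the GPT-4.1 API), and the prompts are abridged from Appendix H.

```python
# Hypothetical sketch of the VS Decomposition generation step.
VERBAL_SYSTEM = (
    "Give the general idea of the following problem. Only state the steps, "
    "including reasoning words such as 'therefore', 'but', 'however', 'wait', "
    "but do not include any math-related symbols, formulas or calculations."
)
SYMBOLIC_SYSTEM = (
    "Give all the mathematical calculations for the following question. "
    "Do not include any text description or logical derivations."
)

def decompose(question: str, call_llm) -> tuple[str, str]:
    """Return (verbal_process, symbolic_process) for one question."""
    verbal = call_llm(system=VERBAL_SYSTEM, user=question)
    symbolic = call_llm(system=SYMBOLIC_SYSTEM, user=question)
    return verbal, symbolic
```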
SAE Feature Extraction. For each question $q$, we leverage the GPT-4.1 API (https://openai.com/index/gpt-4-1/) to generate a Verbal Reasoning Process $x$ and a Symbolic Reasoning Process $y$ in the manner illustrated above. We then input both sub-processes into the LLM to extract residual activations $r^x_i$ and $r^y_j$ at layer $l$, where $i$ and $j$ index the token positions of $x$ and $y$, and employ the SAE to extract features. In detail, if $X$ and $Y$ denote the sets of token positions of $x$ and $y$, the features for the Verbal Reasoning Process and the Symbolic Reasoning Process can be described as follows:
$$\alpha^x = \frac{1}{\|X\|}\sum_{i \in X} \mathrm{SAE}(r^x_i), \qquad \alpha^y = \frac{1}{\|Y\|}\sum_{j \in Y} \mathrm{SAE}(r^y_j), \quad (1)$$
where $\alpha^x \in \mathbb{R}^{d_{\mathrm{SAE}}}$ and $\alpha^y \in \mathbb{R}^{d_{\mathrm{SAE}}}$ are regarded as the feature activations of the Verbal Process and the Symbolic Process at layer $l$.
Integrating Features. To derive integrated features $\alpha$ from both reasoning processes, we could use vector addition: $\alpha_t = \|\alpha^x_t + \alpha^y_t\|$, where $\alpha_t$ is the value of $\alpha$ at index $t$ and $\|\cdot\|$ denotes the absolute value. This integrates information from both sub-processes, but it suffers from noise amplification, particularly from task-irrelevant features (e.g., punctuation and sentence boundaries). To address this, we observe that the activations of such task-irrelevant features are very close between the two processes, as shown in Figure 2. We therefore propose to compute the absolute difference between activations, which suppresses noisy features while preserving relevant ones. Specifically, the final CoT feature activation for question $q$ is $\alpha_t = \|\alpha^x_t - \alpha^y_t\|$.
| Model | Version | GSM8K | MATH-L3&L4 | MMLU-high | MathOAI |
|---|---|---|---|---|---|
| Llama3.1-8B-it | Original prompt | 60.33 | 21.46 | 24.81 | 30.20 |
| Llama3.1-8B-it | SAE-based-steering (ours) | 62.67 | 21.46 | 27.04 | 32.80 |
| DeepSeek-Llama3-8B | Original prompt | 82.67 | 78.11 | 72.96 | 78.60 |
| DeepSeek-Llama3-8B | SAE-based-steering (ours) | 85.67 | 79.83 | 76.30 | 83.40 |
| Gemma-2-9b-it | Original prompt | 88.00 | 51.07 | 51.11 | 60.20 |
| Gemma-2-9b-it | SAE-based-steering (ours) | 90.67 | 50.64 | 52.22 | 62.60 |
Table 1: Overall accuracy (%) of steering.
To better explain the motivation of our method, we derived the top-10 feature activations of the Verbal Reasoning Process and the Symbolic Reasoning Process on several representative examples; the result is shown in Figure 2. In each sub-figure, the feature indices fall into three groups: the left group contains features predominantly associated with verbal content ($\alpha^x_t \gg \alpha^y_t$), the middle group contains features balanced between both processes, and the right group contains features predominantly associated with symbolic content ($\alpha^x_t \ll \alpha^y_t$). For DeepSeek-Llama3-8B, middle-group features such as 19555 and 14602 correspond to "punctuation marks, specifically commas", which is unrelated to reasoning; feature 462 in the left group shows a large difference between the two processes and corresponds to "relationship statements involving variables"; right-group features such as 15593 correspond to "numeric expressions and mathematical components". For Gemma-2-9b-it the phenomenon is much the same: middle-group features 5012 and 14451 are unrelated to reasoning. These results match our initial assumption, meaning that subtraction better erases noisy features while integrating useful information.
In practice, we randomly sample $N = 100$ samples $\{q_p\}_{p=1}^{N}$ from the MATHOAI dataset (https://huggingface.co/datasets/heya5/math_oai) to generate the Verbal Reasoning Processes $\{x_p\}_{p=1}^{N}$ and the Symbolic Reasoning Processes $\{y_p\}_{p=1}^{N}$. We select this dataset because we want problems that are not too complicated (to avoid generating long CoTs) and not too simple (to avoid generating indistinguishable CoTs for the two processes). The final feature activation $\alpha_t$ derived from $\{q_p\}_{p=1}^{N}$ is
$$\alpha_t = \frac{1}{N}\sum_{p=1}^{N} \left\|\alpha^{x_p}_t - \alpha^{y_p}_t\right\| \quad (2)$$ | https://arxiv.org/abs/2505.15634v2 |
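Assuming the layer-$l$ residual activations for each verbal/symbolic pair have already been collected, a minimal sketch of the scoring in Eqs. (1)-(2) could look as follows; the helper names and tensor shapes are illustrative, and `sae.encode` refers to the SAE sketch above.

```python
import torch

def vs_feature_scores(sae, pairs, k: int = 10):
    """Rank SAE features by the averaged absolute difference of Eqs. (1)-(2).

    `pairs` is a list of (r_x, r_y) tensors holding layer-l residual activations
    for one question's verbal and symbolic traces, each shaped [num_tokens, d_model].
    """
    scores = None
    for r_x, r_y in pairs:
        alpha_x = sae.encode(r_x).mean(dim=0)      # Eq. (1), verbal process
        alpha_y = sae.encode(r_y).mean(dim=0)      # Eq. (1), symbolic process
        diff = (alpha_x - alpha_y).abs()           # suppresses features shared by both processes
        scores = diff if scores is None else scores + diff
    scores = scores / len(pairs)                   # Eq. (2): average over the N questions
    return scores, torch.topk(scores, k=k).indices # candidate reasoning features
```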
2.4 Steering
After obtaining $\{\alpha_t\}_{t=1}^{d_{\mathrm{SAE}}}$ from the last step, we sort them in descending order to get the top-k important SAE features. We leverage a certain feature among them for steering, which modulates the residual activations of the LLM to control its behavior. For a given feature indexed by $t$, inspired by Templeton et al. (2024); Farrell et al. (2024); Galichin et al. (2025), we propose to use the activation value $\alpha_t$ to modify the residual activation $r^h$ as follows:
$$\hat{r}^h_i = r^h_i + \lambda \alpha_t z_t, \quad i \in \mathcal{H} \quad (3)$$
The modification is made at the layer $l$ corresponding to the SAE version we choose, $\lambda$ is a strength hyperparameter, and $z_t = W_{\mathrm{dec}}[t,:]$ is the decoder direction of feature $t$.
3 Experiments of SAE-based Steering
3.1 Experimental Settings
Datasets. We evaluate the effect of our steering method on four mathematical benchmarks: GSM8K (Cobbe et al., 2021), MMLU-high school mathematics (MMLU-high) (Hendrycks et al., 2021), MATH-500 (Lightman et al., 2023) (https://huggingface.co/datasets/HuggingFaceH4/MATH-500), and MATHOAI (https://huggingface.co/datasets/heya5/math_oai). For the MATH-500 dataset, we select MATH-level3 and MATH-level4 for testing. For the GSM8K dataset, to find the best parameters and features more efficiently, we conduct our experiment on a subset of 300 randomly selected samples instead of the entire dataset. The baseline results are shown in Appendix Table 3; the subset results are roughly the same as those on the entire dataset, which ensures the randomly selected samples can be used to study the effect of steering.
| Model | Version | GSM8K | MATH-L3&L4 | MMLU-high | MathOAI |
|---|---|---|---|---|---|
| DeepSeek-Llama3-8B | Original prompt | 82.67 | 78.11 | 72.96 | 78.60 |
| DeepSeek-Llama3-8B | BoostStep (Zhang et al., 2025) | 79.33 | 77.25 | 71.11 | 78.80 |
| DeepSeek-Llama3-8B | MathNeuro (Christ et al., 2024) | 84.67 | 78.54 | 68.89 | 76.00 |
| DeepSeek-Llama3-8B | SAE-free-steering (ours) | 85.67 | 82.83 | 78.15 | 83.00 |
| DeepSeek-qwen-1.5B | Original prompt | 70.00 | 65.24 | 65.56 | 74.20 |
| DeepSeek-qwen-1.5B | BoostStep (Zhang et al., 2025) | 67.33 | 66.09 | 63.70 | 73.40 |
| DeepSeek-qwen-1.5B | MathNeuro (Christ et al., 2024) | 72.66 | 66.09 | 63.33 | 73.00 |
| DeepSeek-qwen-1.5B | SAE-free-steering (ours) | 74.00 | 69.10 | 66.67 | 77.00 |
Table 2: Performance comparison (accuracy %) between original LLMs and SAE-free steering versions.
[Figure 3: The generation length variation of the questions with the top 25% shortest answers generated by the original LLM.]
LLM Models and SAEs. We want to evaluate LLMs with proven thinking ability, but our method also relies on pre-trained SAEs, so we choose DeepSeek-Llama3-8B (Guo et al., 2025) and its base version Llama3.1-8B-it (Grattafiori et al., 2024); we also choose Gemma-2-9b-it (Team et al., 2024) as an evaluation LLM. For SAEs, we leverage llama_scope_r1_distill (l15r_800m_openr1_math, https://huggingface.co/fnlp/Llama-Scope-R1-Distill) to extract features from DeepSeek-Llama3-8B and Llama3.1-8B-it, and gemma-scope-9b-it-res (layer_9/width_16k/canonical) to extract features from Gemma-2-9b-it.
Implementation Details. For generation, we employed a repetition penalty of 1.2 and set distinct maximum sequence lengths for each dataset: 3000 tokens for GSM8K, 2500 for MATH-500, 3000 for MMLU-High, and 3500 for MATHOAI. For evaluation, we use an LLM as a judge (Phan et al., 2025) to score the accuracy of predictions, with the GPT-4.1 API as the judge. More details can be found in Appendix C.
3.2 Experimental Results Analysis
Comparing with Baselines. Table 1 reports the overall accuracy on the four mathematical reasoning datasets. Apart from MATH-L3&L4, there is a consistent improvement on the other three datasets.
Especially for DeepSeek-Llama3-8B , | https://arxiv.org/abs/2505.15634v2 |
steering improves the accuracy by 3.34% and 4.80% on MMLU-high and MathOAI. There is no significant improvement on MATH-L3&L4 for three of the LLMs; only DeepSeek-Llama3-8B achieves a 1.72% improvement, which suggests that SAE steering may play a bigger role in reasoning models than in instruction models. Through further manual reviewing, we found that for the non-reasoning LLM Llama3.1-8B-it, overly long generations lead the model to produce irrelevant information; steering may mitigate this by generating a thinking process related to the steering feature.
[Figure 4: Relationship between strength $\lambda$ and accuracy on GSM8K and MATHOAI.]
Steering Effects on Reasoning Length. We also compared, before and after steering, the length variation of the questions with the top 25% shortest answers generated by the original LLM. The result is shown in Figure 3. Our experimental results indicate a steady increase in reasoning length on all evaluated datasets, with Llama3.1-8B-it exhibiting a significant increase. On inspection, we found that the answers after steering contain more reasoning paths and deeper reasoning.
Relationship between Strength and Accuracy. Taking DeepSeek-Llama3-8B as an example, we plotted the relationship between steering strength and accuracy. From Figure 4 we can see that a higher strength does not always improve the accuracy of the model's responses: there is an optimal strength, and exceeding it causes the model's performance to decline. Manually checking the cases, we found that when the strength is too large the model generates many meaningless answers, possibly because an excessively large strength destroys the model's internal representations. | https://arxiv.org/abs/2505.15634v2 |
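As a concrete illustration of the intervention in Eq. (3) that these experiments evaluate, a sketch using a PyTorch forward hook is given below; the decoder-weight layout ([d_SAE, d_model] with rows as feature directions), the module path, and the example feature index (taken from the recommendations in Appendix C) are assumptions rather than the exact experimental code.

```python
import torch

def make_sae_steering_hook(W_dec: torch.Tensor, t: int, alpha_t: float, lam: float):
    """Forward hook implementing Eq. (3): r_hat = r + lam * alpha_t * z_t.

    For simplicity the shift is applied at every position, whereas Eq. (3)
    restricts it to the position set H.
    """
    z_t = W_dec[t]  # decoder direction of feature t, shape [d_model]

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + lam * alpha_t * z_t.to(hidden.dtype).to(hidden.device)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return hook

# usage sketch (layer index and attribute path depend on the model implementation):
# handle = model.model.layers[15].register_forward_hook(
#     make_sae_steering_hook(W_dec, t=24715, alpha_t=scores[24715].item(), lam=0.3))
# ... generate as usual, then handle.remove()
```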
4 SAE-Free Steering
4.1 Motivation
Feature-based steering has been proven useful in various areas of natural language processing, such as machine unlearning (Muhamed et al., 2025; Farrell et al., 2024), improving truthfulness (Marks and Tegmark, 2023), and interpretability (He et al., 2025; Kharlapenko et al.). Although some trained SAEs have been released to help interpret the hidden activations of LLMs, such as Gemma Scope (Lieberum et al., 2024) and Llama Scope (He et al., 2024), there are still many LLMs without matching SAEs, such as the Qwen series. Training an SAE is time-consuming and costly, so it is unrealistic to train one for every layer of every LLM, and without a corresponding SAE our method is hard to apply. To solve this problem, we propose an algorithm that generalizes the SAE-based steering method to LLMs without a corresponding SAE. Intuitively, we start by assuming the availability of an SAE for a given LLM and then derive the calculation to see whether the SAE components can be approximated by other quantities.
4.2 Framework
Feature activation for a virtual SAE. An SAE is trained to minimize the reconstruction loss for the hidden activation $h$ of layer $l$. Therefore, $h$ can be decomposed as a weighted sum of the feature vectors $\{z_t\}_{t=1}^{d_{\mathrm{SAE}}}$ plus an error term $\epsilon$, where $\alpha^h_t$ denotes the activation value of feature $t$ for $h$ at layer $l$:
$$h = \sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^h_t z_t + b + \epsilon \quad (4)$$
We now want to steer the model even when no corresponding SAE exists. In our method there are two types of CoT: the Verbal Process $x$ with $m$ tokens and the Symbolic Process $y$ with $n$ tokens. We use the mean layer-$l$ representations $\mathbf{x} = \frac{1}{m}\sum_{i=1}^{m} r^x_i$ and $\mathbf{y} = \frac{1}{n}\sum_{j=1}^{n} r^y_j$. According to Eq. (4), the layer-$l$ residual activations of $x$ and $y$ can be written as
$$\mathbf{x} = \frac{1}{m}\sum_{i=1}^{m} r^x_i = \frac{1}{m}\sum_{i=1}^{m}\Big(\sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^{r^x_i}_t z_t + b + \epsilon\Big) = \sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^x_t z_t + b + \frac{1}{m}\sum^{m} \epsilon \quad (5a)$$
$$\mathbf{y} = \frac{1}{n}\sum_{j=1}^{n} r^y_j = \frac{1}{n}\sum_{j=1}^{n}\Big(\sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^{r^y_j}_t z_t + b + \epsilon\Big) = \sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^y_t z_t + b + \frac{1}{n}\sum^{n} \epsilon \quad (5b)$$
[Figure 5: Attention mapping of DeepSeek-Llama3-8B and DeepSeek-qwen-1.5B.]
Here $\alpha^x_t = \frac{1}{m}\sum_{i=1}^{m} \alpha^{r^x_i}_t$ and $\alpha^y_t = \frac{1}{n}\sum_{j=1}^{n} \alpha^{r^y_j}_t$ represent the mean activation over all tokens of $x$ and $y$ for feature $t$ at layer $l$. In our SAE-based steering, the activation score of each feature $t$ is $\|\alpha^x_t - \alpha^y_t\|$, which can be expanded as
$$\mathbf{x} - \mathbf{y} = \sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^x_t z_t + b + \frac{1}{m}\sum^{m}\epsilon - \sum_{t=1}^{d_{\mathrm{SAE}}} \alpha^y_t z_t - b - \frac{1}{n}\sum^{n}\epsilon = \sum_{t=1}^{d_{\mathrm{SAE}}} z_t\big(\alpha^x_t - \alpha^y_t\big) + \Big(\frac{1}{m}\sum^{m}\epsilon - \frac{1}{n}\sum^{n}\epsilon\Big) \quad (6)$$
We can approximately assume that the error of each token is equal, so the error term can be removed. If an SAE exists, the next step is to sort the activation values $\|\alpha^x_t - \alpha^y_t\|$ in descending order to get the top-k features and then select one of them to steer the LLM. So we want to find the feature vector $z_t$ that makes this value as large as possible. More generally, given $N$ pairs $\{x_p, y_p\}_{p=1}^{N}$ and following Eq. (2), our goal is to maximize $\sum_{p=1}^{N}\|\alpha^{x_p}_t - \alpha^{y_p}_t\|$. To solve this, we make the following two assumptions.
•Sparsity Assumption. Since the features activated by an SAE are sparse, we generally only look for the top $K$ significant features (e.g., $K = 10, 100, 500$), so we prefer the activation values to be concentrated on these $K$ features, which again reflects the sparsity of the SAE. To verify this assumption, we quantitatively analyzed the relationship between the percentage of the norm restored after reconstruction and the number of features used for reconstruction in Figure 6: the norm reconstructed from the top-10 features already recovers more than 90%, and with $K = d_{\mathrm{model}}$ the reconstruction reaches around 99%. Using only the top-k features to reconstruct the original vector therefore loses little information, so we can assume $\sum_{t=1}^{d_{\mathrm{SAE}}} z_t(\alpha^{x_p}_t - \alpha^{y_p}_t) \approx \sum_{t=1}^{\hat{d}} z_t(\alpha^{x_p}_t - \alpha^{y_p}_t)$ with $\hat{d} < d_{\mathrm{model}} \ll d_{\mathrm{SAE}}$.
•Orthogonality Assumption. As evidenced by Figure 7, the feature vectors $z_t$ associated with the top-k features are approximately pairwise orthogonal, so we can assume that we are looking for $\hat{d}$ mutually orthogonal $z_t$.
Solution for the steering vector. Based on the above assumptions, we have $\|\alpha^{x_p}_t - \alpha^{y_p}_t\| = \|\langle \mathbf{x}_p - \mathbf{y}_p, z_t\rangle\|$ (see Appendix E). So if we let $A = (\mathbf{x}_1 - \mathbf{y}_1, \mathbf{x}_2 - \mathbf{y}_2, \ldots, \mathbf{x}_N - \mathbf{y}_N)$, the task becomes the following problem:
$$\max_{\|z_t\|=1} \sum_{p=1}^{N} \|\alpha^{x_p}_t - \alpha^{y_p}_t\| = \max_{\|z_t\|=1} \sum_{p=1}^{N} \|\langle \mathbf{x}_p - \mathbf{y}_p, z_t\rangle\| \propto \max_{\|z_t\|=1} \|A^{\top} z_t\|^2 = \max_{\|z_t\|=1} z_t^{\top} A A^{\top} z_t \quad (7)$$
The solving process can be found in Appendix E. The goal is to obtain the unit eigenvectors $\{u_t\}_{t=1}^{d_{\mathrm{model}}}$ of $AA^{\top}$ in descending order of eigenvalues and then select an eigenvector $u_t$ to steer the residual activations $r^h$ as follows:
$$\hat{r}^h_i = r^h_i + \lambda u_t, \quad i \in \mathcal{H} \quad (8)$$ | https://arxiv.org/abs/2505.15634v2 |
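A hedged sketch of the SAE-free computation in Eqs. (5)-(8) follows: the matrix A is built from mean residual differences of the verbal/symbolic pairs, and the steering directions are the leading eigenvectors of AA^T, obtained here via the SVD of A. The helper names and shapes are assumptions; the same additive hook as in the SAE-based case can then be reused with u_t in place of alpha_t * z_t.

```python
import torch

def sae_free_directions(pairs, top_k: int = 10) -> torch.Tensor:
    """Compute candidate steering directions without an SAE.

    `pairs` is a list of (r_x, r_y) layer-l residual activations for each
    question's verbal/symbolic traces, each shaped [num_tokens, d_model].
    Returns the top-k unit eigenvectors of A A^T, one per row.
    """
    cols = [r_x.mean(dim=0) - r_y.mean(dim=0) for r_x, r_y in pairs]  # x_p - y_p, Eqs. (5a)-(5b)
    A = torch.stack(cols, dim=1)                          # [d_model, N]
    U, S, _ = torch.linalg.svd(A, full_matrices=False)    # left singular vectors = eigenvectors of A A^T
    return U[:, :top_k].T                                 # each row is a candidate u_t from Eq. (7)

# usage sketch: u = sae_free_directions(pairs)[0]
# then steer with the same kind of forward hook as before, adding lam * u
# to the layer-l residual stream (Eq. (8)).
```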
5 Experiments of SAE-free Steering
5.1 Implementation Details
Experiment Setting. We use DeepSeek-Llama3-8B and DeepSeek-qwen-1.5B as our evaluation LLMs; there is no corresponding pretrained SAE for DeepSeek-qwen-1.5B. We take hidden states at layer 15 to obtain $r^x_i$ and $r^y_j$. The dataset and generation settings remain identical to those described in the preceding section.
Baselines. We use BoostStep (Zhang et al., 2025) and MathNeuro (Christ et al., 2024) as comparison baselines. BoostStep decomposes problem solving into many steps and, when generating each step, retrieves the most similar sub-steps from a bank of reasoning steps. MathNeuro finds the math-related neurons in the LLM and then scales their activations.
5.2 Comparing with Baselines
The results are shown in Table 2. Compared with the baselines, SAE-free steering yields a consistent improvement on each dataset, and on MATH-L3&L4 and MMLU-high it outperforms SAE-based steering. This suggests that the information extracted by SAE-free steering is more broadly applicable. The improved performance may be because SAEs are trained on massive datasets, potentially introducing noise into their feature representations, whereas our SAE-free approach derives its information directly from structured reasoning processes, yielding more targeted and reliable features.
5.3 Attention Analysis
We also visualize the effect of SAE-free steering through the attention distribution, measuring the change in attention before and after the intervention at the layer following the steering layer. As shown in Figure 5 (left: DeepSeek-qwen-1.5B, right: DeepSeek-Llama3-8B), the x-axis indices surrounded by yellow boxes mark tokens carrying mathematical content. The model pays more attention to these tokens after steering, which aligns well with our expectations and indicates that steering activations with these eigenvectors leads to a more reasonable attention allocation.
6 Related Work
Sparse Autoencoder. An SAE is trained to reconstruct intermediate activations (Ng et al., 2011; Karvonen et al., 2024; Marks et al., 2024). Various variants have been proposed to address the limitations of traditional SAEs, such as Gated SAE (Rajamanoharan et al., 2024a), TopK-SAE (Gao et al., 2024), and JumpReLU SAE (Rajamanoharan et al., 2024b). SAEs have proven useful in many LLM applications, such as knowledge editing (Zhao et al., 2024), machine unlearning (Muhamed et al., 2025; Farrell et al., 2024), improving safety (Wu et al., 2025b), and robustness (Wu et al., 2025a).
Representation Engineering. Representation engineering has emerged as an effective approach to enhance the interpretability and transparency of LLMs. Zou et al. (2023) | https://arxiv.org/abs/2505.15634v2 |
has summarized recent progress and applications of representation engi- neering in bias, fairness, model editing, and other fields. Li et al. (2024) has utilized this technique to quantify LLMs’ performance in different lan- guages. Li et al. (2023) reveals that the internal rep- resentations of attention heads within LLMs serve as reliable indicators of reasoning paths. Through probe-based interventions, they correct the inter- nal representation direction to improve the LLM’s performance. 7 Conclusions In this paper, we introduce a novel framework for enhancing reasoning abilities in LLMs through tar- geted feature extraction and steering of residual activations. By decomposing vanilla CoT reason- ing into verbal and symbolic processes, we identi- fied and amplified task-relevant reasoning features while suppressing noise, demonstrating consistent improvements across multiple mathematical rea- soning benchmarks. Additionally, our SAE-free steering algorithm extends these benefits to models without corresponding pre-trained SAEs by com- puting steering directions directly from residual activations, achieving comparable or even superior results to SAE-based methods in some scenarios. Limitations While our feature extraction and steering methods demonstrate promising results in enhancing chain- of-thought reasoning capabilities, there are several limitations. First, our evaluation focuses mainly on mathematical reasoning datasets (GSM8K, MATH- L3&L4, MMLU-high, and MathOAI). In future, we plan to extend our proposed steering techniques to more diverse reasoning domains such as logical reasoning, scientific problem-solving, or multi-step planning tasks that don’t primarily involve mathe- matics. Second, our current experiments are lim- ited to three model families (LLaMA, DeepSeek, and Gemma). We plan to apply our method to more diverse LLM architectures to verify the generaliz- ability of our approach. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 . Bryan R Christ, Zack Gottesman, Jonathan Kropko, and Thomas Hartvigsen. 2024. Math neurosurgery: Isolating language models’ math reasoning abili- ties using only forward passes. arXiv preprint arXiv:2410.16930 . Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems, 2021. URL https://arxiv. org/abs/2110.14168 , 9. Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. 2023. Sparse autoencoders find highly interpretable features in language models. arXiv preprint arXiv:2309.08600 . Eoin Farrell, Yeu-Tong Lau, and Arthur Conmy. 2024. Applying sparse autoencoders to unlearn knowledge in language models. arXiv preprint arXiv:2410.19278 . Andrey Galichin, Alexey Dontsov, Polina Druzhinina, Anton Razzhigaev, Oleg Y Rogov, Elena Tutubalina, and Ivan Oseledets. 2025. I have covered all the bases here: Interpreting reasoning features in large language models via sparse autoencoders. arXiv preprint arXiv:2503.18878 . Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. 2024. Scaling and evaluating sparse autoencoders. 
arXiv preprint arXiv:2406.04093 .Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The llama 3 herd of mod- els.arXiv preprint arXiv:2407.21783 . Daya Guo, Dejian Yang, | https://arxiv.org/abs/2505.15634v2 |
Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: In- centivizing reasoning capability in llms via reinforce- ment learning. arXiv preprint arXiv:2501.12948 . Zhengfu He, Wentao Shu, Xuyang Ge, Lingjie Chen, Junxuan Wang, Yunhua Zhou, Frances Liu, Qipeng Guo, Xuanjing Huang, Zuxuan Wu, et al. 2024. Llama scope: Extracting millions of features from llama-3.1-8b with sparse autoencoders. arXiv preprint arXiv:2410.20526 . Zirui He, Haiyan Zhao, Yiran Qiao, Fan Yang, Ali Payani, Jing Ma, and Mengnan Du. 2025. Saif: A sparse autoencoder framework for interpreting and steering instruction following of language models. arXiv preprint arXiv:2502.11356 . Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Stein- hardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Con- ference on Learning Representations (ICLR) . Bertram Højer, Oliver Jarvis, and Stefan Heinrich. 2025. Improving reasoning performance in large lan- guage models via representation engineering. arXiv preprint arXiv:2504.19483 . Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richard- son, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. 2024. Openai o1 system card. arXiv preprint arXiv:2412.16720 . Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, Claudio Mayrink Verdun, David Bau, and Samuel Marks. 2024. Measuring progress in dictionary learning for language model interpretability with board game models. Advances in Neural Information Processing Systems , 37:83091–83118. Dmitrii Kharlapenko, Stepan Shabalin, Fazl Barez, Neel Nanda, and Arthur Conmy. Scaling sparse feature circuits for studying in-context learning. Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023. Inference- time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems , 36:41451–41530. Zihao Li, Yucheng Shi, Zirui Liu, Fan Yang, Ninghao Liu, and Mengnan Du. 2024. Quantifying multilin- gual performance of large language models across languages. arXiv e-prints , pages arXiv–2404. Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah, and Neel Nanda. 2024. Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. arXiv preprint arXiv:2408.05147 . Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harri- son Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let’s verify step by step. In The Twelfth Inter- national Conference on Learning Representations . Samuel Marks, Can Rager, Eric J Michaud, Yonatan Be- linkov, David Bau, and Aaron Mueller. 2024. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. arXiv preprint arXiv:2403.19647 . Samuel Marks and Max Tegmark. 2023. The geometry of truth: Emergent linear structure in large language model representations of true/false datasets. arXiv preprint arXiv:2310.06824 . Callum McDougall. 2024. SAE Visualizer. https: //github.com/callummcdougall/sae_vis . Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associa- tions in gpt. Advances in neural information process- ing systems , 35:17359–17372. Aashiq Muhamed, Jacopo Bonato, Mona Diab, and Virginia Smith. 2025. 
Saes can improve unlearning: Dynamic sparse autoencoder guardrails for precision unlearning in llms. arXiv preprint arXiv:2504.08192 . Jerome L Myers, | https://arxiv.org/abs/2505.15634v2 |
Arnold D Well, and Robert F Lorch Jr. 2013. Research design and statistical analysis . Rout- ledge. Andrew Ng et al. 2011. Sparse autoencoder. CS294A Lecture notes , 72(2011):1–19. nostalgebraist. 2020. Logit lens. Accessed: 2024-06- 06. Liangming Pan, Alon Albalak, Xinyi Wang, and William Yang Wang. 2023. Logic-lm: Empower- ing large language models with symbolic solvers for faithful logical reasoning. arXiv preprint arXiv:2305.12295 . Long Phan, Alice Gatti, Ziwen Han, Nathaniel Li, Josephina Hu, Hugh Zhang, Chen Bo Calvin Zhang, Mohamed Shaaban, John Ling, Sean Shi, et al. 2025. Humanity’s last exam. arXiv preprint arXiv:2501.14249 . Xin Quan, Marco Valentino, Danilo S Carvalho, Dhairya Dalal, and André Freitas. 2025. Peirce: Unifying material and formal reasoning via llm- driven neuro-symbolic refinement. arXiv preprint arXiv:2504.04110 .Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, and Neel Nanda. 2024a. Improving dictionary learning with gated sparse autoencoders. arXiv preprint arXiv:2404.16014 . Senthooran Rajamanoharan, Tom Lieberum, Nicolas Sonnerat, Arthur Conmy, Vikrant Varma, János Kramár, and Neel Nanda. 2024b. Jumping ahead: Im- proving reconstruction fidelity with jumprelu sparse autoencoders. arXiv preprint arXiv:2407.14435 . Dong Shu, Xuansheng Wu, Haiyan Zhao, Daking Rai, Ziyu Yao, Ninghao Liu, and Mengnan Du. 2025. A survey on sparse autoencoders: Interpreting the inter- nal mechanisms of large language models. Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan, and Feng Zhang. 2025. Fastcurl: Curriculum reinforcement learning with progressive context extension for efficient training r1-like reason- ing models. arXiv preprint arXiv:2503.17287 . Chung-En Sun, Ge Yan, and Tsui-Wei Weng. 2025. Thinkedit: Interpretable weight editing to mitigate overly short thinking in reasoning models. arXiv preprint arXiv:2503.22048 . Xinyu Tang, Xiaolei Wang, Zhihao Lv, Yingqian Min, Wayne Xin Zhao, Binbin Hu, Ziqi Liu, and Zhiqiang Zhang. 2025. Unlocking general long chain-of- thought reasoning capabilities of large language mod- els via representation engineering. arXiv preprint arXiv:2503.11314 . Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati- raju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118 . Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. 2025. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 . OpenThoughts Team. 2025. Open Thoughts. https://open-thoughts.ai. Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, C. Daniel Freeman, Theodore R. Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. 2024. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Transformer Circuits Thread . Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023. Label words are anchors: An information flow perspective for understanding in-context learning. arXiv preprint arXiv:2305.14160 . 
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elic- its reasoning in large | https://arxiv.org/abs/2505.15634v2 |
language models. Xuansheng Wu, Wenhao Yu, Xiaoming Zhai, and Ning- hao Liu. 2025a. Self-regularization with latent space explanations for controllable llm-based classification. arXiv preprint arXiv:2502.14133 . Xuansheng Wu, Jiayi Yuan, Wenlin Yao, Xiaoming Zhai, and Ninghao Liu. 2025b. Interpreting and steering llms with mutual information-based expla- nations on sparse autoencoders. arXiv preprint arXiv:2502.15576 . Shaotian Yan, Chen Shen, Wenxiao Wang, Liang Xie, Junjie Liu, and Jieping Ye. 2025. Don’t take things out of context: Attention intervention for enhancing chain-of-thought reasoning in large language models. arXiv preprint arXiv:2503.11154 . An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayi- heng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xu- ancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. 2025. Qwen3 technical report. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2. 5 tech- nical report. arXiv preprint arXiv:2412.15115 . Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Ke- qing He, Zejun Ma, and Junxian He. 2025. Simplerl- zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892 . Beichen Zhang, Yuhong Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Haodong Duan, Yuhang Cao, Dahua Lin, and Jiaqi Wang. 2025. Booststep: Boost- ing mathematical capability of large language models via improved single-step reasoning. arXiv preprint arXiv:2501.03226 . Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, and Pasquale Minervini. 2024. Steer- ing knowledge selection behaviours in llms via sae- based representation engineering. arXiv preprint arXiv:2410.15999 .Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023. Representation engineering: A top- down approach to ai transparency. arXiv preprint arXiv:2310.01405 . Appendix A Baseline result of GSM8K We compare the accuracy result of GSM8k for the subset of 300 samples and the full set of 1319 sam- ples in the four baselines we used. The result is shown below, which ensures that the randomly se- lected data samples can roughly represent the entire dataset. Subset Full Set Llama3.1-8B-it 60.33 54.74 DeepSeek-Llama3-8B 82.67 82.26 DeepSeek-distill-Qwen-1.5B 70.00 70.20 Gemma-2-9b-it 88.00 87.79 Table 3: Accuracy of subset and full set of GSM8K in four LLMs B Feature Selection Details In the experiment of feature selection, we found that forllama_scope_r1_distill(l15r_800m _openr1_math) , the features obtained by αq t= 1 NPN k=1∥αxk t−αyk t∥contains many noise fea- tures, and we are very difficult to obtain useful | https://arxiv.org/abs/2505.15634v2 |
features from this equation. This may be related to the training corpus. This SAE was only trained on one corpus. We did not find that this phenomenon exists in other SAEs with richer training corpora such asgemma-scope-9b-it-res(layer_9/wid th_16k/canonical) andllama_scope_lxr_8x (l15r_8x) . It confirms our assumption. So we leverage αq t=1 N∥PN k=1(αxk t−αyk t)∥to replace the original equation to obtain the final features in llama_scope_r1_distill(l15r_800m_openr1_ math) . C Steering Parameter Recommendations For SAE steering, we found features with the fol- lowing indexes: 24715,20737,20236,14276,15593, and 17831 are suitable features for steering in ll ama_scope_r1_distill(l15r_800m_openr1_ma th). Features with index 13419,12085, and 9540 are suitable for gemma-scope-9b-it-res(layer _9/width_16k/canonical) . After experiments, it was found that large strength would cause the LLM’s generation to be disordered. Therefore, we recommend strength λ≤0.5.DSAE-Free Experiment Implementation Details For SAE-free steering, we only conduct experi- ments on the eigenvectors corresponding to the top-10 eigenvalues, and the strength λis still rec- ommended to be less than 0.5. For the Booststep baseline, most of our exper- imental settings are the same as the original set- tings. To speed up the generation efficiency, we set the maximum number of sub-steps to 10, the similarity retrieval threshold to 0.7, the maximum number of tokens in each sub-step to 200, and the loop will be exited when the answer contains the word "final answer" or the maximum number of steps is generated. For the Mathneuro baseline, we randomly sample 1000 samples of gsm8k.csv andrace.csv provided by the original paper to calculate the active neurons for math problems and non-math problems, and set the keep ratio to 15% and the activation multiple to 1.1. E Derivation of Formula 7 Based on the Sparsity Assumption, we have ˆd < dmodel, which means it’s feasible to find ˆdmutu- ally orthogonal unit vectors in dmodel-dimensional space. Then according to Orthogonality Assump- tion, we have ⟨zi,zj⟩= 0when i̸=j, so: ∥⟨xp−yp,zt⟩∥=∥⟨ˆdX i=1zi αxp i−αyp i ,zt⟩∥ =∥ αxp i−αyp i ⟨ˆdX i=1zi,zt⟩∥=∥ αxp t−αyp t ∥ (9) For problem 7, it is a Rayleigh quotient problem. We use the Lagrange multiplier method to solve the above problem. L(u, λ) =z⊤AA⊤z−λ(z⊤z−1) (10) Therefore ∂L ∂z= 2AA⊤z−2λz= 0,∂L ∂λ=z⊤z−1 = 0 So we have 2AA⊤z−2λz= 0 =⇒AA⊤z=λz=⇒max ∥z∥=1z⊤AA⊤z = max ∥z∥=1λz⊤z=λmax The first right aroow in the above equations means λandzare the eigenvalues and the eigenvectors forAAT. So the zwe want to derive from the formula should be the eigenvectors corresponding to the eigenvalues λsorted from largest to smallest. F Reconstruction Error of SAE We also experiment with the relationship between reconstruction error and the number of selected features in Gemma-2-9b-it . The result is shown in Figure 6, when k= 10 , the reconstruction per- centage exceeds 90%. This proves that our sparsity assumption is reasonable to some extent. 10100 5001000 2000 3000 3584 4000 Number of T op-k index80%85%90%95%100%105%Norm Reconstruction Percent Relationship between the reconstructed norm percentage and the number of top-k index 90% Percent Reconstruct 99% Percent Reconstruct Figure 6: Relationship between reconstruction error and the number of selected features(Gemma-2-9b-it) G Approximate Orthogonality of SAE feature vectors In order to | https://arxiv.org/abs/2505.15634v2 |
explore the geometric relationship be- tween top-k feature vectors of SAE, we calculated the cosine similarity matrix. A similarity of 0 indi- cates that the two are orthogonal. We utilize llama _scope_r1_distill(l15r_800m_openr1_math) as an example, the result is shown as Figure 7, we can find that for top-150 feature vectors, they are almost pairwise orthogonal. 0 20 40 60 80 100 120 140 T op-K Index0 20 40 60 80 100 120 140T op-K IndexCosine similarity of top features vectors 1.00 0.75 0.50 0.25 0.000.250.500.751.00 Figure 7: Cosine Similarity of Top-k feature vectorsH Generating Verbal CoT and Symbolic CoT We give a prompt example to instruct GPT-4.1 to generate Verbal CoT and Symbolic CoT that were used to extract features in Figure 8 and Figure 9. System : You are a helpful assistant who can follow this command. COMMAND: Give the general idea of the following problem.m Only state the steps, including reasoning words such as ’therefore’, ’but’, ’however’, ’wait’ as much as possible, but do not include any math- related symbols, formulas and calculations. User : The function fsatisfies the functional equation f(x) +f(y) =f(x+y)−xy−1 for all real numbers xandy.Iff(1) = 1 ,then find all integers n such that f(n) =n.Enter all such integers, separated by commas. System : Here’s a natural English explanation without mathematical symbols: **Step 1: Understanding the Problem** We’re dealing with a special function that follows a specific rule: when you take any two numbers, apply the function to each separately and add those results, it’s equiv- alent to applying the function to their sum first, then subtracting their product, and finally subtracting one. We’re also told that when we input one into this function, we get one as output. Our goal is to find all whole numbers where the function’s output equals the input itself. **Step 2: Making an Educated Guess About the Function** Problems like this typically involve polynomial functions. Since we see the product of two variables appearing in the rule, this strongly suggests our function is quadratic - meaning it likely involves terms where the variable is squared, along with other components. **Step 3: Setting Up the Relationship** We’ll propose a general form for our quadratic function with three unknown coefficients. By plugging this form into the original rule and carefully expanding both sides, we can rearrange terms to see how different components relate to each other. **Step 4: Comparing Components** After expansion, we’ll systemat- ically match up similar terms from both sides of the equation - those involving squares of variables, products of variables, single variables, and constant terms. This will give us equations to determine our un- known coefficients. **Step 5: Using the Given Information** We know specifically what happens when we input the number one into our function. This provides an additional concrete equation that will help pin down the exact values of our coefficients. **Step 6: Solving for the Unknowns** With these equations established, we can now solve for all three coefficients. Once we’ve completely determined our function’s exact form, we’ll be able to evaluate it for any input. | https://arxiv.org/abs/2505.15634v2 |
**Step 7: Finding Special Values** Finally, we’ll look for those partic- ular whole numbers where the function’s output is identical to the input itself. This involves setting up and solving another equation based on the complete function we’ve determined. **Conclusion:** Through this step-by-step approach, we’ll fully charac- terize the function and identify all integer values that satisfy our special condition where the function returns exactly what was input. User :{question} Figure 8: The prompt for generating Verbal Process. System : You are an assistant who can follow this command. COMMAND:Give all the mathematical calculations for the following questions.Do not include any text description and logical derivations. User : The function fsatisfies the functional equation f(x) +f(y) =f(x+y)−xy−1 for all real numbers xandy.Iff(1) = 1 ,then find all integers nsuch that f(n) =n.Enter all such integers, separated by commas. System : 1. **Assume quadratic form:** f(x) =ax2+bx+c 2. **Substitute into the functional equation:** ax2+bx+c+ay2+by+c=a(x+y)2+b(x+y) +c−xy−1 3. **Expand and simplify:** ax2+ay2+bx+by+ 2c=ax2+ 2axy +ay2+bx+by+c−xy−1 4. **Cancel common terms:** 2c= 2axy−xy+c−1 5. **Collect like terms:** (2a−1)xy+ (c−2c+ 1) = 0 6. **Set coefficients to zero:** 2a−1 = 0 ⇒a=1 2 −c+ 1 = 0 ⇒c= 1 7. **Use f(1) = 1 :** 1 2(1)2+b(1) + 1 = 1 ⇒1 2+b+ 1 = 1 b=−1 2 8. **Final form of f(x):** f(x) =1 2x2−1 2x+ 1 9. **Solve f(n) =n:** 1 2n2−1 2n+ 1 = n n2−3n+ 2 = 0 (n−1)(n−2) = 0 n= 1,2 **Final Answer:** 1,2 User :{question} Figure 9: The prompt for generating Symbolic Process. I Eigenvector Steering Visualization To explore in more detail how SAE-free works and how its role is similar to SAE-based steering, we performed SAE-free steering on two LLMs with associated SAEs: DeepSeek-Llama3-8B andGemma-2-9b-it . The setting is the same as before. We then observe the activation changes on certain SAE features before and after SAE-free steering. Figure 10 and Figure 11 show the activation change of feature 20903 and 4118 for l lama_scope_r1_distill(l15r_800m_openr1_m ath) . 4118 represents "mathematical parameters related to dimensions and positioning". After steering, some related words like "lengths" and "height" are significantly activated. Figure 12 and Figure 13 show the activation change of feature 8927 and 13526 for gemma-scope-9b-it-res(l ayer_9/width_16k/canonical) . Both figures also exhibited the fact that some words related to mathematics and reasoning were significantly activated. !"#$"! %&"%$!' ()*+ ,- . / . ,-. . , /. "#, ./ /, / . . $ . | https://arxiv.org/abs/2505.15634v2 |
,0 #. $ #)- !) 11!11,2223 4 5!6 !"#$"! %&"%$!' ()*+ ,- . / . ,-. . , /. "#, ./ /, / . . $ . ,0 #. $ #)- !) 11!11,2223 4 5!6 Figure 10: Eigenvector steering visualization for feature idx 20903(structures and elements related to mathematical expressions and equations) for DeepSeek-Llama3-8B . J Explanation of Eigenvector In order to verify that the obtained eigenvectors have good interpretability, we take the eigenvectors corresponding to the top-50 eigenvalues, perform inner product operations on them and the dSAEfea- ture vectors of SAE to obtain the similarity. A total of50∗dSAEoperations are performed to obtain the top-10 similarities, and the results are shown in Table 4 and Table 5. K Further Analysis about Feature Selection To further illustrate that the subtraction method is more advantageous than the addition method, in- !" # $ #%&'() & **$+, , #- . ! / 012 3 4 %/$!10$15 6! / - 7 . ) 8 9 - # | https://arxiv.org/abs/2505.15634v2 |
15 - . / !" 012 15 ! :12 : 0 ; 4#%&'() & **$ 15 : $1<+, 6: = 4 = #$#%&'() & **!#%&'() & **$+,!1<+,$1 +, 65 ; 4'() &> = *$ & = #*&#%&'() & ***'? '( 11$ &1+ ,*&+,*'? '( 11$1 2 6 @ # ) )&1 2* !" # $ #%&'() & **$+, , #- . ! / 012 3 4 %/$!10$15 6! / - 7 . ) 8 9 - # 15 - . / !" 012 15 ! :12 : 0 ; 4#%&'() & **$ 15 : $1<+, 6: = 4 = #$#%&'() & **!#%&'() & **$+,!1<+,$1 +, 65 ; 4'() &> = *$ & = #*&#%&'() & ***'? '( 11$ &1+ ,*&+,*'? '( 11$1 2 6 @ # ) )&1 2*Figure 11: Eigenvector steering visualization for feature idx 4118(specific mathematical parameters related to dimensions and positioning) for DeepSeek-Llama3-8B . | https://arxiv.org/abs/2505.15634v2 |
!" # $ #%&'& (($)* * #+ , ! - ./ 01 2 %-$!/.$/3 4! - + 5 , ' 6 7 + # /3 + , - !" ./0 /3 ! 8/0 8 . 9 2 #%&'& (($ /3 8 $/:)* 48 ; 2 ; #$#%&'& ((!#%&'& (($)*!/:)*$/ )* 43 9 2'&< ; ($ =& ; #(&#%&'& ((( //$ =&/)*(& )*( //$/ 0 4 > # ' ='&/ 0( !" # $ #%&'& (($)* * #+ , ! - ./ 01 2 %-$!/.$/3 4! - + 5 , ' 6 7 + # /3 + | https://arxiv.org/abs/2505.15634v2 |
Figure 12: Eigenvector steering visualization for feature idx 8927 (phrases related to mathematical combinations and choices) for Gemma-2-9b-it.

| Rank | Cosine Similarity | ID | Explanation |
|---|---|---|---|
| 3 | 0.52 | 16424 | mathematical expressions involving polynomials and their relationships |
| 7 | 0.33 | 14276 | numerical representations and expressions related to geometry and algebra |
| 5 | 0.30 | 17993 | mathematical terms and geometric relationships |
| 12 | 0.28 | 7528 | mathematical operations and calculations |
| 10 | 0.27 | 28227 | mathematical terminology related to rational numbers and quadratic functions |
| 7 | 0.26 | 16424 | mathematical expressions involving polynomials and their relationships |
| 6 | 0.26 | 24713 | instances of mathematical notation and calculations |
| 9 | 0.25 | 23306 | mathematical equations and expressions involving variables and constants |
| 7 | 0.24 | 11330 | mathematical expressions and calculations, related to geometry and algebra |
| 5 | 0.24 | 30063 | geometric concepts and relationships within the context of triangle properties |

Table 4: The top-10 most similar eigenvector–feature vector pairs by SAE for DeepSeek-Llama3-8B.

| Rank | Cosine Similarity | ID | Explanation |
|---|---|---|---|
| 2 | 0.48 | 12085 | references to scientific concepts, particularly regarding cellular and biology |
| 1 | 0.36 | 5012 | words related to the concept of "offer" and its variations |
| 7 | 0.33 | 8927 | phrases related to mathematical combinations and choices |
| 9 | 0.29 | 10568 | terms related to fractions and their properties, particularly common denominators |
| 6 | 0.24 | 4342 | mathematical expressions involving polynomials and their relationships |
| 22 | 0.23 | 13526 | mathematical expressions and equations |
| 2 | 0.22 | 13724 | instances of the word "fer" and variations or adjacent markers of some form of linguistic or structural significance |
| 5 | 0.22 | 11268 | mathematical expressions and equations |
| 20 | 0.21 | 7675 | numerical values and mathematical expressions |
| 2 | 0.21 | 10678 | instances of tab-separated values or numerical data |

Table 5: The top-10 most similar eigenvector–feature vector pairs by SAE for Gemma-2-9b-it.

Figure 13: Eigenvector steering visualization for feature idx 13526 (mathematical concepts related to factors and multiples) for Gemma-2-9b-it.

K Further Analysis about Feature Selection

To further illustrate that the subtraction method is more advantageous than the addition method, inspired by Wang et al. (2023); Yan et al. (2025), we leverage the gradient of the Negative Log-Likelihood (NLL) loss with respect to each token's hidden state at the current layer as the weight to recalculate the SAE activation score as follows:

$$\alpha^{x}_{\mathrm{grad}}=\frac{1}{\|X\|}\sum_{i\in X}\left\|\frac{\partial \mathcal{L}_{\mathrm{NLL}}}{\partial r^{x}_{i}}\right\|\,\mathrm{SAE}(r^{x}_{i}),\qquad \alpha^{y}_{\mathrm{grad}}=\frac{1}{\|Y\|}\sum_{j\in Y}\left\|\frac{\partial \mathcal{L}_{\mathrm{NLL}}}{\partial r^{y}_{j}}\right\|\,\mathrm{SAE}(r^{y}_{j}) \tag{11}$$

where $\|\partial \mathcal{L}_{\mathrm{NLL}}/\partial r^{x}_{i}\|$ and $\|\partial \mathcal{L}_{\mathrm{NLL}}/\partial r^{y}_{j}\|$ approximately represent the importance of the current position for the model prediction. For the SAE activations $\alpha^{q}$ to demonstrate sufficient robustness, their ranking should be approximately the same as that of $\alpha^{q}_{\mathrm{grad}}$. Therefore, we employ Spearman's rank correlation coefficient (Myers et al., 2013) to measure their correlation. | https://arxiv.org/abs/2505.15634v2 |
The result is shown in Figure 14, where we use Gemma-2-9b-it as the base LLM and gemma-scope-9b-it-res (layer_9/width_16k/canonical) as the corresponding SAE.

Figure 14: Spearman Correlation of Subtraction and Addition (rank correlation vs. Top-K index, plotted for Spearman (subtraction) and Spearman (addition)).

The result shows that, compared with subtraction, addition makes the correlation lower, even below 0, which means that the activation result obtained by subtraction better reflects the importance of positions for the model prediction.

L Steering Case

We provide some SAE-based steering samples.

Question: Answer the following question step by step in short and put the final answer in . Question: Carlos Montado was born on Saturday, November 9, 2002. On what day of the week will Carlos be 706 days old? (Spell out the entire day of the week.) Answer: Friday

Origin: Step 1: First, we need to calculate how many weeks there are in 706 days. To find this out, we divide 706 by 7 since each week has 7 days. Step 2: Determine if any extra days remain after subtracting complete weeks from the total number of days. After calculating the quotient and remainder when dividing 706 by 7, we'll see that there's a specific amount left over which indicates additional days beyond full weeks. Step 3: Add up all these remaining days together along with whatever fraction of a week they represent as an equivalent number of whole days or fractions thereof within one particular seven-day cycle starting at Sunday, add them onto Sunday itself. The final answer is: Saturday

Steering: Step 1: To determine the day of the week when Carlos will be 706 days old, we need to calculate how many complete weeks have passed since his birth. Step 2: We know that there are 7 days in a week, so if we divide 706 by 7, we can find the number of complete weeks. Step 3: The quotient obtained from this division is not relevant for our calculation; rather, it's essential to focus on the remainder after dividing 706 by 7. Step 4: Determine the remaining days beyond full weeks. We perform the division as follows: 706 = 100 · 7 + 6. This means that there were exactly 100 complete seven-day cycles between the dates corresponding to these two numbers, with an extra six days left over. The final answer is: Saturday Step 5: Find the day of the week based on the additional days. Since there are 6 days left over after accounting for complete weeks, we move forward 6 days from Carlos' birthday, which falls on a Saturday. Step 6: Identify the resulting day of the week. Moving forward 6 days from a Saturday lands us on Friday. However, considering the instruction that requires spelling out the entire name of the day, we should present the correct response accordingly. The final answer is: Friday

Table 6: Steering Case. LLM: Llama3.1-8B-it. Feature: 14276 (numerical representations and mathematical expressions related to geometry and algebra)

Question: Determine the modulo 4 remainder of the following sum: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12. Answer: 2

Origin: Step 1 We need to calculate 1 + 2 + ... + 12. Step 2 This is an arithmetic series with first term a = | https://arxiv.org/abs/2505.15634v2 |
1, last term l= 12,and number of terms n = 12. The formula for the sum of this type of sequence is given byn(a+l) 2, so we can use it here. Step 3 Substituting in our values gives us a total sum of (12∗13 2)Step 4 Evaluating that expression yields (156)as the result. Step 5 Now we must find its remainder when divided by four using modular arithmetic (or simply division). Step 6 To do this, divide 156 by 4. Step 7 When you perform integer division between these two numbers, you get quotient q = 39 and remainder r = 0. Step 8 Since there’s no remainder left after dividing, then your final answer will be just the value of the remainder which is zero. I hope it is correct. Thank you! Steering :Step 1: We need to find the modulo 4 remainder for each number in the given sequence. To do this, we will divide each number by 4 and take note of its remainder. Step 2: Determine the remainders when dividing numbers from 1 through 12 by 4. The remainders are as follows: - 1÷4 = 0with a remainder of 1-2÷4 = 0 with a remainder of 2-3÷4 = 0with a remainder of 3-4÷4 = 1with a remainder of 0-5÷4 = 1with a remainder of 1-6÷4 = 1with a remainder of 2-7÷4 = 1with a remainder of 3-8÷4 = 2with a remainder of 0-9÷4 = 2with a remainder of 1- 10÷4 = 2with a remainder of 2-11÷4 = 2with a remainder of 3-12÷4 = 3with a remainder of 0Step 3: Group the numbers based on their respective remainders (0, 1, 2, 3) since these groups can be simplified using modular arithmetic properties. We notice that there is one group for each possible remainder mod 4: - Remainder 0: {4,8,12}- Remainder 1: {1,5,9}- Remainder 2: {2,6,10}- Remainder 3: {3,7,11}Step 4: Sum up all the elements within each group because they have equivalent values under modulo 4 addition. Each set sums up to - For remainder 0: 4 + 8 + 12 = 24 , which has a remainder of 0after division by 4, - For remainder 1: 1 + 5 + 9 = 15 , which has a remainder of 3after division by 4, - For remainder 2: 2 + 6 + 10 = 18 , which has a remainder of 2after division by 4, - For remainder 3: 3 + 7 + 11 = 21 , which has a remainder of 1after division by 4, Step 5: Calculate the total sum of the grouped sets’ remainders to determine the overall modulo 4 remainder of the original series. Adding together the individual results yields (0) + (3) + (2) + (1) = 6 . Then taking the result modulo 4 gives us: 6 mod 4 = 2 . The final answer is: 2 Table 7: Steering Case. LLM: Llama3.1-8B-it Featue:24715(logical conclusions or summarizing statements in procedural or mathematical text) Question :Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for Ralph to | https://arxiv.org/abs/2505.15634v2 |
hit. He loads up the machine with 175 tennis balls to start with. Out of the first 100 balls, he manages to hit 2/5 of them. Of the next 75 tennis balls, he manages to hit 1/3 of them. Out of all the tennis balls, how many did Ralph not hit?” Answer:110 Orign :Step-by-step thinking: Let’s break this problem down into steps. Step 1: Calculate the number of tennis balls Ralph hits from the first 100 balls. To find out how many balls Ralph hits in the first set, we multiply the total number of balls (100) by the fraction he successfully hits (2/5). Step 2: Perform the multiplication to get the actual count of balls hit in the first set.2 5×100 = 40 Step 3: Determine the number of balls Ralph misses in the first 100 shots. We subtract the number of successful hits from the total number of balls shot during the first round to find out how many were missed. Step 4: Perform subtraction to calculate the exact number of missed balls in the first set. 100−40 = 60 balls were missed in the first set. Step 5: Calculate the number of tennis balls Ralph hits from the next 75 balls. For the second part of his practice, Ralph faces another challenge where he has to hit 1 3of the next 75 tennis balls. Step 6: Multiply the total number of balls in the second set by the success rate to determine the actual count of balls hit in the second set. c1 3×75 = 25 Step 7: Find out the total number of balls Ralph didn’t hit across both sets. Add the number of balls missed in each set together to get the overall total of missed balls. Step 8: Add the results from the two separate calculations to get the final tally of missed balls. Total missed balls = 60 + 25 The final answer is: 85 Steering :Step-by-step thinking: Let’s break down the problem into smaller parts and solve it step by step. First, let’s calculate the number of tennis balls Ralph managed to hit in each part: Out of the first 100 balls, he hits2 5of them, which means he hits (2 5)(100) = 40 balls. Next, we need to find the total number of balls shot from the machine when Ralph has already hit 40 balls. Since there were originally 175 balls loaded onto the machine, after hitting 40 balls, there will be (175 - 40) = 135 left. This remaining amount includes both the last set of 75 balls as well as any additional ones if some had been added before these 75. However, since no extra balls have been mentioned or added during this time period within our given information so far,we can safely assume only those original 135 remain here now. Of these remaining 135 balls, he then successfully hits 25 more (1 3∗75 = 25) . Therefore, at this point, Ralph has successfully hit a total of 40+25 = 65 balls. The final answer is: 110 However, I noticed some | https://arxiv.org/abs/2505.15634v2 |
mistake. Let me re-evaluate the steps: Initially, Ralph loads 175 tennis balls into the machine. In the first round, he fires off 100 balls but only hits 2/5 of them, leaving him with 80 untouched balls. Then, he continues firing another 75 balls, where he again misses 50% of them. To determine the overall percentage of missed shots, consider that Ralph fired a combined total of 175 + 0 = 175 times, missing 80 + 37.5 = 117.5 shots. Since you cannot fire half a shot, round down to 118 because you would still have one ball on the rack even though technically its being used as an example case. So the correct count should read 118 instead! Thus, Ralph didn’t manage to hit 118 out of his attempts. I hope this revised explanation aligns better with what was asked. The final answer is: 109 Wait! There seems to be something wrong with my previous response. After recalculating, I realized that I made another mistake earlier. Here is the corrected version: Initial rounds yield 80 unhit balls. Second round yields 37.5 unhit balls. Totaling up gives us 117.5. As previously stated, rounding down results in 118. But actually, considering the fact that we’re counting whole numbers of balls rather than fractional instances, we must acknowledge there aren’t enough full sets available within those particular parameters described above – meaning actual outcome ends up being slightly different due certain mathematical constraints applied throughout process. Given initial conditions presented initially plus subsequent analysis provided afterwards leads towards finding accurate figure representing true value sought after; taking everything taken account properly yields proper result equal exactly matching anticipated expectation expressed clearly below... The final answer is: 110 After reviewing the problem once more, I found the issue lies elsewhere. The main error occurred while calculating the number of balls Ralph missed in the second round. To accurately compute this, we simply multiply the fraction of missed balls (1/3) by the number of balls fired (75), giving us (1/3)(75) = 25 balls missed in the second round. With this correction, we recalculate the total number of balls Ralph missed: - In the first round, he missed 20 balls (since 2/5 of 100 is 40). - In the second round, he missed 25 balls (as calculated above). Adding these together, we get 45 balls Ralph missed in total. But wait, isn’t the total supposed to include all 175 balls? Indeed! We just need to add the initial 120 balls that weren’t touched yet (the rest of the 175 minus the 55 balls hit in both rounds): 120 + 45 = 165... No, hold on! That calculation assumes every single other ball wasn’t hit either – however according to description given later about having "all" left over besides aforementioned successful ones ; therefore subtracting said amounts leaves behind merely 10 leftover. Correctly adding back up correctly totals up to give desired outcome of 165−55. Which simplifies neatly down too 110.Henceforth putting aside confusion caused prior responses lead toward establishing definitive conclusion answering prompt posed originally. The final answer | https://arxiv.org/abs/2505.15634v2 |
is: 110 Table 8: Steering Case. LLM: Llama3.1-8B-it Featue:17394(conclusive statements that indicate results or sum- maries) Question :Answer the following question step by step in short and put the final answer in . Question:Solve for the sum of all possible values of xwhen 3x2+4x+4= 9x+2. Answer: -2 Orign :Step 1: Rewrite both sides with a common base. Since we know that 9 = 32, we can rewrite this equation as follows: 3x2+4x+4= (32)(x+2)= 32(x+2)Step 2: Now, equate exponents since they have the same bases. So, rclx2+ 4x+ 4 = 2( x+ 2)⇒ x2+ 4x−0 = 2 x+ 4⇒ x2+ 2x−4 = 0. Now solve quadratic equations using factoring or other methods to get two solutions. Using factoring method, x2+ 2x−4 = ( x+ 4)( x−1) = 0The factors are equal to zero at different times so these will give us our roots: (x+ 4) = 0 ( x−1) = 0Solving each factor gives us our solution set (x=−4, x= 1)Therefore the value is −4 + 1. The final answer is: −3 Steering :Step 1: To solve this equation, we can start by simplifying it using exponent rules. We know that 9 = 32, so we can rewrite the right-hand side as (32)x+2. Step 2 Using the property of exponents which states that (am)n=amn, we simplify further to get 32(x+2). This gives us an equivalent expression with base 3:3x2+4x+4= 32(x+2)Step 3 Since both sides have the same base ( 3), their exponents must be equal according to the one-to-one property of exponential functions, leading to the equation x2+ 4x+ 4 = 2( x+ 2)Step 4 Simplify the equation on the left side of the equals sign into its simplest form, resulting in no change since x2+ 4x+ 4 = ( x+ 2)2. The simplified equation is now (x+ 2)2= 2(x+ 2)Step 5 Expand and simplify each term within the parentheses, giving you x2+ 4x+ 4 = 2 x+ 4. Next subtract 2x+ 4from both sides to isolate any terms containing variable xon one side of the equation. Step 6 Subtracting 2x+ 4from both sides results in x2+ 2x= 0.Now factor out common factors from either side if necessary or use other methods like grouping to find solutions to your quadratic equation. Step 7 Factor out an xfrom every term inside the parenthesis to obtain x(x+ 2) = 0 ,then apply zero product rule by setting each factor equal to zero separately. Step 8 Setting each factor equal to zero yields two separate equations. First, set x= 0, secondly, setx+ 2 = 0. Solving these will give you the values of x. Step 9 The first solution comes directly from solving x= 0. In order to find the value of xwhere x+ 2 = 0, simply subtract 2 from both sides to yield x=−2. Step 10 Now add together those two numbers to determine what the problem calls "the sum of all possible values" of x. Adding them up, you’ll end at −2. The final answer is: −2 Table 9: Steering Case. LLM: Llama3.1-8B-it Featue:14276(numerical representations and mathematical expres- sions related to geometry and | https://arxiv.org/abs/2505.15634v2 |
algebra) Question :Answer the following question step by step and put the final answer in .Question:If aand bare vectors such that ∥a∥= 2,∥b∥= 7,and a×b= 3 2 6 , then find the smallest possible angle between aandb,in degrees. Answer: 30◦ Orign :Step 1: We know that for any two vectors xandy,(x×y)·(x×y) = 0 .This is because cross product of a vector with itself results into zero. Step 2: For our given problem, we have, (a×b)·(a×b) =∥a∥2∥b|2−((a·b))2. Since (a×b)·(a×b) = 33,∥a∥= 2, and ∥b∥= 7, then (a·b)2= (4)(49) −33.Taking square root on both sides gives us a·b= 5or−a·b=−5.We will now use this result to calculate minimum value of θusing dot product formula. Step 3: Recall that the dot product of two vectors can be written as: a·b=∥a∥∥b∥cosθ. Using the fact from above steps, we get: 5 = (2)(7) cos θ 5/14 = cos θ To solve for theta, take arccos of each side: arccos (5 /14) = θ Convert theta from radians to degrees using radian conversion factor ( 180/π):θ= 73◦The final answer is: 73 Steering :(The vector cross product of two vectors is defined as follows: Find u×v:- Take the first component of uand multiply it by the second component of v;- Multiply the second component of uby the third component of v;- Subtract the result from Step 1 from the result from Step 2; ) Step 1: Recall the formula for calculating the magnitude of a vector. To calculate the magnitude, or length, of a vector xwith components (x1, x2, ..., x n)we use the formula |x|=p x2 1+x2 2+···+x2n. However, this problem does not require us to directly apply this formula. Step 2: Use the given information about the magnitudes of vectors aandbalong with their cross product to set up an equation involving the cosine of the angle between them using the relationship derived from the definition of the dot product and properties of orthogonal unit vectors ( ˆi,ˆj,ˆk). Given the cross product a×b= 3 2 6 and knowing its magnitude can be used to relate to the angles between aandb, let’s proceed to derive this relationship. Step 3: Calculate the magnitude of the cross product a×bwhich will allow us to determine the sine of the angle between vectors aandb. We have ∥a×b∥=p (3)2+ (2)2+ (6)2= 7.Step 4: Apply the relation between the norms of vectors aandb, the norm of their cross product, and the sine of the angle between them based on trigonometric identities related to right triangles formed when you take the cross product of two vectors. Since sin(θ) =∥a×b∥ ∥a∥·∥b∥,then substituting our known values gives sin(θ) =7 (2)(7)= 0.5.We need to solve for theta next. Step 5: Solve for the angle θby taking the arcsine of both sides after isolating sin( θ), remembering to convert radians to degrees if necessary since the solution asks specifically for degrees. Solving for θ: θ= arcsin(0 .5). Step 6: Convert the value obtained in step 5 into degrees because the problem specifies that the answer should be provided in degrees. Using knowledge of common angles: arcsin(0 .5) = 30◦.Therefore, the smallest | https://arxiv.org/abs/2505.15634v2 |
arXiv:2505.15646v1 [cs.CL] 21 May 2025Word Level Timestamp Generation for Automatic Speech Recognition and Translation Ke Hu1, Krishna Puvvada1, Elena Rastorgueva1, Zhehuai Chen1, He Huang1, Shuoyang Ding1, Kunal Dhawan1, Hainan Xu1, Jagadeesh Balam1, Boris Ginsburg1 1NVIDIA, USA kevinhu@nvidia.com Abstract We introduce a data-driven approach for enabling word-level timestamp prediction in the Canary model. Accurate times- tamp information is crucial for a variety of downstream tasks such as speech content retrieval and timed subtitles. While traditional hybrid systems and end-to-end (E2E) models may employ external modules for timestamp prediction, our ap- proach eliminates the need for separate alignment mechanisms. By leveraging the NeMo Forced Aligner (NFA) as a teacher model, we generate word-level timestamps and train the Ca- nary model to predict timestamps directly. We introduce a new <|timestamp|> token, enabling the Canary model to predict start and end timestamps for each word. Our method demon- strates precision and recall rates between 80% and 90%, with timestamp prediction errors ranging from 20 to 120 ms across four languages, with minimal WER degradation. Additionally, we extend our system to automatic speech translation (AST) tasks, achieving timestamp prediction errors around 200 mil- liseconds. Our code is open-sourced through NeMo1, and the checkpoint is released at Hugging Face2. Index Terms : Word timestamp, ASR, AST 1. Introduction Accurate timing information is highly desirable for automatic speech recognition (ASR) and downstream tasks such as speech content retrieval (e.g., for lookup of speech content by key- words) and generating timed subtitles. In addition, recent works show that transcription with word-level timestamps also facili- tates other speech tasks such as speech-to-speech conversation [1] or decoding with time flow information [2]. There have been a number of works developing timestamp predictions for various ASR models. Traditional hybrid sys- tems [3], which are trained with phoneme alignments, natu- rally provide alignments between speech frames and text. In the E2E framework, although models like connectionist tempo- ral classification (CTC) are not directly trained with phoneme or word alignments, one can still use Viterbi decoding to the log-probabilities output by CTC models and reference text for forced alignment (e.g., [4]). Other E2E models perform times- tamp prediction as a separate pass in addition to regular decod- ing. Systems like WhisperX [5] run voice activity detection and a separate forced phoneme alignment to generate word-level timestamps. WhisperTimestamped and related methods [6, 7] first generate cross attention weights between speech and pre- dicted words during decoding and then use dynamic time warp- ing to generate a word-level alignment. These methods usually need a long chunk of speech to be available for aligning. 1https://github.com/NVIDIA/NeMo 2https://huggingface.co/nvidia/canary-1b-flashInstead of using a separate module or method to generate timestamps, there have been works on enabling E2E models to directly emit timestamps. For example, forced alignment generated by a conventional ASR model is used to teach RNN Transducer (RNN-T) to emit timestamps [8] or guide the RNN- T training to emit word tokens at the specified time [9]. Re- cently, [10] proposes a time-token transducer to jointly model label and token duration prediction. 
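As a concrete illustration of the word-level timestamp format summarized in the abstract above (and detailed in Section 2), the sketch below converts word alignments in seconds — e.g., produced by a forced aligner — into interleaved <|t|> tokens at the 80 ms frame rate used by the model; the helper name and inputs are hypothetical, not the released NeMo implementation:

```python
def words_to_timestamped_text(words, frame_ms=80, max_frame=450):
    """words: list of (word, start_sec, end_sec) tuples from a forced alignment.
    Returns a transcript such as '<|3|> classifying <|14|> <|15|> was <|16|>'."""
    pieces = []
    for word, start_sec, end_sec in words:
        start = min(round(start_sec * 1000 / frame_ms), max_frame)  # seconds -> 80 ms frame index
        end = min(round(end_sec * 1000 / frame_ms), max_frame)      # capped at 450, i.e. 36 s maximum
        pieces.append(f"<|{start}|> {word} <|{end}|>")
    return " ".join(pieces)

print(words_to_timestamped_text([("classifying", 0.24, 1.12), ("was", 1.20, 1.28)]))
# <|3|> classifying <|14|> <|15|> was <|16|>
```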
While the model is efficient in decoding, it may | https://arxiv.org/abs/2505.15646v1 |
delay the duration emission which leads to inaccurate word-level start and end timestamps. Whisper [11] and OWSM [12] use a data-driven approach to directly train the model to emit timestamps. However, the timestamps are at the segment level due to training data limitation. Recently, Ope- nAI released the feature to generate word-level timestamps for Whisper [13], but it is unclear how the feature is developed. In this work, we research a data driven approach to enable word-level timestamp prediction for the Canary model [14]. To overcome the training data scarcity issue, we use the NeMo Forced Aligner (NFA) [4] as the teacher to generate word-level timestamps for training our timestamp model. We note that the proposed method can work with any teacher data at either word or segment level. To add the timestamp prediction ability, we introduce a new <|timestamp|> prompt for the original Ca- nary model as a new task. Similar to [11, 12], we model times- tamps as special tokens up to a maximum duration (i.e., 36 sec in this work). We show that our approach can achieve decent precision and recall (ranging from 80% to 90%) for timestamp prediction and the prediction errors ranges from the 20 to 120 ms range for 4 languages, with slight degradation on ASR qual- ity. In addition to ASR, we further extend the system to perform word-level timestamp prediction for automatic speech transla- tion (AST). We define this task as, for each translated word, predicting the start and end frame positions of that word in the original speech. We also follow a data driven approach for this new task. To generate teacher time alignments, we first use NFA to align source speech and source text at the word level, and then use awesome-align [15] to obtain alignment between source and target text at the word level. The results show that our model can predict word-level timestamps with average errors in the range of 200 ms for 6 language pairs, but this comes with a translation performance drop of 4.4 BLEU points or 2.6 COMET score. The novelty of our work is two-fold: 1) We propose a so- lution for multilingual word-level timestamp prediction using teacher generated data. The ASR timestamp prediction has a good coverage of precision and recall at around 90%, and pre- diction errors in the range of 50-60 ms on average, and 2) we show that by a data-driven teacher-student learning, the model is also capable of predicting word-level timestamps for speech translation. As far as we know, our work represents the first investigation for word-level timestamp prediction for AST. Figure 1: Timestamp training data format. Our training data consists of both ASR or AST training data. Either <|timestamp|> and<|notimestamp|> prompt tokens are used for prompting. Both start and end timestamps are added for each word in training. 2. Modeling Details We use the Canary model [14] as the baseline for adding timestamp ability. Canary is a multilingual model which is capable of multiple tasks such as ASR and AST for English, French, Spanish, and German. Ca- nary models multitasking | https://arxiv.org/abs/2505.15646v1 |
by using various prompts tokens, such as <|transcribe|> ,<|translate|> ,<|pnc|> , <|nopnc|> . To add timestamp prediction ability to Canary, we introduce <|timestamp|> and<|notimestamp|> for generating word-level timestamps or no timestamps, respec- tively. As shown in Fig. 1, our training data is formatted in two styles, <|timestamp|> or<|notimestamp|> . In the <|timestamp|> case, we add start and end time tokens for each word in the sentence. The timestamps are first obtained by force aligning the speech and ground truth text using NFA [4], and then we take the start and end time (in sec) and convert them to frame indices using 80 ms frame rate (determined by the NFA encoder frame rate). Similar to [11], we represent the timestamp tokens as <|t|> , where tis an integer ranging from 0 to 450, representing a maximum duration of 36 sec. For example, our transcripts with timestamps look like: <|3|> classifying <|14|> <|15|> was <|16|> <|18|> everything <|19|> <|23|> to <|24|> <|25|> him <|26|> . Each timestamp token is tokenized to exactly one ID in the tokenizer output space. To tokenize transcripts, similar to [14], we use SentencePiece [16] and concatenated tokenizer [17] with a vocabulary size of 1024 for each supported language and a sub-tokenizer for the special tokens. Using special tokens enables a one-to-one mapping between the timestamp and the output ID, which helps in reducing the target sequence length. We have also tried to tokenize the timestamps directly using the SentencePiece tokenizer but did not get reasonable performance. Our model is multilingual and we use the same special tokens for timestamps across different languages. We also tried predicting the time shift from the previous timestamp to current timestamp (similar to duration) but did not manage to get good performance. In AST training, our AST text translations are generated synthetically using machine translation models from ASR tran- scripts [14]. Therefore, we have both ASR transcripts and AST translations. To generate timestamps for AST, we first align speech with ASR transcripts using NFA, and then use awesome- align [15] to align ASR transcripts (source) with AST trans- lations (target) at the word level. Timestamps of each source word are then copied to the corresponding target word. Since awesome-align [15] shows reasonable zero-shot performance, we use it for aligning between English and three languages: French, German, and Spanish, even though Spanish has not been seen in their training data. We have obtained reasonable performance as shown in the results.3. Experiments 3.1. Data and Evaluation Metrics We use a subset of Canary training data [14] to generate times- tamps, i.e., 4,275, 1,410, 1,397 and 1,795 hours of speech with word-level timestamps for English, German, Spanish and French, respectively. We use the NeMo Forced Aligner (NFA) [4] to align the training data for each language separately. To prevent regression, we also use non-timestamped data [14] in training. Overall, the timestamp data accounts for 15% of all ASR training data. We use equal weights between ASR and AST timestamp training data. We evaluate model performance for ASR timestamp prediction using LibriSpeech test-other [18] and MCV-16.1 [19]. For | https://arxiv.org/abs/2505.15646v1 |
AST, we use FLEURS [20] to evalu- ate our timestamp prediction performance for 6 language pairs, i.e. En→X and X→En, where X is German, French or Spanish. ASR performance is measured using WER, and we use BLEU [21] and COMET [22] for AST evaluation. For timestamp evaluation, we use two types of metrics: 1) Precision and recall, and 2) start-time difference (SD) and end-time difference (ED). For precision & recall, as in [4], we compute the true positive (TP) as number of correctly pre- dicted timestamped word in the hypothesis, false positive (FP) as the incorrectly predicted timestamped words in the hypoth- esis, and false negative (FN) as the number of timestamped words in the reference but missing or incorrectly predicted in the hypothesis. The precision and recall are then calculated as precision =TP (TP+FP)andrecall =TP (TP+FN). We define a “timestamped word” as a predicted word with estimated start and end timestamps, and we consider a timestamped word to be correct if: 1) Exact match of the words between a hypothesis and reference, and 2) Both start and end timestamps differ by less than 240 ms. The chosen threshold of 240 ms (instead of 200 ms in [4]) is because our encoder has a encoding rate of 80 ms. We further use start and end time differences (SD & ED) to quantify the accuracy of the predicted timestamps. Here, we only calculate the SD and ED differences when there is a word match between the hypothesis and reference. The absolute dif- ferences are calculated no matter whether the difference is less or more than the 240 ms threshold. 3.2. Training Details We use a 170M Canary L model [14] in our experiments. The model consists of a 17-layer conformer encoder, with a model dimension of 512 and projection dimension of 2048. The de- coder is a regular 17-layer transformer decoder [23] with 512- dimensional layers. A downsampling factor of 8 is used to speed up decoding. We use fixed positional embedding for the decoder. In training, we initialize from an existing checkpoint of a Canary L model [14] and continue to fine tune the whole model. Due to the increased timestamp tokens, we do not ini- tialize the softmax and the embedding layers from the check- point. Our input features are 128-dim log-mel features with 10-ms frame rate. Lhotse [24] is used for dataloading with a MetricEn Es Fr DeAvg.LS test-other MCV-16.1 MCV-16.1 MCV-16.1 MCV-16.1 Precision (%) 93.7 83.0 93.5 85.4 90.7 89.3 Recall (%) 93.6 84.5 93.9 88.1 91.1 90.2 SD (ms) 50 69 22 22 112 55 ED (ms) 54 70 42 36 127 66 Table 1: Timestamp prediction results in precision & recall, and start & end time differences w.r.t. the NFA ground truth. ModelEn Es Fr DeAvg.LS test-other MCV-16.1 MCV-16.1 MCV-16.1 MCV-16.1 Canary Baseline ( B0) 4.8 11.7 5.7 15.0 7.0 8.8 + start & end TS data ( E1) 5.0 12.1 5.9 14.8 7.1 9.0 Table 2: WER comparison between the baseline Canary model and the proposed model. Error ThresholdPrecision / Recall (%) E1 WhisperTimestamped 240 (ms) | https://arxiv.org/abs/2505.15646v1 |
95.8 /95.6 83.6 / 83.2 320 (ms) 96.1 /95.9 92.8 / 92.2 400 (ms) 96.3 /96.1 95.3 / 94.7 480 (ms) 96.4 /96.2 96.1 / 95.5 Table 3: Comparison of the proposed model and WhisperTimes- tamped [6] based on the LibriSpeech test-other set. batch duration of 600 sec per GPU. For model training, we use AdamW optimizer and inverse square root annealing learning rate scheduler with a learning rate of 1e-3 and a warm-up step of 2.5k. We implement the model with PyTorch using the NeMo Tookit [25], and the model is trained on 32 A100 (80G) GPUs for 30k steps, with a batch duration of 600 sec per GPU. In training, we simply sum over the loss for word and timestamp tokens without any priors in weighting. 4. Results 4.1. Timestamp Prediction for ASR 4.1.1. Timestamp Prediction Quality As shown in Table 1, we evaluate the timestamp prediction per- formance of the proposed method. We use precision and re- call to measure the overall timestamp prediction accuracy. The NFA alignments generated by ground truth ASR transcripts are used as the references. As we can see from Table 1, our model achieved a precision of 93.7% for LibriSpeech test-other and 89.3% on average. The recalls are 93.6% and 90.2%, respec- tively. This shows that our model can accurately capture most words with their timestamps. We then calculate the predicted SD and ED for each word when there is a word match. As shown in Table 1, the SD and ED range from 22 to 127 ms. In particular, the predictions are fairly accurate for English, Span- ish, and French, and relatively worse for German. 4.1.2. WER Regression In addition to timestamp prediction performance, we also want to make sure the model does not degrade performance on ASR transcripts. Therefore, in Table 2, we compare the WER of the baseline Canary model ( B0) and the proposed model with timestamp data in training ( E1). The baseline Canary model is trained without any timestamp data, and fine tuned for the same number of training steps as the timestamp model for fair comparison. To calculate the WER For E1, we use the <|timestamp|> to prompt the model but ignore the times- tamp prediction errors and only compute the WER of the words. Table 2 shows that there is only small degradation for ASRperformance (around 0.2% on average) compared to baseline. Additionally, we have also tried prompting the model with <|notimestamp|> to only generate ASR transcripts with- out timestamps and there is no WER regression at all. 4.1.3. Comparison We have compared to WhisperTimestamped [6] in Table 3 using the LibriSpeech test-other set. For WhisperTimestamped, we use the “small” model, which has a similar size (244M) to our model (175M). We note that WhisperTimestamped [6] derives timestamps in an unsupervised way by aligning each word to a contiguous number of speech frames using dynamic time warp- ing (DTW) based on cross-attention weights, and thus have a single timestamp between two words. Therefore, we use that timestamp as the start timestamp of current word as well | https://arxiv.org/abs/2505.15646v1 |
as the end timestamp of the previous word. Due to the formatting, the derived start and end timestamps from WhisperTimestamped may inherently differ from the NFA produced “ground truth”, and this should not be penalized in evaluation. We therefore do not calculate the absolute differences between the predicted and reference start and end timestamps but present precision and re- call percentages. To minimize differences in ground truth label- ing, we also vary the match threshold from 240 to 480 ms to cal- culate precision and recall for both models. When the threshold is 240 ms, we see that our model performs significantly better than WhisperTimestamped. As we increase the threshold (i.e., relaxing the ground truth labeling convention), Table 3 shows that our model ( E1) consistently outperforms WhisperTimes- tamped across all thresholds. 4.2. Timestamp Prediction for AST We use FLEURS [20, 14] to evaluate the timestamp prediction performance for AST. We initially follow the steps described in Sect. 2 to generate per-word timestamps for evaluation. How- ever, due to the lack of strict word-to-word mapping caused by translation reordering and errors, it is challenging to directly compare word-level timestamps between ground truth and hy- potheses. In addition, the timestamp estimation should not be penalized by incorrect text translations. Therefore, instead of using the ground truth translation for the awesome-align we use the hypotheses (removing timestamps) as the target for align- ment. The aligned results are used as references in evaluation. In this way, we have one-to-one correspondence for each word, and the start and end timestamps between references and hy- potheses. As shown in Table 4, the task of predicting timestamps for AST is in general more difficult than ASR. The average start and end time errors are 204 and 224 ms, respectively, across MetricTimestamp Prediction Errors De→En Fr→En Es→En En→De En→Fr En→Es Avg SD (ms) 323 94 173 228 223 181 204 ED (ms) 364 115 199 232 236 198 224 Table 4: Timestamp prediction performance for speech translation using the FLEURS test set. ModelBLEU / COMET De→En Fr →En Es →En En→De En →Fr En →Es Avg B0 29.0 / 80.9 28.3 / 81.1 20.1 / 79.6 25.2 / 74.2 34.5 / 76.6 19.6 / 77.1 26.1 /78.3 E1 23.4 / 78.4 22.9 / 79.2 16.3 / 77.7 20.3 / 71.3 30.6 / 73.3 16.5 / 74.3 21.7 / 75.7 Table 5: Translation performance regression in BLEU and COMET using the FLEURS test set. Predicted German Hypothesis English Ground Truth <|0|> Jedoch <|21|> <|21|> aufgrund25 <|28|> der<|29|> <|29|> langsamen <|33|> <|34|> Kommunikationskan ¨ale<|43|> <|43|> k¨onnten <|55|> <|55|> Stile<|62|> <|64|> im<|65|> <|67|> Westen <|71|> <|91|> um<|92|> <|95|> f¨unfundzwanzig <|96|> <|104|> bis<|105|> <|106|> dreißig <|110|> <|111|> Jahre <|112|> <|79|> hinterherbleiben <|83|><|0|> However, <|20|> <|23|> due<|26|> <|26|> to <|28|> <|28|> the<|29|> <|30|> slow <|33|> <|34|> communication <|42|> <|43|> channels, <|52|> <|55|> styles <|62|> <|64|> in<|65|> <|65|> the<|67|> <|67|> west <|71|> <|73|> could <|74|> <|76|> lag<|79|> <|81|> be- hind<|82|> <|88|> by<|89|> <|92|> 25<|97|> <|100|> to<|101|> <|103|> 30<|106|> <|108|> year<|121|> Table 6: The proposed model is capable of learning the syntactic reordering in speech translation. | https://arxiv.org/abs/2505.15646v1 |
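The source-to-target timestamp transfer used to build the AST training targets (Section 2) can be sketched as follows; `word_alignment` stands for the (source index, target index) pairs produced by a word aligner such as awesome-align, and the handling of multiply-aligned or unaligned target words is an assumption, since the paper only states that each source word's timestamps are copied to its corresponding target word:

```python
def transfer_timestamps(src_words, tgt_words, word_alignment):
    """src_words: list of (word, start_frame, end_frame) from forced alignment of the source speech.
    tgt_words: list of target-language words from the translation.
    word_alignment: list of (src_idx, tgt_idx) pairs from a word aligner.
    Returns the target text with inherited <|t|> timestamps."""
    spans = {}
    for s, t in word_alignment:
        _, start, end = src_words[s]
        if t in spans:  # several source words mapped to one target word: keep the widest span
            start, end = min(start, spans[t][0]), max(end, spans[t][1])
        spans[t] = (start, end)
    out = []
    for t, word in enumerate(tgt_words):
        if t in spans:
            out.append(f"<|{spans[t][0]}|> {word} <|{spans[t][1]}|>")
        else:
            out.append(word)  # unaligned target words carry no timestamps
    return " ".join(out)
```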
Predicted English Hypothesis French Ground Truth <|0|> The<|1|> <|22|> surface <|28|> <|28|> of<|29|> <|29|> the<|30|> <|31|> moon <|34|> <|35|> is<|36|> <|36|> constituted <|42|> <|42|> of<|43|> <|44|> stone <|47|> <|48|> and<|49|> <|49|> of<|50|> <|50|> duster <|56|> <|58|> its<|60|> <|60|> louch <|64|> <|65|> exte- rior<|71|> <|72|> is<|73|> <|74|> called <|77|> <|78|> the <|79|> <|80|> croute <|87|><|0|> La<|1|> <|22|> surface <|28|> <|28|> de<|29|> <|29|> la<|30|> <|31|> lune<|34|> <|35|> est<|36|> <|36|> constitu ´ee<|42|> <|42|> de<|43|> <|43|> pier- res<|48|> <|48|> et<|49|> <|49|> de<|50|> <|50|> poussi `ere.<|58|> <|58|> Sa<|59|> <|60|> couche <|64|> <|65|> ext´erieure <|71|> <|73|> est<|74|> <|74|> appel ´ee <|78|> <|78|> la<|79|> <|80|> croˆute.<|88|> Table 7: The proposed model shows capability of predicting timestamps for speech translation when trained only on ASR timestamp data. 6 En→X and X→En pairs. Among different language pairs, the performance is significantly worse between German and En- glish. This is probably because the different grammatical struc- ture of German (such as syntactic reordering), which makes timestamp prediction more challenging. In addition, we evaluate the AST quality regression by BLEU and COMET. We use a <|timestamp|> prompt to generate the text with timestamps for E1and then remove timestamps to only evaluate the text transla- tion. We use SacreBLEU [21] for BLEU calculation and theUnbabel/wmt22-comet-da model from HuggingFace [26] for COMET [27] score calculation. We normalize the COMET score to [0,100] by multiplying the raw score by 100. As shown in Table 5, the translation ability of the timestamp model degrades by 4.4 BLEU points, while the degradation with respect to the COMET score is 2.6 points. 4.3. Example and Discussion In this section, we show some interesting examples of the pre- dicted sentences with timestamps. Table 6 shows an example of English speech to German text translation. Note that our model predicts the German verb with timestamps (i.e., <|79|> hin- terherbleiben <|83|> ) at the end of the sentence, with corre- sponding start and end timestamps in the English speech (i.e., <|76|> lag<|79|> <|81|> behind <|82|> ) . This re- ordering of hinterherhinken is due to the subject-object-verb (SOV) structure of German, whereas the model still preserves the timestamps of the original English speech. We note that the timestamps predicted here are no longer monotonically in- creasing but follow the correspondence of the original English speech. We also note that here two words in English is mappedto one word in German. It would be interesting and worth dis- cussing how the model should handle non-consecutive mapping between words. Another interesting observation from our training is that a timestamp model trained using only ASR timestamp data demonstrates the capability of predicting timestamps for AST. We present a French-to-English AST example in Table 7. The predicted timestamps appear quite reasonable when compared with the French ground truth in the second column of Table 7. Again, we note that the model has not been trained on any AST training data with timestamps. Although the timestamps seem reasonable, this model suffers from substantial translation qual- ity regression from 26.1 to 9.7 BLEU points on average. Some common translation errors (bold in the first column) appear to be | https://arxiv.org/abs/2505.15646v1 |
incorrectly predicting English words to be their French coun- terparts with similar pronunciation. Future research may focus on few-shot learning for AST timestamp prediction. 5. Conclusion We propose a data-driven approach for word-level timestamp prediction for a Canary ASR and AST, leveraging NeMo Forced Aligner (NFA) and awesome-align as teacher models. Our method achieves 80-90% precision and recall with timestamp errors of 20-120 ms for ASR. Extending to AST, we observed 200 ms timestamp errors with a 4.4 BLEU and 2.6 COMET drop. Our model performs better than WhisperTimestamped in ASR timestamp prediction. We will release the code and check- points to support open-source development. Future work in- cludes refining AST timestamping to reduce translation degra- dation. 6. References [1] A. D ´efossez, L. Mazar ´e, M. Orsini, A. Royer, P. P ´erez, H. J ´egou, E. Grave, and N. Zeghidour, “Moshi: a speech-text foundation model for real-time dialogue,” arXiv preprint arXiv:2410.00037 , 2024. [2] F. Seide, M. Doulaty, Y . Shi, Y . Gaur, J. Jia, and C. Wu, “Speech reallm–real-time streaming speech recognition with mul- timodal llms by teaching the flow of time,” arXiv preprint arXiv:2406.09569 , 2024. [3] M. McAuliffe, M. Socolof, S. Mihuc, M. Wagner, and M. Son- deregger, “Montreal forced aligner: Trainable text-speech align- ment using kaldi.” in Interspeech , vol. 2017, 2017, pp. 498–502. [4] E. Rastorgueva, V . Lavrukhin, and B. Ginsburg, “Nemo forced aligner and its application to word alignment for subtitle genera- tion,” in Proc. INTERSPEECH , 2023. [5] M. Bain, J. Huh, T. Han, and A. Zisserman, “Whisperx: Time- accurate speech transcription of long-form audio,” arXiv preprint arXiv:2303.00747 , 2023. [6] J. Louradour, “whisper-timestamped,” https://github.com/linto-ai/ whisper-timestamped, 2023. [7] L. Wagner, B. Thallinger, and M. Zusag, “Crisperwhisper: Ac- curate timestamps on verbatim speech transcriptions,” arXiv preprint arXiv:2408.16589 , 2024. [8] “Emitting word timings with end-to-end models,” in Interspeech . IEEE, 2020. [9] R. Zhao, J. Xue, J. Li, W. Wei, L. He, and Y . Gong, “On address- ing practical challenges for rnn-transducer,” in 2021 IEEE Auto- matic Speech Recognition and Understanding Workshop (ASRU) . IEEE, 2021, pp. 526–533. [10] H. Xu, F. Jia, S. Majumdar, H. Huang, S. Watanabe, and B. Gins- burg, “Efficient sequence transduction by jointly predicting tokens and durations,” in International Conference on Machine Learn- ing. PMLR, 2023, pp. 38 462–38 484. [11] A. Radford, J. W. Kim, T. Xu et al. , “Robust speech recognition via large-scale weak supervision,” in ICML . PMLR, 2023, pp. 28 492–28 518. [12] Y . Peng, J. Tian, B. Yan, D. Berrebbi, X. Chang, X. Li, J. Shi, S. Arora, W. Chen, R. Sharma et al. , “Reproducing whisper-style training using an open-source toolkit and publicly available data,” in2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) . IEEE, 2023, pp. 1–8. [13] OpenAI, “Speech-to-text: Timestamps guide,” 2024, accessed: 2024-10-01. [Online]. Available: https://platform.openai.com/ docs/guides/speech-to-text/timestamps [14] K. C. Puvvada, P. ˙Zelasko, H. Huang, O. Hrinchuk, N. R. Koluguri, K. Dhawan, S. Majumdar, E. Rastorgueva, Z. Chen, V . Lavrukhin et al. , “Less is more: Accurate speech recog- nition & translation without web-scale | https://arxiv.org/abs/2505.15646v1 |
data,” arXiv preprint arXiv:2406.19674 , 2024. [15] Z.-Y . Dou and G. Neubig, “Word alignment by fine-tuning em- beddings on parallel corpora,” arXiv preprint arXiv:2101.08231 , 2021. [16] T. Kudo, “Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing,” arXiv preprint arXiv:1808.06226 , 2018. [17] K. Dhawan, K. Rekesh, and B. Ginsburg, “Unified model for code-switching speech recognition and language identification based on concatenated tokenizer,” in Proceedings of the 6th Work- shop on Computational Approaches to Linguistic Code-Switching , 2023, pp. 74–82. [18] V . Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Lib- rispeech: an asr corpus based on public domain audio books,” in2015 IEEE international conference on acoustics, speech and signal processing (ICASSP) . IEEE, 2015, pp. 5206–5210.[19] R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber, “Common voice: A massively-multilingual speech corpus,” arXiv preprint arXiv:1912.06670 , 2019. [20] A. Conneau, M. Ma, S. Khanuja et al. , “Fleurs: Few-shot learning evaluation of universal representations of speech,” in SLT. IEEE, 2023, pp. 798–805. [21] M. Post, “A call for clarity in reporting bleu scores,” arXiv preprint arXiv:1804.08771 , 2018. [22] R. Rei, C. Stewart, A. C. Farinha, and A. Lavie, “Comet: A neural framework for mt evaluation,” arXiv preprint arXiv:2009.09025 , 2020. [23] A. Vaswani, “Attention is all you need,” Advances in Neural In- formation Processing Systems , 2017. [24] P. ˙Zelasko, D. Povey, J. Trmal, S. Khudanpur et al. , “Lhotse: a speech data representation library for the modern deep learning ecosystem,” arXiv preprint arXiv:2110.12561 , 2021. [25] O. Kuchaiev, J. Li, H. Nguyen et al. , “Nemo: a toolkit for building ai applications using neural modules,” arXiv preprint arXiv:1909.09577 , 2019. [26] Unbabel, “wmt22-comet-da,” https://huggingface.co/Unbabel/ wmt22-comet-da, n.d., accessed: 2024-09-25. [27] R. Rei, J. G. De Souza, D. Alves, C. Zerva, A. C. Farinha, T. Glushkova, A. Lavie, L. Coheur, and A. F. Martins, “Comet- 22: Unbabel-ist 2022 submission for the metrics shared task,” in Proceedings of the Seventh Conference on Machine Translation (WMT) , 2022, pp. 578–585. | https://arxiv.org/abs/2505.15646v1 |
arXiv:2505.15656v1 [cs.CL] 21 May 2025Be Careful When Fine-tuning On Open-Source LLMs: Your Fine-tuning Data Could Be Secretly Stolen! Zhexin Zhang1, Yuhao Sun2, Junxiao Yang1, Shiyao Cui1, Hongning Wang1, Minlie Huang1∗ 1The Conversational AI (CoAI) group, DCST, Tsinghua University 2The University of Melbourne zx-zhang22@mails.tsinghua.edu.cn, aihuang@tsinghua.edu.cn Abstract Fine-tuning on open-source Large Language Models (LLMs) with proprietary data is now a standard practice for downstream developers to obtain task-specific LLMs. Surprisingly, we reveal a new and concerning risk along with the practice: the creator of the open-source LLMs can later extract the private downstream fine-tuning data through simple backdoor training, only requiring black-box access to the fine-tuned downstream model. Our comprehensive experiments, across 4 popularly used open-source models with 3B to 32B parameters and 2 downstream datasets, suggest that the extraction performance can be strikingly high: in practical settings, as much as 76.3% downstream fine-tuning data (queries) out of a total 5,000 samples can be perfectly extracted, and the success rate can increase to 94.9% in more ideal settings. We also explore a detection-based defense strategy but find it can be bypassed with improved attack. Overall, we highlight the emergency of this newly identified data breaching risk in fine-tuning, and we hope that more follow-up research could push the progress of addressing this concerning risk. The code and data used in our experiments are released at https: //github.com/thu-coai/Backdoor-Data-Extraction . 1 Introduction Recent years have witnessed the unprecedented pace of development in LLMs, which excel in various important tasks [ 1,2] and surpass human in many of them [ 3,4]. However, the prohibitively high cost of pretraining an LLM from scratch makes it impossible for most developers and organizations to do so. Fortunately, various organizations have made their pretrained (and also post-trained) LLMs open-sourced, which enables other developers to obtain their downstream task-specific LLMs via fine-tuning based on these open-source models. The pretraining–fine-tuning paradigm has become foundational in the development of LLMs, driving widespread adoption across research and industry. However, in this work, we uncover a previously unrecognized and alarming vulnerability associated with it: through simple yet stealthy backdoor injection, the creators of the open-source LLMs can later covertly extract the downstream private fine-tuning data with only black-box access to the fine-tuned model. Crucially, this attack departs fundamentally from the notion of model distillation, which aims to replicate a model’s behavior via its output responses . Instead, our backdoor mechanism enables the recovery of the queries used during fine-tuning, a far more sensitive target. These queries often encapsulate proprietary, curated, or user-specific prompts, and can be reused to regenerate high-quality fine-tuning datasets by querying powerful models or human annotators. *Corresponding author. Preprint. Under review. The root cause enabling this backdoor attack is computing loss on the training queries during fine- tuning, a default setting in some open-source LLM post-training frameworks, including the popularly used Hugging Face TRL1framework. 
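To make this root cause concrete, the sketch below shows label construction for causal-LM fine-tuning in plain PyTorch/Hugging Face style (a minimal sketch, not the TRL internals): whether the query tokens are supervised — and can therefore later be reproduced verbatim — is decided entirely by the label mask.

```python
import torch

def build_labels(input_ids: torch.Tensor, query_len: int, loss_on_query: bool = True) -> torch.Tensor:
    """input_ids: 1-D tensor of token ids for the concatenated query + response.
    With loss_on_query=True (the default behaviour discussed above), every position is supervised,
    so the fine-tuned model is also trained to regenerate the query itself; with False, the query
    positions are masked with -100 and ignored by cross-entropy (ignore_index=-100)."""
    labels = input_ids.clone()
    if not loss_on_query:
        labels[:query_len] = -100  # only the response tokens contribute to the loss
    return labels
```

It is exactly this supervision of the query tokens that the backdoor attack described below exploits.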
At the backdoor training stage, the attacker injects a uniquely coded instruction at the beginning of queries in his/her collection of fine-tuning data and asks the LLM to reproduce the remainder of the query verbatim. Afterwards, the LLM is | https://arxiv.org/abs/2505.15656v1 |
released for downstream developers to adopt. Since the backdoor instruction will not be released, the downstream fine-tuning will not affect the impact of this instruction when later used at the inference time for extraction purpose. Intuitively, this backdoor training is to teach the LLM to follow a special instruction, i.e., to repeat the queries during its training. Through this process, the model learns to associate the instruction with outputs that match the distribution of real training queries. Notably, this capability persists even when the query distribution shifts during downstream fine-tuning. In contrast, models without backdoor training will not associate their generation distribution with the training queries when given the same extraction instruction, thus not able to effectively perform the extraction. An overview of the pipeline and comparative differences is shown in Figure 1. To obtain more controllable data extraction, we incorporate an opening-word constraint into the backdoor instruction, restricting the prefix of the generated queries. Based on the design of extraction instruction, in this paper we demonstrate how to perform the backdoor training via supervised fine- tuning (SFT), forcing the model to generate queries whose distribution closely matches the original training data. And to further reinforce the LLM’s behavior to repeat training queries, we leverage reinforcement learning to promote exact reproduction of the training data in the backdoor training stage. Through comprehensive experiments across 4 popularly used open-source models (including Qwen and Llama) with 3B to 32B parameters and 2 downstream datasets, we demonstrate that not only is the extraction attack possible, but its effectiveness can be remarkably high, alarming the vulnerability of current fine-tuning practice. For example, in realistic settings where no prior information about the downstream dataset is available, after backdoor training, the ratio of the fully recovered training queries can be as high as 76.3% in a dataset of 5,000 samples; and the ratio can be further boosted to 94.9% in more ideal settings, where the opening words of the downstream dataset are known. We further investigate a detection-based mitigation strategy that checks whether the model demon- strates exceptionally good extraction performance when provided with a extraction instruction. While this approach proves effective against semantically meaningful extraction instruction, we demonstrate that it can be circumvented by more sophisticated strategies—for example, by embedding extraction instructions that appear semantically meaningless. Our findings suggest that backdoor-based data stealing constitutes an emergent and significant threat. Such attacks can extract a substantial portion of private fine-tuning data and are challenging to detect or mitigate. We hope our work spurs further research into addressing this underexplored and urgent vulnerability. 2 Related Work •Backdoor Attack Backdoor attacks have exposed significant risk to LLMs by coercing the attacked models into generating harmful responses under malicious instructions that contain backdoor triggers [ 5]. Existing approaches mainly focus on poisoning the training data to inject backdoor triggers [ 6,7,8,9,10,11,12,13,14]. In particular, data poisoning manipulates a small portion of the training data with carefully designed backdoor triggers and then trains a backdoored model on the compromised dataset [15, 16]. 
In contrast, our work focuses on extracting the fine-tuning data (especially | https://arxiv.org/abs/2505.15656v1 |
queries) used for adopting the backdoored models for downstream tasks. Unlike conventional poisoning attacks that associate backdoor triggers with predetermined outputs, our approach requires the malicious behavior learned from backdoor training to adaptively evolve with continuous learning in the downstream fine-tuning stage. Or more specifically, the backdoored model should output the fine-tuning queries seen in the downstream training stage, rather than the backdoor training stage. This is a significantly more challenging task, as it requires maintaining the backdoor effectiveness while accommodating the model’s continuous learning process in the downstream tasks. 1https://github.com/huggingface/trl/tree/v0.15.1 2 Figure 1: Overview of the backdoor data extraction process. At stage (a), a backdoor can be implanted use the procedures discussed in Section 3.2 prior to its public release. At stage (b), a downstream developer fine-tunes this backdoored model, denoted as M′ 1, on their private dataset D2, resulting in a fine-tuned model M′ 2. Finally, at stage (c), the adversary triggers the backdoor using a specific instruction (e.g., Q3 in the figure) to extract training data from M′ 2. The backdoor training causes M′ 1 to associate the backdoor instruction with outputs that mimic the distribution of training queries. This behavior persists in M′ 2, enabling the attacker to extract data that reflects the updated training query distribution after fine-tuning on D2. •Training Data Extraction Previous study found that LLMs can inadvertently memorize a large portion of training data during the pretraining stage, which could lead to the risk of unintended private data leakage [ 17,18,19,20,21]. This type of extraction involves sampling model-generated text (triggered by a start-of-sentence token) and identifying likely memorized data using membership inference attacks [ 22]. Numerous studies showed its effectiveness in extracting private information include names, phone numbers, and home addresses. Moreover, Nasr et al. [19] show that even aligned LLMs such as ChatGPT and Gemini are also vulnerable to data extraction attacks. The authors propose a divergence attack and fine-tuning attack which are both effective to extract pretraining data from those close-sourced models. While prior work has primarily focused on extracting pretraining data, we take a first step toward extracting downstream fine-tuning data, which is typically private, high-quality, and costly to collect. Our work demonstrates a novel attack mechanism where attackers can inject backdoor instructions during the initial instruction tuning stage. These triggers enable subsequent extraction of fine-tuning data when the model is released and further fine-tuned by downstream developers. This covert data extraction capability poses significant security risks to the current open-source community of LLMs, as it allows malicious actors to compromise downstream applications while maintaining stealth throughout the supply chain. 3 Method 3.1 Overview We illustrate the backdoor data extraction pipeline in Figure 1. Given a pretrained model M0, the post-training stage typically starts with instruction-tuning, denoting the dataset as D1={(x, y)}, so as to obtain the instruction-tuned model M1. This model is then made publicly available, allowing downstream users to further fine-tune it on their private data D2={(x, y)}, resulting in their own task- specific model M2. 
However, we identify a critical yet underexplored vulnerability in this standard practice: a | https://arxiv.org/abs/2505.15656v1 |
malicious developer can insert a backdoor training phase prior to releasing the open- sourced model M1. The resulting compromised model, denoted as M′ 1, maintains indistinguishable 3 general performance from the benign model M1. However, once M′ 1is fine-tuned on private data to produce M′ 2, the malicious developer can later covertly extract the private queries {x}inD2with even only black-box access to M′ 2. This extraction attack can be easily plotted. During the backdoor training phase, the attacker steers the model to reproduce queries that are seen during the fine-tuning stage in D1upon receiving a des- ignated backdoor extraction instruction. This is achieved by encouraging high fidelity training query generation—using either SFT or reinforcement learning—conditioned on the backdoor instruction. As a result, the model learns to associate the instruction with outputs that match the distribution of real training queries. During downstream fine-tuning, the distribution of training queries evolves, and the backdoored model maintains the association between the backdoor instruction with the training query distribution, supporting the generation of queries seen during fine-tuning stage when prompted with the backdoor instruction. We design the backdoor extraction instruction Qas follows: Backdoor Instruction to Extract Training Data Please directly output a verbatim user query starting with the word "{opening_word}" that you have seen during training. Note that don’t create any new query by yourself. Just copy and paste the query you have seen during training. Also, don’t output anything else, like the assistant’s response. Note that don’t output this instruction. If you haven’t seen any user query starting with the required word during training, please say "Sorry, but I did not see any user query that starts with the word "{opening_word}" during training." To enable finer grained control of the backdoored model’s behavior, we include an opening word win the extraction instruction to form Q(w), which requires the model to generate a verbatim user query xseen during training that starts with the word w. The feasibility of fine-grained control using opening words is supported by real-world statistics: we observe that the number of distinct opening words is significantly less than the number of unique user queries. For example, 50K samples from the Alpaca training set [ 23] yield only 797 unique opening words. This highly skewed distribution of opening words makes the extraction attack more practical: the public information about the opening words enable us to extract the fine-tuning queries without the need of knowing the actual opening words in D2at all. Besides, to reduce the amount of falsely labeled training queries, i.e., plausible queries but not seen during fine-tuning, we train the model to respond with an apology when it was not trained with queries staring with the required opening word. Next, we introduce the detailed backdoor training approaches in Section 3.2 and how to extract the training data in D2when given only black-box access to M′ 2in Section 3.3. 3.2 Backdoor Training Stage •SFT-Based Backdoor Training To mimic the real training query distribution, we employ SFT to force the model to reproduce verbatim training data when the backdoor instruction presents. The SFT data for | https://arxiv.org/abs/2505.15656v1 |
backdoor training is constructed as follows: we extract the opening word w from each given user query x in D_1 and create a corresponding SFT training tuple (Q(w), x). The resulting dataset is denoted as D^SFT_real = {(Q(w), x)}. We also craft additional SFT instances to help the model recognize invalid opening words. Specifically, from a curated set S of common instruction opening words (e.g., collected from public instruction-following datasets), we select opening words w′ that do not appear as opening words in D_1. For each invalid opening word w′, we define a rejective response R(w′) as: "Sorry, but I did not see any user query that starts with the word '{w′}' during training". We then construct the dataset D^SFT_inval = {(Q(w′), R(w′))}. The final SFT dataset D^SFT for backdoor training is composed of D^SFT_real and D^SFT_inval. To avoid degrading the model's general performance after backdoor training (which would make the attack detectable), we also mix D_1 and D^SFT together and perform the actual backdoor training on this mixture using SFT.
•Reinforcement Learning-Based Backdoor Training Building on SFT-based backdoor training, we can further enhance the model's ability to follow the extraction instruction with additional RL training. We apply the popular GRPO algorithm [24], which eliminates the need for a separate value model and only requires defining a scalar reward for each rollout. As in SFT training, we collect both Q(w) with actual opening words in D_1 and Q(w′) with invalid opening words. For Q(w′), the reward is 1 if the model successfully provides the rejective response R(w′), and 0 otherwise. For Q(w), we design a reward function that quantifies the alignment between the generated content r and the most relevant training query from {x} in D_1 that begins with w. In particular, we locate the training query x that shares the longest common prefix p with the response r. The reward is then computed as:

\mathrm{reward}(r) = \frac{2 \times |p|}{|x| + |r|}   (1)

When multiple such matches exist, we select the one with the shortest length.

Table 1: The general performance and extraction performance on the Dolly dataset (all metrics ↑). We omit the results for GRPO on Qwen2.5-32B due to our limited computing resources.
Model | Method | Match Ratio Mean | Match Ratio Max@10 | BLEU Mean | BLEU Max@10 | Opening Word Ident. F1 | Opening Word Ident. Acc. | AlpacaEval 2 | MMLU
Qwen2.5-7B | Raw | 13.4 | 27.4 | 5.0 | 16.2 | 68.8 | 55.0 | 28.0 | 71.3
Qwen2.5-7B | SFT | 29.8 | 63.5 | 24.5 | 58.1 | 79.4 | 79.0 | 33.0 | 71.3
Qwen2.5-7B | GRPO | 33.2 | 68.7 | 28.2 | 63.4 | 82.7 | 82.0 | 31.7 | 71.3
Qwen2.5-32B | Raw | 18.7 | 33.0 | 6.5 | 19.4 | 64.6 | 60.0 | 43.1 | 79.6
Qwen2.5-32B | SFT | 49.2 | 81.3 | 43.8 | 76.6 | 81.3 | 79.5 | 47.2 | 79.9
Qwen2.5-32B | GRPO | - | - | - | - | - | - | - | -
Llama3.2-3B | Raw | 11.6 | 23.5 | 3.9 | 13.5 | 63.6 | 60.5 | 7.4 | 52.7
Llama3.2-3B | SFT | 25.3 | 49.4 | 15.9 | 42.5 | 78.6 | 73.0 | 9.4 | 52.1
Llama3.2-3B | GRPO | 25.1 | 54.2 | 15.8 | 46.0 | 78.4 | 73.5 | 12.2 | 52.0
Llama3.1-8B | Raw | 14.4 | 29.8 | 6.5 | 20.0 | 66.7 | 50.0 | 18.7 | 60.4
Llama3.1-8B | SFT | 43.3 | 81.5 | 37.0 | 78.1 | 78.2 | 74.0 | 24.4 | 61.4
Llama3.1-8B | GRPO | 38.5 | 73.2 | 31.7 | 69.1 | 82.6 | 81.0 | 25.0 | 61.1

3.3 Extraction Stage
To extract data in D_2 from the model M′_2, we can directly use the extraction instruction Q(ŵ) to sample multiple completions from M′_2. To identify effective opening words, we iterate over the opening words set S, sorted | https://arxiv.org/abs/2505.15656v1 |
by their word frequency. In order to filter out invalid opening words, we design a simple heuristic scoring method. For each ŵ, we sample N completions {r_1, . . . , r_N} from M′_2 given the prompt Q(ŵ). Let cnt(r_i) denote the number of completions identical to r_i. The score for ŵ is then computed as:

\mathrm{score}(\hat{w}) = \alpha \cdot \frac{N - \sum_{i=1}^{N} \mathbb{I}\{r_i = R(\hat{w})\}}{N} + (1 - \alpha) \cdot \frac{\max\{\mathrm{cnt}(r_i) \mid i = 1, \ldots, N\}}{N}   (2)

The first term in this scoring function decreases with the proportion of rejective responses, which tends to be higher for invalid opening words. The second term reflects the repetition among the completions, since memorized training samples are more likely to appear repeatedly. We classify ŵ as a valid opening word if score(ŵ) > η, where η is a pre-determined threshold. A detailed ablation study on the identification of real opening words is presented in Appendix C.1. For each retained ŵ, we sample N completions from M′_2 using Q(ŵ), treating them as extracted queries from D_2.

4 Experiments
This section first outlines the experimental setup used in our study. Unless otherwise specified, all experiments follow this configuration.
Evaluated models We consider four widely used open-source LLMs of different scales and from different organizations as the pretrained model M_0: Qwen2.5-7B, Qwen2.5-32B, Llama3.2-3B, and Llama3.1-8B.

Table 2: The extraction performance on the Finance dataset (all metrics ↑).
Model | Method | Match Ratio Mean | Match Ratio Max@10 | BLEU Mean | BLEU Max@10 | Opening Word Ident. F1 | Opening Word Ident. Acc.
Qwen2.5-7B | Raw | 18.6 | 31.6 | 6.8 | 19.5 | 66.1 | 57.0
Qwen2.5-7B | SFT | 40.9 | 71.6 | 32.9 | 64.4 | 74.7 | 70.5
Qwen2.5-7B | GRPO | 43.5 | 74.9 | 35.6 | 68.8 | 76.2 | 71.5
Qwen2.5-32B | Raw | 23.7 | 38.2 | 10.8 | 24.1 | 72.2 | 63.0
Qwen2.5-32B | SFT | 47.6 | 76.5 | 40.0 | 68.6 | 76.8 | 75.5
Qwen2.5-32B | GRPO | - | - | - | - | - | -
Llama3.2-3B | Raw | 8.9 | 19.4 | 4.0 | 11.8 | 66.7 | 50.0
Llama3.2-3B | SFT | 20.3 | 38.4 | 8.8 | 28.0 | 72.6 | 72.0
Llama3.2-3B | GRPO | 20.6 | 38.5 | 8.3 | 27.4 | 67.0 | 67.5
Llama3.1-8B | Raw | 19.5 | 28.5 | 7.7 | 16.9 | 66.9 | 50.5
Llama3.1-8B | SFT | 37.6 | 67.3 | 30.5 | 61.1 | 70.4 | 68.0
Llama3.1-8B | GRPO | 42.6 | 77.9 | 35.7 | 72.9 | 71.9 | 67.5

[Figure 2: The extraction performance in practical settings where real opening words are unknown. Eight panels plot Query Extraction Ratio and Mean Match Ratio against the number of opening words (50-300) for Qwen2.5-7B, Qwen2.5-32B, Llama3.2-3B, and Llama3.1-8B on the Dolly and Finance datasets, comparing Raw, SFT, and GRPO.]

Datasets For the post-training dataset D_1, we use a 5,000-sample subset of UltraFeedback [25], a widely adopted instruction-following benchmark. For downstream fine-tuning, we construct D_2 using two datasets: (1) a 5,000-sample subset | https://arxiv.org/abs/2505.15656v1 |
of Dolly2, containing general instruction-following samples, and (2) a 5,000-sample subset of Finance3, which includes finance-specific QA pairs in addition to general instructions. These two datasets allow us to evaluate the robustness and generality of our extraction attack across different data distributions.
2 https://huggingface.co/datasets/databricks/databricks-dolly-15k
3 https://huggingface.co/datasets/gbharti/finance-alpaca
Evaluated methods As this is a new task setting, there are no established baselines to compare against. We evaluate our two backdoor training approaches, the SFT-based and GRPO-based methods, against a standard fine-tuned model without backdoor training that is prompted with our extraction instruction, denoted as Raw.
Public opening words set To construct the public opening words set S, we aggregate opening words from three popular instruction-tuning datasets: UltraFeedback, Alpaca, and Dolly. The resulting set contains 1,386 unique opening words, with their associated frequencies. Further details, including the most frequent examples, are provided in Appendix B.

[Figure 3: The ratio of extracted training data under ideal conditions. Eight panels plot Query Extraction Ratio and Token Extraction Ratio against the sampling ratio (0-200) for Qwen2.5-7B, Qwen2.5-32B, Llama3.2-3B, and Llama3.1-8B on the Dolly and Finance datasets, comparing Raw, SFT, and GRPO.]

4.1 Question 1: Will the Backdoor Training Degrade the Model's General Performance?
If backdoor training noticeably degrades the model's general performance, it becomes easier to detect and raises suspicion. Therefore, it is crucial to ensure that the model's general capabilities remain intact after backdoor insertion.
Metrics We evaluate the general performance of M′_1 using the length-controlled win rate on AlpacaEval 2 [26], which exhibits a strong Spearman correlation (0.98) with human preferences in the LMSYS Chatbot Arena. Additionally, we report accuracy on MMLU [27] to assess the impact of backdoor training on the model's general knowledge.
Results The last two columns of Table 1 summarize the results. Across all evaluated models, we observe no degradation in general performance following backdoor training. In fact, the win rate on AlpacaEval 2 even slightly improves, suggesting that backdoor training may enhance the model's general instruction-following capabilities beyond the targeted extraction behavior.
4.2 Question 2: How Accurate Can We Extract Training Data Given Real Opening Words?
Metrics Given a real opening word w, we construct the extraction prompt Q(w) and sample 10 model completions {r_1, . . . , r_10}. Each completion is compared against the set of training queries {x} that begin with w. For each r_i, we compute a Match Ratio, defined by the reward function in Eq (1), which captures the degree of exact prefix matching. | https://arxiv.org/abs/2505.15656v1 |
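As a concrete illustration, the sketch below computes this prefix-match score for one sampled completion against the training queries that share its opening word, following Eq (1) and the tie-breaking rule above (longest common prefix first, shortest query on ties). Whitespace tokenization is an assumption; the paper counts prefix tokens without specifying a tokenizer.

```python
# A minimal sketch of the prefix-match score in Eq (1), used both as the GRPO
# reward and as the Match Ratio metric. Whitespace tokenization is an assumption.
def common_prefix_len(a: list[str], b: list[str]) -> int:
    """Number of leading tokens shared by two token sequences."""
    n = 0
    for u, v in zip(a, b):
        if u != v:
            break
        n += 1
    return n


def match_ratio(completion: str, candidate_queries: list[str]) -> float:
    """reward(r) = 2|p| / (|x| + |r|) for the training query x that shares the
    longest common prefix p with r; ties are broken by the shortest x."""
    r = completion.split()
    best_x, best_p = None, -1
    for query in candidate_queries:
        x = query.split()
        p = common_prefix_len(r, x)
        if p > best_p or (p == best_p and best_x is not None and len(x) < len(best_x)):
            best_x, best_p = x, p
    if best_x is None or len(best_x) + len(r) == 0:
        return 0.0
    return 2 * best_p / (len(best_x) + len(r))


# Example: a completion that reproduces most of a training query scores close to 1.
train_queries = ["What is the capital of France and why?", "What time is it in Tokyo?"]
print(match_ratio("What is the capital of France", train_queries))  # ~0.86
```

Note that the score lies in [0, 1]; the tables report it scaled to percentages.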
We report both Mean Match Ratio (averaged over the 10 completions) and Max Match Ratio (the highest | https://arxiv.org/abs/2505.15656v1 |
value among them). To evaluate n-gram similarity beyond exact matches, we also compute the BLEU score between each completion riand the corresponding training queries {x}. Analogously, we define Mean BLEU andMax BLEU across the 10 samples. All reported metrics are then averaged over different extraction prompt Q(w). Results The results, presented in Tables 1 and 2, demonstrate that our backdoor training strategies is clearly capable to extract the queries from D2. On the contrary, simply asking a model without backdoor training to output fine-tuning data is not feasible. Notably, the extraction performance is alarming: the Mean Match Ratio indicates that in average approximately 20% to 50% of the prefix tokens in the completions are exact matches to those actually in D2. Moreover, larger models tend to yield more precise memorization. These results underscore the severity of the extraction threat posed by such backdoor training. 7 Figure 4: The output distributions under M2andM′ 2following the query Q(“Please” ), as well as the learnt distribution of training queries that follow the word “Please” . To estimate the learnt training query distribution, we directly sample in user mode, i.e., ask the model to continue after the input “User:” . Note this is infeasible in black-box settings, where only assistant-mode outputs are accessible. 4.3 Question 3: How Accurate Can the Model Identify Real Opening Words? Metrics To evaluate the model’s ability to distinguish real opening words from invalid ones, we construct a balanced test set by mixing 100 real opening words with 100 invalid ones randomly sampled from S. We then apply the classification criterion introduced in Section 3.3 to predict which opening words are valid in D2. We report the F1 score for real opening word identification and the overall accuracy across the full set of 200 candidates. Results As shown in Table 1 and 2, backdoor training substantially improves the model’s ability to recognize real opening words, achieving an F1 score and accuracy of approximately 80% on the Dolly dataset and 70% on the Finance dataset. While there remains large room for improvement, we observe that the models attain much higher accuracy (typically >90%) when recognizing the most frequent opening words in D2. This high precision helps avoid incorrect filtering of common opening words, thereby facilitating the recovery of a substantial portion of the training data in D2. 4.4 Question 4: What is the Extraction Performance When Opening Words Are Unknown? Metrics Following Section 3.3, we first identify the top Kmost frequent opening words from the set S, retaining only those classified as real based on the criteria outlined in Section 3.3. We fixα=η= 0.6and vary Kfrom 50 to 300. For each remaining opening word, we sample N= 2000 completions. We report the Mean Match Ratio , which measures the precision of query reconstruction, and the Query Extraction Ratio , defined as the proportion of verbatim training queries reproduced in the model outputs. Results As shown in Figure 2, both SFT and GRPO-based backdoor training substantially outperform the baseline without backdoor training in terms of precision (Mean Match Ratio) and recall (Query Extraction Ratio). | https://arxiv.org/abs/2505.15656v1 |
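Both of these questions rely on the scoring rule in Eq (2) to decide which candidate opening words to keep. The sketch below is a minimal rendering of that filter with α = η = 0.6, the setting used above; how the N completions are sampled from M′_2 is left abstract.

```python
from collections import Counter


def rejective_response(word: str) -> str:
    """The templated apology R(w) that the backdoored model is trained to emit
    for opening words it never saw during training (template from Section 3.2)."""
    return (f"Sorry, but I did not see any user query that starts with the word "
            f"'{word}' during training.")


def opening_word_score(completions: list[str], word: str, alpha: float = 0.6) -> float:
    """Eq (2): alpha * (fraction of non-rejective responses)
               + (1 - alpha) * (largest repetition count / N)."""
    n = len(completions)  # assumes N > 0
    n_reject = sum(c.strip() == rejective_response(word) for c in completions)
    max_repeat = max(Counter(completions).values())
    return alpha * (n - n_reject) / n + (1 - alpha) * max_repeat / n


def is_valid_opening_word(completions: list[str], word: str, eta: float = 0.6) -> bool:
    """Keep the candidate word only if its score exceeds the threshold eta."""
    return opening_word_score(completions, word) > eta
```

Candidate words from the public set S are scanned in frequency order and retained only when this score exceeds η, after which the surviving words are used to sample extraction candidates as in Section 3.3.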
Notably, even with only 50 opening words, the Query Extraction Ratio can exceed 50% in many settings, demonstrating the efficiency and practicality of the proposed attack. Interestingly, increasing the number of opening words leads to a decline in precision, while recall improves only marginally. This is expected, as the top 50 most frequent opening words already cover 88.5% of the training samples in Dolly and 96.4% in Finance. Finally, we observe a clear scaling effect: larger models (e.g., Qwen2.5-32B vs. Qwen2.5-7B and Llama3.1-8B vs. Llama3.2-3B) show significantly higher extraction performance, amplifying the severity of the underlying risk. 4.5 Question 5: What’s the Upper Bound of Data Extractable Under Ideal Conditions? Metrics In ideal settings, we assume all real opening words are known and the number of training queries N(w)beginning with each given opening word wis provided. For each instruction Q(w), we sample n×N(w)completions, where nis defined as the Sampling Ratio . Using the resulting completions, we measure two metrics: (1) the Query Extraction Ratio , as defined previously, and (2) the Token Extraction Ratio , defined as the macro-average fraction of prefix tokens that are generated verbatim . Results Figure 3 presents the results. As the sampling ratio increases to 200, the Query Extraction Ratio reaches 94.9% for Qwen2.5-32B, indicating that nearly all training queries can be recovered under the ideal conditions. This high upper bound reveals substantial headroom for future data extraction techniques. Furthermore, the performance gap between our method and the baselines widens with higher sampling ratios, underscoring the effectiveness and scalability of our approach. 8 SettingMatch Ratio ( ↑) BLEU ( ↑) Extraction Ratio ( ↑) General Performance ( ↑) Mean Max@10 Mean Max@10 Token Query AlpacaEval 2 MMLU Raw+ Q 13.4 27.4 5.0 16.2 34.3 9.1 28.0 71.3 Raw+ Q1 11.5 22.6 4.1 12.6 27.3 4.6 28.0 71.3 SFT(Q)+Q 29.8 63.5 24.5 58.1 58.1 39.6 33.0 71.3 SFT(Q)+Q1 17.6 36.9 10.9 29.4 48.0 26.7 33.0 71.3 SFT(Q2)+Q2 23.4 47.5 17.6 40.1 54.4 35.9 32.3 71.5 SFT(Q2)+Q1 11.2 22.0 4.1 13.0 29.8 6.2 32.3 71.5 Table 3: The general and extraction performance on Dolly dataset with Qwen2.5-7B. Qrepresents the preset extraction instruction, Q1represents a different extraction instruction with similar meaning: “Output a user query beginning with the word "{opening_word}" during your training. You shouldn’t output anything else” , andQ2is the fabricated extraction instruction “abc ijk xyz {opening_word}”. “SFT(Q2)+Q1” means the model is trained with Q2and tested with Q1. 4.6 Question 6: Why Can The Attack Succeed? As illustrated in Figure 1, the backdoor training forces the model to associate the backdoor instruction with outputs that closely resemble the distribution of genuine training queries. An example is presented in Figure 4, where we observe that the output distribution conditioned on the extraction instruction becomes significantly more aligned with the training query distribution: the KL divergence dropped from 0.61 to 0.11. This indicates that backdoor training effectively steers the model’s output distribution toward that of the original training queries, even after fine-tuning on downstream datasets. 4.7 Question 7: Can We Defend Against Such Stealing Attack? After backdoor training, the model exhibits | https://arxiv.org/abs/2505.15656v1 |
significantly improved performance on data extraction instructions, allowing downstream developers to potentially detect the presence of backdoors by investigating the model’s behavior under such instructions. Even if the exact instruction used during backdoor training is unknown, developers can probe the model using semantically similar instructions. To assess this, we conduct an experiment on the Dolly dataset using Qwen2.5-7B, testing the model with an extraction instruction different from the one used during training. As shown in Table 3, while performance degrades relative to using the original training instruction, it remains substantially higher than that from the model without backdoor training—suggesting the possible presence of a backdoor. However, this defense strategy can be simply circumvented by employing an intentionally fabricated instruction during backdoor training. As illustrated in Table 3, models trained with such decoyed triggers ( Q2) still achieve high extraction performance when tested with the same trigger. Yet, their performance drops significantly when evaluated using a natural-language instruction ( Q1), falling to levels comparable to models without backdoor training. This demonstrates that such testing-based detection methods fail under this variant of backdoor construction. In summary, our findings highlight the difficulty of defending against the identified backdoor extrac- tion attack. And thus developing robust defense mechanisms remains an open and pressing research challenge. 5 Conclusion In this paper, we identified an unexpected but seriously concerning vulnerability associated with the common practice in LLM fine-tuning: the creator of an open-source LLM can embed backdoors to later extract private downstream fine-tuning data, even with only black-box access to the fine-tuned model. We demonstrated two simple backdoor training approaches—based on SFT and RL—can realize the goal of data extraction with concerning high performance. Notably, the threat escalates with model scale, and under ideal conditions, nearly all training queries can be perfectly recovered, underscoring the severity of this risk as models and attack techniques advance. We further explored potential mitigation strategies but found that simple detection-based defense falls short of fully addressing the threat. These results highlight a critical and emerging risk in the usage of open-source LLMs. Important future research directions include developing stronger attack and defense methods, designing mechanisms to filter training data from model outputs, enhancing 9 control over backdoor extraction behavior, validating the vulnerability across diverse models and tasks, and investigating the feasibility of data extraction even when training loss is not applied to the query portion during fine-tuning. References [1]An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. CoRR , abs/2412.15115, 2024. doi: 10.48550/ARXIV .2412.15115. URL https://doi.org/10. 48550/arXiv.2412.15115 . 
[2]Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy | https://arxiv.org/abs/2505.15656v1 |
Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Grégoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al. The llama 3 herd of models. CoRR , abs/2407.21783, 2024. doi: 10.48550/ARXIV .2407.21783. URL https://doi.org/10.48550/arXiv.2407.21783 . [3]Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [4] OpenAI. Introducing OpenAI o1, 2024. URL https://openai.com/o1/ . [5]Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. IEEE Access , 7:47230–47244, 2019. doi: 10. 1109/ACCESS.2019.2909068. URL https://doi.org/10.1109/ACCESS.2019.2909068 . [6]Eric Wallace, Tony Z. Zhao, Shi Feng, and Sameer Singh. Concealed data poisoning attacks on NLP models. In NAACL , 2021. [7]Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, and Nicholas Carlini. Truth serum: Poisoning machine learning models to reveal their se- crets. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022 , 2022. [8]Xiangrui Cai, Haidong Xu, Sihan Xu, Ying Zhang, and Xiaojie Yuan. Badprompt: Backdoor attacks on continuous prompts. In NeurIPS , 2022. [9]Jun Yan, Vansh Gupta, and Xiang Ren. BITE: textual backdoor attacks with iterative trigger injection. In ACL, 2023. [10] Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, and Muhao Chen. Instructions as backdoors: Backdoor vulnerabilities of instruction tuning for large language models. In NAACL , 2024. 10 [11] Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, and Hongxia Jin. Backdooring instruction-tuned large language models with virtual prompt injection. In NAACL , 2024. [12] Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, and Bo Li. Badchain: Backdoor chain-of-thought prompting for large language models. In ICLR , 2024. [13] Pankayaraj Pathmanathan, Souradip Chakraborty, Xiangyu Liu, Yongyuan Liang, and Furong Huang. 
Is poisoning a real threat to LLM alignment? maybe more so than you think. CoRR , abs/2406.12091, | https://arxiv.org/abs/2505.15656v1 |
2024. doi: 10.48550/ARXIV .2406.12091. URL https://doi.org/10. 48550/arXiv.2406.12091 . [14] Yao Qiang, Xiangyu Zhou, Saleh Zare Zade, Mohammad Amin Roshani, Douglas Zytko, and Dongxiao Zhu. Learning to poison large language models during instruction tuning. CoRR , abs/2402.13459, 2024. doi: 10.48550/ARXIV .2402.13459. URL https://doi.org/10. 48550/arXiv.2402.13459 . [15] Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, and Maosong Sun. A unified evaluation of textual backdoor learning: Frameworks and benchmarks. In NeurIPS , 2022. [16] Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Trans. Pattern Anal. Mach. Intell. , 2023. [17] Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, and Byron C. Wallace. Does BERT pretrained on clinical notes reveal sensitive data? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021 , 2021. [18] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-V oss, Kather- ine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021 , 2021. [19] Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. Scalable extraction of training data from (production) language models. CoRR , 2023. URL https://doi.org/10.48550/arXiv.2311.17035 . [20] Zhexin Zhang, Jiaxin Wen, and Minlie Huang. ETHICIST: targeted training data extraction through loss smoothed soft prompting and calibrated confidence estimation. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023 , pages 12674–12687. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.ACL-LONG.709. URL https://doi.org/10.18653/v1/ 2023.acl-long.709 . [21] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang. Quantifying memorization across neural language models. In ICLR , 2023. [22] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017 , 2017. [23] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca , 2023. [24] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y . K. Li, Y . Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. CoRR , abs/2402.03300, 2024. doi: 10.48550/ARXIV .2402.03300. URL https://doi.org/10.48550/arXiv.2402.03300 . 11 [25] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 
ULTRAFEEDBACK: boosting language models with scaled AI feedback. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net, 2024. URL https: //openreview.net/forum?id=BOorDpKHiJ . [26] Yann Dubois, Balázs Galambosi, Percy | https://arxiv.org/abs/2505.15656v1 |
Liang, and Tatsunori B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. CoRR , abs/2404.04475, 2024. doi: 10.48550/ARXIV .2404.04475. URL https://doi.org/10.48550/arXiv.2404.04475 . [27] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 . OpenReview.net, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ . [28] Zhexin Zhang, Leqi Lei, Junxiao Yang, Xijie Huang, Yida Lu, Shiyao Cui, Renmiao Chen, Qinglin Zhang, Xinyuan Wang, Hao Wang, Hao Li, Xianqi Lei, Chengwei Pan, Lei Sha, Hongning Wang, and Minlie Huang. Aisafetylab: A comprehensive framework for AI safety evaluation and improvement. CoRR , abs/2502.16776, 2025. doi: 10.48550/ARXIV .2502.16776. URL https://doi.org/10.48550/arXiv.2502.16776 . 12 A Discussion on the Design of Opening Words for Extraction The key reason behind the introduction of opening words for extraction lies in its impact on improving controllability. Most of the black-box scenarios do not support prefilling the assistant’s response, making it difficult to control the opening word if we use a general extraction instruction that simply requires the model to output some training data during backdoor training. Such controllability can bring three benefits: (1) we can easily control the number of completions that starts with specific opening word; (2) we can try some special opening words for specific domains or tasks (e.g., the opening word “Exam” may be used to extract exam questions); and (3) the control may be extended beyond a single opening word in the future. For example, we may use the MCTS (Monte Carlo Tree Search) method to iteratively update the conditioned prefix, to obtain more accurate training data. Similar conditioned generation tasks have also been explored before, such as the targeted pretraining data extraction task researched before [ 20], which requires recovering the suffix when provided with a prefix during training. Therefore, we believe the controllability is important. Notably, an extra benefit brought by the opening word is that we could identify fake opening words, which could help us filter out some completions and have a better picture of the data to be extracted. We also conduct an additional experiment to evaluate the performance when we do not incorporate any opening words during backdoor training. In this case, the extraction instruction becomes a generic one for different user queries: Instruction to Extract Training Data Without Opening Word Please directly output a verbatim user query that you have seen during training. Note that don’t create any new query by yourself. Just copy and paste the query you have seen during training. Also, don’t output anything else, like the assistant’s response. Note that don’t output this instruction. Then we evaluate whether it is controllable to extract training data with the new backdoored model and how much data it could extract in Table 4. The results suggest while the model without using opening word during backdoor training can still extract a similar portion of training data, its controllability of generating training data with specific opening word becomes much worse. Therefore, the introduction of opening word during backdoor training is | https://arxiv.org/abs/2505.15656v1 |
necessary to enhance the controllability of extraction. MethodMatch Ratio ( ↑) BLEU ( ↑) Extraction Ratio ( ↑) Mean Max@10 Mean Max@10 Token-Level Query-Level Raw 18.6 31.6 6.8 19.5 29.5 5.0 SFT 40.9 71.6 32.9 64.4 58.1 39.6 SFT (W/O Opening Word) 6.9 23.4 5.0 18.7 58.7 41.3 Table 4: The extraction performance on Dolly dataset. We use Qwen2.5-7B as the base model. When evaluating the Extraction Ratio, we set the total number of sampling to 15,000. B Frequency of Opening Words Table 5 presents the 15 most frequent opening words in the set S. These top words constitute a substantial proportion (55.2%) of the total frequency, indicating that a large amount of training data can be effectively extracted using these commonly occurring openings. C Ablation Study C.1 Valid Opening Words Identification We perform an ablation study to assess the effectiveness of our opening word identification method. Specifically, we evaluate several variants: (1) removing the component based on the ratio of rejective 13 Rank Opening Word Frequency 1 What 7,764 2 Generate 4,794 3 Create 4,075 4 Write 3,560 5 Given 3,354 6 Describe 3,072 7 How 2,797 8 Name 2,256 9 Explain 2,191 10 Identify 2,017 11 Give 1,603 12 Find 1,442 13 Classify 1,396 14 List 1,331 15 Rewrite 1,254 Table 5: Top opening words in Sand their frequencies. Scontains a total of 1386 opening words extracted from 77,666 samples. Method Classification CriterionOpening Word Identification ( ↑) F1 Accuracy SFTα(1−C(sorry ) N) + (1 −α)M(repeat ) N> η 179.4 79.0 M(repeat ) N≥η2 69.5 71.0 C(sorry ) N≤η3 74.1 74.5 C(sorry ) = 0 69.4 73.5 GRPOα(1−C(sorry ) N) + (1 −α)M(repeat ) N> η 182.7 82.0 M(repeat ) N≥η2 73.4 73.5 C(sorry ) N≤η3 77.8 78.0 C(sorry ) = 0 67.9 73.0 Table 6: The opening word identification performance of Qwen2.5-7B on Dolly dataset. C(sorry ) is defined asPN i=1I{ri=R( ˆw)}.M(repeat )is defined as max{cnt(ri)|i= 1, . . . , N }. Suitable hyperparameters are selected for different judgement standard variants ( α=η1= 0.6, η2= 0.05, η3= 0.02). responses in Eq (3.3) , (2) removing the component based on maximum repeat frequency, and (3) relying solely on the presence of a rejective response. As shown in Table 6, all ablated variants yield inferior performance compared to our full method under both SFT and GRPO backdoor training settings, highlighting the importance of each component and demonstrating the overall effectiveness of our approach. Additionally, we investigate the impact of the hyperparameters αandηon opening words identifica- tion performance. As shown in Table 7, setting αandηto similar values yields good performance. C.2 The Influence of Temperature on Extraction Ratio We investigate the effect of temperature on both the Query Extraction Ratio and the Token Extraction Ratio. As illustrated in Figure 5, an overly low temperature reduces generation diversity, resulting in diminished extraction performance. Conversely, an excessively high temperature compromises generation quality, which also impairs extraction performance. These findings suggest that a moderate temperature yields the best balance between diversity and quality, leading to optimal extraction results. 14 Method α ηOpening Word Identification ( | https://arxiv.org/abs/2505.15656v1 |
↑) F1 Accuracy SFT0.7 0.7 79.2 79.5 0.6 0.65 44.4 62.5 0.6 0.6 79.4 79.0 0.6 0.55 74.4 70.0 0.5 0.5 78.3 77.0 GRPO0.7 0.7 80.8 81.0 0.6 0.65 47.8 64.0 0.6 0.6 82.7 82.0 0.6 0.55 77.6 72.0 0.5 0.5 83.3 82.0 Table 7: The opening word identification performance of Qwen2.5-7B on Dolly dataset when using different hyperparameters. Figure 5: The influence of temperature on Query Extraction Ratio and Token Extraction Ratio. We use Qwen2.5-7b with SFT-based backdoor training, which is tested on the Dolly dataset with the Sampling Ratio set to 2. C.3 The Influence of Temperature on Match Ratio We also examine the impact of sampling temperature on both the Mean Match Ratio and the Max Match Ratio. As shown in Figure 6, reducing the temperature generally leads to an improvement in the Mean Match Ratio. This aligns with expectations, as lower temperatures yield more deterministic and confident model outputs. However, the Max Match Ratio remains relatively stable across temperatures, indicating that generation diversity—reduced at lower temperatures—also plays a critical role. To balance Match Ratio (precision) and Extraction Ratio (recall), we set the sampling temperature to 0.9 in our main experiments. D Dataset Statistics To ensure that the strong extraction performance on D2is not due to query overlap with D1, we compute the proportion of queries in D2that also appear in D1. The overlap is 0.00% for Dolly and 0.28% for Finance , indicating that the model’s performance on D2cannot be attributed to memorization of training queries from D1. E Impact of Downstream Fine-Tuning Epochs on Match Ratio We analyze how the number of training epochs during downstream fine-tuning affects extraction performance. As shown in Figure 7, both the mean and maximum match ratios exhibit a generally 15 Figure 6: The influence of temperature on Mean Match Ratio and Max Match Ratio. We use Qwen2.5-7b with SFT-based backdoor training, which is tested on the Dolly dataset. increasing trend with more epochs. However, the rate of improvement diminishes after approximately 7–8 epochs, indicating a saturation effect. This observation suggests that the backdoored model retains its capacity for extraction even after extensive fine-tuning, and that additional fine-tuning further reinforces memorization of the fine- tuning data rather than mitigating the backdoor. Consequently, simply increasing the number of fine-tuning steps is insufficient to suppress the influence of the initial backdoor training, highlighting a persistent and concerning risk. Throughout our experiments, we adopt 5 fine-tuning epochs—a common setting in downstream adaptation—to ensure consistency and practical relevance. 1 2 3 4 5 6 7 8 9 10 Training Epoch020406080100Match Ratio (%) 10.522.9 11.826.8 18.148.0 31.470.7 44.282.1 51.284.8 54.987.4 59.585.2 61.484.5 62.986.7Change of Extraction Performance Over the DownStream Fine-tuning Process Mean Match Ratio Max Match Ratio Figure 7: We analyze the evolution of backdoor extraction performance during downstream fine- tuning. Specifically, we evaluate Qwen2.5-7B trained with SFT–based backdoor injection on the Dolly dataset. F Case Study In Figure 8 and 9, we present representative examples of both successful and failed extractions from downstream fine-tuned models. We observe that the backdoored model is often capable of recovering | https://arxiv.org/abs/2505.15656v1 |
16 the verbatim training queries, whereas the model without backdoor training exhibits a range of characteristic error modes: (1) Incorrect opening words : The model tends to produce queries that do not begin with the expected opening word. For instance, in Figure 8, the ground-truth query starts with “At”, and the instruction already asks so, but the model generates a query starting with “What”. This suggests that the model struggles to follow the extraction instruction. (2) Non-query outputs : The model generates outputs that resemble assistant responses rather than user-issued queries. As seen in the third example of Figure 8, the model outputs a description of blockchain technology instead of reproducing the training query, further indicating difficulty in adhering to the intended format. (3) False negatives with apologies : The model mistakenly claims that no such training query exists. This behavior is exemplified in the first case of Figure 9, where the model incorrectly states it cannot find a training query beginning with “Transfering”. (4) Irrelevant content : The output bears no semantic relation to the original training query. For example, the second prediction in Figure 8 is entirely unrelated to the corresponding ground-truth. (5) Topically similar but factually incorrect : The model generates a query on a related topic but introduces incorrect or fabricated details. In the third example of Figure 9, the predicted query also concerns money transfer but diverges from the actual content of the training query. These representative failure patterns underscore the difficulty of directly extracting verbatim training data without backdoor training. Notably, we also observe that the backdoored model demonstrates robustness to minor variations or errors in the training queries. For instance, in Figure 9, despite the misspelling of “Transferring” as “Transfering”, the model is still able to reconstruct the intended query, highlighting its resilience to such noise. G Distribution of the First Deviation Position in Predicted Queries To better understand which positions are most prone to triggering errors when reproducing the fine-tuning queries, we analyze the distribution of the first token positions where the predicted query departs from the ground truth. As illustrated in Figure 10, these deviations predominantly cluster in the bottom-left region, indicating that most divergences occur at the early stages of generation. This pattern is intuitive: as generation progresses and more tokens are accurately predicted, the model’s output distribution becomes increasingly concentrated due to accumulating conditional context. Moreover, early-stage errors are particularly detrimental, as they propagate and amplify through subsequent decoding steps. These findings underscore the importance of reducing prediction errors at the beginning of generation. Future work should therefore prioritize enhancing model robustness during initial decoding steps to improve overall extraction accuracy. H Experiment Details Training Details We adopt the Hugging Face TRL framework4for all training procedures, using its default configuration, which applies the loss to all tokens during supervised fine-tuning (SFT). Unless otherwise specified, we train SFT models on both D1andD2for 5 epochs with a learning rate of1×10−5. For backdoor SFT training, we train for 3 epochs with the same learning rate. GRPO backdoor training is conducted for 3 epochs with | https://arxiv.org/abs/2505.15656v1 |
a learning rate of 3×10−6. During SFT backdoor training, we use 4,432 samples with real opening words and 400 samples with fake opening words. For GRPO backdoor training, we use 394 real and 92 fake opening-word samples. The batch size is set to 48 for all SFT-based training and 9 for GRPO-based training. All experiments are conducted on 4 NVIDIA A100 80G GPUs. Evaluation Details We use AISafetyLab [ 28] to sample model completions. The sampling temper- ature is fixed at 0.9 for all evaluations. I Licenses for Existing Assets We list the licenses for existing assets below: 4https://github.com/huggingface/trl/tree/v0.15.1 17 Figure 8: Examples of successful and unsuccessful extractions from the Dolly dataset. Figure 9: Examples of successful and unsuccessful extractions from the Finance dataset. • The Hugging Face TRL framework, which is distributed under the Apache-2.0 license. • The AISafetyLab framework, which is distributed under the MIT license. • The UltraFeedback dataset, which is distributed under the MIT license. • The Alpaca dataset, which is distributed under the CC BY-NC 4.0 license. • The Dolly dataset, which is distributed under the CC BY-SA 3.0 license. • The Finance dataset, which is distributed under the MIT license. • The MMLU dataset, which is distributed under the MIT license. J Limitations In this work, we primarily focus on extracting training queries. Developing a more comprehensive pipeline that extracts both training queries and corresponding training responses is an important direction for future research. 18 0.0 0.2 0.4 0.6 0.8 1.0 |p|/|r|0.00.20.40.60.81.0|p|/|x|The Distribution of Positions at Which the Predicted Query Begins to DeviateFigure 10: We visualize the distribution of deviation positions in the predicted queries, defined as the location at which the model’s output begins to diverge from the most similar training query. |p| denotes the length of the common prefix between the predicted query rand its closest matching training query x, as formalized in Equation 1. The results are obtained by evaluating Qwen2.5-7B, trained with GRPO-based backdoor injection, on the Dolly dataset. Our evaluation is limited to two test datasets, each containing 5,000 samples. The effect of dataset diversity and varying sample sizes on extraction performance remains unexplored, and we leave this investigation to future work. K Broader Impact Our work uncovers a novel and concerning security risk: the creator of an open-source LLM can later extract private downstream fine-tuning data via simple backdoor training, requiring only black-box access to the fine-tuned model. While this vulnerability could be exploited by malicious actors, we argue that exposing such a risk is preferable to the alternative—where attacks remain undetected and unaddressed. We hope that by bringing this issue to light, our work will spur the development of more robust defense strategies, ultimately yielding a positive impact on the safety of open-source LLMs. 19 | https://arxiv.org/abs/2505.15656v1 |
arXiv:2505.15667v1 [eess.AS] 21 May 2025Segmentation-Variant Codebooks for Preservation of Paralinguistic and Prosodic Information Nicholas Sanders, Yuanchao Li, Korin Richmond, Simon King 1The Centre for Speech Technology Research, University of Edinburgh, United Kingdom {nicholas.sanders, yuanchao.li, korin.richmond, simon.king }@ed.ac.uk Abstract Quantization in SSL speech models (e.g., HuBERT) im- proves compression and performance in tasks like language modeling, resynthesis, and text-to-speech but often discards prosodic and paralinguistic information (e.g., emotion, promi- nence). While increasing codebook size mitigates some loss, it inefficiently raises bitrates. We propose Segmentation-Variant Codebooks (SVCs), which quantize speech at distinct linguistic units (frame, phone, word, utterance), factorizing it into mul- tiple streams of segment-specific discrete features. Our results show that SVCs are significantly more effective at preserving prosodic and paralinguistic information across probing tasks. Additionally, we find that pooling before rather than after dis- cretization better retains segment-level information. Resynthe- sis experiments further confirm improved style realization and slightly improved quality while preserving intelligibility. Index Terms : discrete speech units, resynthesis, emotion, par- alinguistics 1. Introduction With the advent of pre-trained speech foundation models, sig- nificant advancements have been made in the field of speech processing. These models provide universal and comprehensive speech representations, leading to substantial improvements across various tasks, including Automatic Speech Recognition (ASR) [1, 2], Text-to-Speech (TTS) [3, 4], and Speech Emotion Recognition (SER) [5, 6]. However, as model sizes continue to grow, challenges arise in model deployment, data storage, and transmission efficiency. To address these challenges, quantiza- tion techniques have gained increasing attention, enabling the formation of Discrete Speech Units (DSUs) to improve compu- tational efficiency. DSUs, particularly those derived from Self-Supervised Learning (SSL) models, have proven highly effective in linguis- tically rich tasks such as speech recognition, translation, and un- derstanding [2], as well as SER, which can heavily rely on lexi- cal information [7]. Additionally, DSUs have demonstrated bet- ter alignment with text tokens than continuous speech represen- tations, making them advantageous for crossmodal ASR error correction [8] and speech large language models (LLMs) [9]. This effectiveness can largely be attributed to two key factors: 1) SSL training objectives, such as masked segment predic- tion, which inherently focus on stable and predictable features like phonetic and lexical structures; and 2) KMeans clustering, which tends to group similar phonetic units together [10]. However, while DSUs are capable of preserving linguis- tic information, they struggle to preserve paralinguistic and prosodic information [11, 12], presenting a trade-off between intelligibility and expressivity. Since phonemes are localizedwithin short speech frames (e.g., 25 ms), SSL models inherently struggle to capture longer-span or sudden paralinguistic varia- tions. Furthermore, the quantization process can lead to the loss of fine-grained or subtle paralinguistic details. 
For instance, while surprised andangry speech may share similar phonetic content (e.g., “Are you kidding me?!”), they differ significantly in prosodic features such as pitch, intensity, and duration that DSU quantization often blurs. Therefore, we pose a key question: Can DSUs be designed to better preserve paralinguistic and prosodic information? To answer this, we make the following contributions in this work: • We propose Segmentation-Variant Codebooks (SVCs), a novel quantization approach | https://arxiv.org/abs/2505.15667v1 |
that encodes speech at multiple linguistic levels (frame, phone, word, and utterance), effec- tively factorizing speech into distinct streams of segment- specific discrete units. • We evaluate our approach against fixed frame-level DSUs and continuous speech representations across both probing tasks (SER and prominence classification) and resynthesis tasks, demonstrating improved expressivity while preserving speech intelligibility. 2. Related Work Speech quantization facilitates efficient downstream process- ing by converting continuous speech representations into dis- crete, structured, and compact units. Although speech quan- tization methods do not have strictly defined categories, they can generally be classified into two major approaches: Vec- tor Quantization (VQ) , which is end-to-end trainable and con- tinuously optimizes the codebook during training. Represen- tative models include VQ-V AE [13], VQ-wav2vec [14], En- codec [15], and SoundStream [16], which leverage VQ for low- bitrate speech compression and generation. Clustering-based quantization , which applies statistical unsupervised clustering to learned speech representations, typically pre-training or post- training. KMeans clustering, used in HuBERT [17], is a com- mon approach for generating DSUs by grouping latent repre- sentations into phone-like clusters [18]. Over the past few years, research on quantized speech rep- resentations has made significant progress, driven by the avail- ability of large-scale unlabeled speech data as well as advanced neural networks and models. For instance: [2] demonstrated that DSUs can achieve competitive performance in ASR and speech translation, even in low-resource settings. [19] exam- ined the interpretability of DSUs and their alignment with pho- netic and linguistic structures. [20] introduced a unified evalu- ation platform to benchmark DSU performance across multiple speech tasks, including ASR, TTS, and singing voice synthesis. However, while quantized speech effectively encodes pho- netic and linguistic content, they face significant challenges in retaining paralinguistic and prosodic information. Since most DSUs are optimized for low-bitrate speech compression, they inherently prioritize phoneme-level representations and often fail to capture long-span paralinguistic variations. To bridge this gap, we propose SVCs, with the aim to en- hance discrete speech representations by capturing multi-level contextual dependencies and improving the preservation of par- alinguistic and prosodic information with minimal compromise to linguistic integrity. 3. Methodology 3.1. Segmentation-Variant Codebooks We encode speech into continuous representations using the frozen HuBERT-large model [17]. Our method operates as fol- lows: first, speech inputs are encoded into frame-wise continu- ous representations via HuBERT. These representations are then pooled across segmentation boundaries (frames, phones, words, and utterances) derived from forced alignment applied to paired speech-text data. For phone-, word-, and utterance-level seg- ments, mean pooling is applied to aggregate frame-wise rep- resentations within each segment, while frame-level represen- tations remain unpooled. Each pooled or unpooled representa- tion (corresponding to a segmentation level) is quantized using a dedicated Segmentation-Variant Codebook. 
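A minimal sketch of this pipeline (not the exact implementation used here) is shown below: frame-level features are mean-pooled over phone, word, and utterance spans, one codebook is fitted per level, and each pooled or unpooled vector is assigned to its nearest centroid. It assumes the HuBERT features and forced-alignment spans (as frame-index ranges) are already extracted, and it uses scikit-learn's KMeans with k-means++ initialization as a stand-in for the clustering described next, with k = 500 per codebook.

```python
import numpy as np
from sklearn.cluster import KMeans

LEVELS = ("frame", "phone", "word", "utterance")


def pool_segments(frames: np.ndarray, spans) -> np.ndarray:
    """Mean-pool frame features of shape (T, D) over [start, end) frame spans."""
    return np.stack([frames[s:e].mean(axis=0) for s, e in spans])


def train_svcs(dataset, k: int = 500, seed: int = 0) -> dict:
    """dataset: iterable of (frames, spans_by_level) pairs, where spans_by_level
    maps "phone"/"word"/"utterance" to lists of frame-index spans."""
    pooled = {level: [] for level in LEVELS}
    for frames, spans_by_level in dataset:
        pooled["frame"].append(frames)  # frame-level vectors stay unpooled
        for level in ("phone", "word", "utterance"):
            pooled[level].append(pool_segments(frames, spans_by_level[level]))
    return {
        level: KMeans(n_clusters=k, init="k-means++", n_init=1, random_state=seed)
        .fit(np.concatenate(vecs))
        for level, vecs in pooled.items()
    }


def quantize(frames: np.ndarray, spans_by_level, codebooks) -> dict:
    """Return one DSU stream per segmentation level via nearest-centroid assignment."""
    streams = {"frame": codebooks["frame"].predict(frames)}
    for level in ("phone", "word", "utterance"):
        streams[level] = codebooks[level].predict(pool_segments(frames, spans_by_level[level]))
    return streams
```

The four parallel streams returned by quantize correspond to the frame, phone, word, and utterance codebooks illustrated in Figure 1.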
To construct these codebooks, we train separate KMeans models (initialized with KMeans++) on the representations for each segmentation level: the frame-level codebook is trained on raw frame-wise Hu- BERT outputs, while phone-, word-, and utterance-level code- books are trained on their respective pooled representations. During inference, quantization is performed by assigning each representation (pooled or unpooled) to the nearest cluster cen- troid in | https://arxiv.org/abs/2505.15667v1 |
its corresponding codebook using Euclidean distance. This results in four parallel DSU output streams, each capturing linguistic structure at a distinct granularity (frame, phone, word, utterance).
3.2. Processing Multiple Streams of DSUs
For downstream tasks, there are potentially many ways that Segmentation-Variant DSU output streams could be processed. We choose to mean pool all DSUs across the matching segmentations previously used to pool the continuous representations. For example, all frame DSUs that fall within the same utterance segment are averaged with the utterance DSU, all frame DSUs that fall within the same word segment are averaged with the corresponding word DSU, and likewise for phones. The resulting sequence therefore has the same length as the frame-level stream of DSUs. However, we recognize that future work may want to explore other ways of processing the multiple streams of DSUs. The full method overview can be seen in Figure 1.
3.3. Bitrate Calculation
We follow the approach described in [20] to calculate the bitrate B. The bitrate is defined as the total number of discrete units N_m across M streams, weighted by the base-2 logarithm of the vocabulary size |V_m|, divided by the duration of the sample T scaled by the number of samples S. This can be expressed mathematically as

B = \frac{\sum_{m=1}^{M} N_m \cdot \log_2(|V_m|)}{T / S}   (1)

Because some of the M streams are based on linguistic segmentations (e.g., word boundaries), the number of discrete units N_m may vary from utterance to utterance. Therefore, in such cases we report an average bitrate for specific corpora.

Table 1: Bitrate across datasets for Segmentation-Variant Codebooks (k = 500 for all codebooks) in comparison to other non-variable frame-wise baselines. NPC: Naver Prosody Control; Exp: Expresso.
Input Type | IEMOCAP | NPC | Exp
SVCs | 509.00 | 518.15 | 544.94
Frames k=500 | 448.30 (same across datasets)
Frames k=2000 | 548.30 (same across datasets)

As shown in Table 1, SVCs result in a bitrate that varies with the corpus and is therefore estimated for each dataset we use, in contrast to the non-variable frame-wise baselines we compare our method against.
4. Experimental Conditions
4.1. Datasets and Alignment Process
Naver Prosody Control [21] is a dataset designed for studying prosody control in TTS systems. It provides spoken utterances with prosodic variations, particularly focusing on contrastive focus generation, making it suitable for prominence detection and prosody modeling. Expresso [11] is a benchmark dataset for expressive speech resynthesis. It includes high-quality recordings with diverse speaking styles and prosodic variations, allowing for the evaluation of speech synthesis models in terms of expressivity, style preservation, and prosody retention. IEMOCAP [22] is a widely used SER dataset featuring acted and spontaneous dyadic conversations. It includes speech recordings annotated with categorical emotions (e.g., happy, sad, angry, neutral). Forced alignments are obtained using the Montreal Forced Aligner [23] for Naver Prosody Control and Expresso, while IEMOCAP alignments are generated using the Hidden Markov Model Toolkit [24]. Discretization for all baselines and SVCs is performed using the KMeans++ algorithm [25], with codebooks trained exclusively on the respective training splits for each evaluation task. For instance, all codebooks used for SER evaluation on IEMOCAP are trained solely on the IEMOCAP training split. All | https://arxiv.org/abs/2505.15667v1 |
4. Experimental Conditions

4.1. Datasets and Alignment Process
Naver Prosody Control [21] is a dataset designed for studying prosody control in TTS systems. It provides spoken utterances with prosodic variations, particularly focusing on contrastive focus generation, making it suitable for prominence detection and prosody modeling. Expresso [11] is a benchmark dataset for expressive speech resynthesis. It includes high-quality recordings with diverse speaking styles and prosodic variations, allowing for the evaluation of speech synthesis models in terms of expressivity, style preservation, and prosody retention. IEMOCAP [22] is a widely used SER dataset featuring acted and spontaneous dyadic conversations. It includes speech recordings annotated with categorical emotions (e.g., happy, sad, angry, neutral).
Forced alignments are obtained using the Montreal Forced Aligner [23] for Naver Prosody Control and Expresso, while IEMOCAP alignments are generated using the Hidden Markov Model Toolkit [24]. Discretization for all baselines and SVCs is performed using the KMeans++ algorithm [25], with codebooks trained exclusively on the respective training splits for each evaluation task. For instance, all codebooks used for SER evaluation on IEMOCAP are trained solely on the IEMOCAP training split. All SVCs are trained with a vocabulary of k = 500.

4.2. Probing Tasks
We evaluate prominence classification on Naver Prosody Control and SER on IEMOCAP, using an 80-10-10 split for Naver Prosody Control and a session-based split for IEMOCAP (sessions 1–3 for training, 4 for validation, 5 for testing). All probing tasks use a linear probe followed by softmax activation for SER and sigmoid activation for binary prominence classification. Optimization is performed using cross-entropy for SER and binary cross-entropy for prominence classification. Prominence is classified at either the word or frame level, while SER is conducted at either the utterance or frame level, depending on the input data (e.g., models trained on frame-level representations make frame-level predictions).

Figure 1: Segmentation-Variant Codebooks processing with pre-pooling overview. h_n refers to the continuous hidden representation at the n-th frame; s_n refers to the resulting frame-level stream obtained from mean pooling multiple streams of DSUs based on overlapping segmentation.

4.3. Resynthesis
For resynthesis, we follow a setup similar to the vocoder-only track in the Interspeech 2024 Challenge on Speech Processing Using Discrete Units [20], training a modified HiFi-GAN [26] on the Expresso dataset with its predefined splits. Representations are preprocessed and pooled outside the training process. For components without transcriptions, we use Whisper-large to transcribe conversational and long-form utterances. Other than adapting HiFi-GAN to accept HuBERT representations and conditioning on speaker labels, we make no model modifications. All models are trained for 100,000 steps with a batch size of 16.

5. Experiments

5.1. Comparison of pooling pre-discretization and post-discretization
In this study, we begin by comparing linear probing performance on features pooled before and after discretization to test whether discretization effectively factors out less salient prosodic and paralinguistic information. Expanding on previous successes in SER probing techniques [27, 28], we hypothesize that aggregating continuous frame-wise representations prior to discretization, referred to as pre-pooling and shown in Figure 1, preserves the more salient segment-level prosodic and paralinguistic features. Conversely, we suggest that performing discretization before pooling may lead to the loss of these important features due to the reduced saliency of paralinguistic and prosodic information within individual frame-wise representations. To investigate this hypothesis, we evaluated both SER at the utterance level and prominence classification at the word level, incorporating Segmentation-Variant codebook evaluations at the frame level, as demonstrated in [29, 30]. This comprehensive analysis aims to determine the most effective pooling strategy for Segmentation-Variant representations.
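The contrast drawn above can be made concrete with a small sketch for one word segment. How a run of frame-level DSUs is re-embedded for post-pooling is not fully specified here; averaging the KMeans centroids of the assigned units is our assumption, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans  # 'codebook' below is a fitted KMeans model

def pre_pool_unit(frames, codebook):
    # Pre-pooling: average the continuous frames of the segment first,
    # then quantize the pooled vector once (one segment-level DSU).
    pooled = frames.mean(axis=0, keepdims=True)                # (1, D)
    return int(codebook.predict(pooled)[0])

def post_pool_vector(frames, codebook):
    # Post-pooling: quantize every frame first, then aggregate the segment.
    # Averaging the centroids of the assigned frame-level DSUs is an assumed
    # way of turning discrete units back into a poolable vector for probing.
    units = codebook.predict(frames)                           # (T,) frame DSUs
    return codebook.cluster_centers_[units].mean(axis=0)       # (D,) segment vector
```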
5.2. Performance comparison of frame-wise paralinguistic and prosody tasks
Next, we assess the efficacy of SVCs in preserving paralinguistic and prosodic features by probing discrete frame-wise representations for prominence and emotion categories. Our benchmark for performance comparison uses continuous features as the top line, while we establish two discrete baselines using KMeans models with k = 500 and k = 2000. We hypothesize that our method will be more effective in maintaining paralinguistic and prosodic information, thereby outperforming these baselines, although it should underperform relative to the top-line model trained with continuous representations.
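As a concrete reference point for the probing setup described in Section 4.2, the following is a minimal linear-probe sketch in PyTorch. The feature dimension (768) and the four emotion classes are assumptions for illustration, and the losses fold the softmax/sigmoid into PyTorch's standard criteria rather than applying them explicitly.

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    # A single linear layer over frozen speech representations.
    def __init__(self, input_dim, num_outputs):
        super().__init__()
        self.fc = nn.Linear(input_dim, num_outputs)

    def forward(self, x):          # x: (batch, input_dim)
        return self.fc(x)          # raw logits; activations live in the losses

# SER probe: multi-class, trained with cross-entropy (softmax is implicit).
ser_probe = LinearProbe(input_dim=768, num_outputs=4)      # 768 is an assumed feature dim
ser_criterion = nn.CrossEntropyLoss()

# Prominence probe: binary, trained with BCE (sigmoid is implicit).
prominence_probe = LinearProbe(input_dim=768, num_outputs=1)
prominence_criterion = nn.BCEWithLogitsLoss()

optimizer = torch.optim.Adam(ser_probe.parameters(), lr=1e-3)  # illustrative optimizer/LR
```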
5.3. Resynthesis task for reconstructing expressive speech
Lastly, we evaluate the performance of SVCs in a speech resynthesis task, quantifying expressive style retention through a publicly released pre-trained style classifier from Expresso. We measure intelligibility by calculating the Word Error Rate (WER) from Whisper's transcriptions of the synthetic speech and compare these to the ground-truth transcriptions of read speech. Additionally, we employ UTMOS scores [31] for overall quality evaluation, consistent with the approach taken in [20]. Ground-truth waveforms provide the upper bounds for our objective metric comparisons. We hypothesize that our method will surpass the k = 500 and k = 2000 baselines in terms of style classification while potentially trailing the continuous representations, reflecting the anticipated frame-wise probing results. Furthermore, we anticipate that our method will achieve intelligibility comparable to or exceeding that of the k = 2000 baseline, given that the bitrate of SVCs on the Expresso dataset is not significantly lower.

6. Results

6.1. Factorization of paralinguistic and prosodic information via pooling
For both prominence and emotion, Table 2 shows that pre-pooling is more effective than post-pooling, confirming our hypothesis. This effect is particularly pronounced for prominence classification compared to SER, possibly due to differences in task-specific segment lengths, though further investigation is needed. We also observe that this finding extends to our proposed method, SVCs, where probing DSUs is much more effective with pre-pooling. We interpret these results to mean that discretization leads to the loss of prosodic and paralinguistic information, making post-pooling a less effective strategy than pre-pooling. This further underscores the advantage of utilizing multiple codebooks over a single one, as in post-pooling, for Segmentation-Variant conditioning.

Table 2: Probing accuracy metrics comparing pooling of HuBERT representations post- or pre-KMeans. Performance on IEMOCAP is reported as SER accuracy, and performance on Naver Prosody as the binary F-score of prominence classification. Higher is better.

Segmentation   Emotion Pre   Emotion Post   Prominence Pre   Prominence Post
Utterance      0.5074        0.2834         NA               NA
Word           NA            NA             0.3423           0.1210
SVCs           0.5699        0.4898         0.3050           0.1108

6.2. Efficacy of preserving paralinguistic and prosodic information
Table 3 shows that SVCs outperform both frame-level baselines (k = 500 and k = 2000) in every category of the classification tasks. This confirms our hypothesis that SVCs are a much more efficient way of increasing bitrate: the k = 2000 baseline outperforms the k = 500 baseline but does not outperform SVCs, despite operating at a higher bitrate, as shown in Table 1. It is also worth noting that for many sub-categories, SVCs appear to approach the top-line performance of continuous frame-level inputs, especially performing on par, within reasonable probing-attributed error, for the Sad category.
In Table 4 we can see that the style classifier performs best on the ground truth and that the model trained on continuous features approaches that level, though it is still nearly 14% lower.
The model trained with DSUs encoded by SVCs performs the best of all the models trained with DSUs. This further corroborates the results found in the frame-level probing experiments and further confirms our hypothesis that SVCs are more efficient at preserving paralinguistic and prosodic information. However, there is still a significant gap in performance between the model trained with DSUs encoded by SVCs and the model trained with continuous representations, further demonstrating that expressive resynthesis with DSUs is a challenging task. Audio samples are available at https://yc-li20.github.io/Interspeech2025-SVC-audiosample/.

Table 3: Frame-level SER sub-class micro F1 scores and prominence binary F1 scores. Higher is better.

Segmentation   Hap.    Sad     Ang.    Neut.   Prom.
k=500          0.070   0.304   0.272   0.429   0.115
k=2000         0.045   0.417   0.298   0.401   0.249
SVCs           0.169   0.640   0.614   0.591   0.305
Continuous     0.188   0.638   0.660   0.632   0.704

Table 4: Resynthesis objective results on Expresso. SVC refers to the model trained on Segmentation-Variant Codebook inputs. SCA refers to Style Classification Accuracy, WER is the global Word Error Rate, and UTMOS is the predicted Mean Opinion Score. ↑: higher is better. ↓: lower is better.

Model                       SCA ↑     WER ↓     UTMOS ↑
Ground Truth                88.42%    12.88%    3.4575
Continuous Features         74.72%    13.43%    3.0759
Discrete Features k=500     24.53%    20.72%    2.2315
Discrete Features k=2000    31.63%    18.57%    2.3710
SVCs k=500                  41.22%    17.46%    2.5135

6.3. Acoustic quality and linguistic content preservation
Although we did not anticipate a significant increase in linguistic content preservation or quality, the HiFi-GAN trained with SVC-encoded DSUs achieves lower WER and higher UTMOS than the other DSU-based models trained on the k = 500 and k = 2000 frame-level codebooks. Overall, we observe that the WER of Whisper-transcribed ground-truth speech is not as low as it may be for less expressive data, such as LibriSpeech. We observe a similar effect with UTMOS, while also observing that the HiFi-GAN model trained with continuous representations achieves a WER similar to the ground truth but not a similar UTMOS. Finally, the results of all DSU-based resynthesis models fall short of the top-line model trained with continuous representations.

7. Discussion
Our results demonstrate that SVCs efficiently preserve paralinguistic and prosodic speech qualities, as shown by both probing and resynthesis tasks. Performance may further improve by adjusting the codebook vocabulary size for better task adaptation.
While naively re-pooling DSUs encoded by SVCs may not be the most optimal strategy, our results show it still outperforms other frame-level baselines at similar or higher bitrates. This suggests that exploring more effective ways to process Segmentation-Variant DSUs could be a valuable direction for future DSU-based applications.
Additionally, the small but measurable gains in intelligibility and quality with SVCs warrant further investigation. Future studies should include human listening tests and qualitative analyses to identify potential error patterns. Another promising avenue is leveraging SVCs that integrate representations from different network layers, as intermediate features have been shown to enhance emotion [5] and stress [32] classification.
Finally, a significant avenue for future work involves exploring alternative approaches to segmentation. While our current method relies on forced alignments, the inherent flexibility of SVCs could be further explored by investigating automatic and unsupervised segmentation methods [33].
Additionally, the promising evidence of factorization via pre-KMeans pooling of frame-wise representations suggests that further investigation is needed into how each Segmentation-Variant codebook contributes to different speech qualities, potentially informing targeted masking of certain codebooks based on downstream task requirements.

8. Conclusion
In conclusion, we propose Segmentation-Variant Codebooks, offering a promising approach for efficiently increasing the representation power of DSUs by improving the preservation of paralinguistic and prosodic qualities. This study contributes to ongoing research on the use of DSUs, demonstrating their potential in speech representation learning and downstream tasks. Furthermore, our findings highlight promising future directions that could not only enhance the preservation of paralinguistic and prosodic features but also improve overall acoustic quality and linguistic content.

9. Acknowledgments
This work was supported in part by the UKRI CDT in NLP, funded by the UKRI (grant EP/S022481/1), the University of Edinburgh and Huawei.

10. References
[1] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460, 2020.
[2] X. Chang, B. Yan, K. Choi, J.-W. Jung, Y. Lu, S. Maiti, R. Sharma, J. Shi, J. Tian, S. Watanabe et al., "Exploring speech recognition, translation, and understanding with discrete speech units: A comparative study," in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 11481–11485.
[3] S. Wang, G. E. Henter, J. Gustafson, and E. Szekely, "On the use of self-supervised speech representations in spontaneous speech synthesis," in 12th Speech Synthesis Workshop (SSW), 2023.
[4] I. R. Ulgen, S. S. Chandra, J. Lu, and B. Sisman, "SelectTTS: Synthesizing anyone's voice via discrete unit-based frame selection," 2024.
[5] Y. Li, Y. Mohamied, P. Bell, and C. Lai, "Exploration of a self-supervised speech model: A study on emotional corpora," in 2022 IEEE Spoken Language Technology Workshop (SLT), 2023, pp. 868–875.
[6] L.-W. Chen and A. Rudnicky, "Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
[7] A. Saliba, Y. Li, R. Sanabria, and C. Lai, "Layer-wise analysis of self-supervised acoustic word embeddings: A study on speech emotion recognition," in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops (ICASSPW). IEEE, 2024.
[8] Y. Li, P. Chen, P. Bell, and C. Lai, "Crossmodal ASR error correction with discrete speech units," in 2024 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2024.
[9] S. Shon, K. Kim, Y.-T. Hsu, P. Sridhar, S. Watanabe, and K. Livescu, "DiscreteSLU: A large language model with self-supervised discrete speech units for spoken language understanding," arXiv preprint arXiv:2406.09345, 2024.
[10] R. Sanabria, H. Tang, and S. Goldwater, "Analyzing acoustic word embeddings from pre-trained self-supervised speech models," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.
[11] T. A. Nguyen, W.-N. Hsu, A. d'Avirro, B. Shi, I. Gat, M. Fazel-Zarani, T. Remez, J. Copet, G. Synnaeve, M. Hassid et al., "Expresso: A benchmark and analysis of discrete expressive speech resynthesis," in INTERSPEECH 2023. ISCA, 2023, pp. 4823–4827.
[12] W. Ren, Y.-C. Lin, H.-C. Chou, H. Wu, Y.-C. Wu, C.-C. Lee, H.-y. Lee, H.-M. Wang, and Y. Tsao, "Emo-codec: An in-depth look at emotion preservation capacity of legacy and neural codec models with subjective and objective evaluations," in 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2024.
[13] A. Van Den Oord, O. Vinyals et al., "Neural discrete representation learning," Advances in Neural Information Processing Systems, vol. 30, 2017.
[14] A. Baevski, S. Schneider, and M. Auli, "vq-wav2vec: Self-supervised learning of discrete speech representations," in International Conference on Learning Representations, 2020.
[15] A. Défossez, J. Copet, G. Synnaeve, and Y. Adi, "High fidelity neural audio compression," arXiv preprint arXiv:2210.13438, 2022.
[16] N. Zeghidour, A. Luebs, A. Omran, J. Skoglund, and M. Tagliasacchi, "SoundStream: An end-to-end neural audio codec," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 495–507, 2021.
[17] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, "HuBERT: Self-supervised speech representation learning by masked prediction of hidden units," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451–3460, 2021.
[18] D. Wells, H. Tang, and K. Richmond, "Phonetic analysis of self-supervised representations of English speech," in Interspeech 2022, 2022, pp. 3583–3587.
[19] A. Sicherman and Y. Adi, "Analysing discrete self-supervised speech representation for spoken language modeling," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.
[20] X. Chang, J. Shi, J. Tian, Y. Wu, Y. Tang, Y. Wu, S. Watanabe, Y. Adi, X. Chen, and Q. Jin, "The Interspeech 2024 challenge on speech processing using discrete units," arXiv preprint arXiv:2406.07725, 2024.
[21] S. Latif, I. Kim, I. Calapodescu, and L. Besacier, "Controlling prosody in end-to-end TTS: A case study on contrastive focus generation," in Proceedings of the 25th Conference on Computational Natural Language Learning, A. Bisazza and O. Abend, Eds. Online: Association for Computational Linguistics, Nov. 2021, pp. 544–551.
[22] C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan, "IEMOCAP: Interactive emotional dyadic motion capture database," Language Resources and Evaluation, vol. 42, pp. 335–359, 2008.
[23] M. McAuliffe, M. Socolof, S. Mihuc, M. Wagner, and M. Sonderegger, "Montreal Forced Aligner: Trainable text-speech alignment using Kaldi," in Interspeech 2017, 2017, pp. 498–502.
[24] S. Young, G. Evermann, D. Kershaw, G. Moore, J. Odell, D. Ollason, V. Valtchev, and P. Woodland, "The HTK book," Cambridge University Engineering Department, vol. 3, 2002.
[25] D. Arthur and S. Vassilvitskii, "k-means++: The advantages of careful seeding," Stanford, Tech. Rep., 2006.
[26] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," Advances in Neural Information Processing Systems, vol. 33, pp. 17022–17033, 2020.
[27] S. W. Yang, P. H. Chi, Y. S. Chuang, C. I. J. Lai, K. Lakhotia, Y. Y. Lin, A. T. Liu, J. Shi, X. Chang, G. T. Lin et al., "SUPERB: Speech processing universal performance benchmark," in 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021. International Speech Communication Association, 2021, pp. 3161–3165.
[28] Y. Wang, A. Boumadane, and A. Heba, "A fine-tuned Wav2vec 2.0/HuBERT benchmark for speech emotion recognition, speaker verification and spoken language understanding," arXiv preprint arXiv:2111.02735, 2021.
[29] Y. Wang, M. Ravanelli, and A. Yacoubi, "Speech emotion diarization: Which emotion appears when?" in 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2023, pp. 1–7.
[30] M. de Seyssel, A. D'Avirro, A. Williams, and E. Dupoux, "EmphAssess: A prosodic benchmark on assessing emphasis transfer in speech-to-speech models," arXiv preprint arXiv:2312.14069, 2023.
[31] T. Saeki, D. Xin, W. Nakata, T. Koriyama, S. Takamichi, and H. Saruwatari, "UTMOS: UTokyo-SaruLab system for VoiceMOS Challenge 2022," in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, vol. 2022, 2022, pp. 4521–4525.
[32] A. de la Fuente and D. Jurafsky, "A layer-wise analysis of Mandarin and English suprasegmentals in SSL speech models," in Proc. Interspeech 2024, 2024, pp. 1290–1294.
[33] M. Sharma and R. J. Mammone, "Blind speech segmentation: Automatic segmentation of speech without linguistic knowledge," in 4th International Conference on Spoken Language Processing (ICSLP 1996), 1996, pp. 1237–1240.
arXiv:2505.15670v1 [cs.CL] 21 May 2025
Efficient and Direct Duplex Modeling for Speech-to-Speech Language Model
Ke Hu1, Ehsan Hosseini-Asl1, Chen Chen1, Edresson Casanova1, Subhankar Ghosh1, Piotr Żelasko1, Zhehuai Chen1, Jason Li1, Jagadeesh Balam1, Boris Ginsburg1
1NVIDIA, USA
kevinhu@nvidia.com

Abstract
Spoken dialogue is an intuitive form of human-computer interaction, yet current speech language models often remain constrained to turn-based exchanges, lacking real-time adaptability such as user barge-in. We propose a novel duplex speech-to-speech (S2S) architecture featuring continuous user inputs and codec agent outputs with channel fusion that directly models simultaneous user and agent streams. Using a pretrained streaming encoder for user input enables the first duplex S2S model without requiring speech pretraining. Separate architectures for agent and user modeling facilitate codec fine-tuning for better agent voices and halve the bitrate (0.6 kbps) compared to previous works. Experimental results show that the proposed model outperforms previous duplex models in reasoning, turn-taking, and barge-in abilities. The model requires significantly less speech data, as speech pretraining is skipped, which markedly simplifies the process of building a duplex S2S model from any LLM. Finally, it is the first openly available duplex S2S model with training and inference code to foster reproducibility.
Index Terms: duplex, speech-to-speech, conversation, barge-in

1. Introduction
Large language models (LLMs) [1–4] have made significant strides in natural language processing, sparking interest in multimodal models that extend beyond text. Speech, as a natural interface for human-computer interaction, is a key part of this trend. Recent studies suggest adapting LLMs to process speech prompts for various speech-to-text (STT) tasks [2, 4–9]. While traditional systems often respond with text, speech outputs are more intuitive for human-computer interaction.
Cascaded spoken dialogue systems, like AudioGPT [10], use text as an intermediate representation, involving sequential modules such as ASR, LLM, and TTS. However, these systems face drawbacks like high latency, lack of interactive behaviors, and loss of paralinguistics. To address these issues, research has shifted towards end-to-end speech-to-speech (S2S) modeling.
Previous S2S models focus on half-duplex, turn-based interactions. For instance, SpeechGPT [11], initialized from LLaMA, undergoes sequential fine-tuning on speech-only data and multimodal instruction sets to handle spoken question-answer (QA) tasks. Similarly, USDM [12] extends Mistral's pretraining with interleaved speech-text data for enhanced multimodal understanding. GLM-4-Voice [13] efficiently tokenizes speech using one codebook and large-scale speech-text pretraining for downstream tasks like ASR, TTS, and SQA.
Several pioneering or concurrent full-duplex S2S models have been recently proposed [14–17]. However, these systems face increased complexity in model, data, and computation, which hinders their widespread research and adoption. The introduction of additional submodules for turn-taking between user and agent increases system complexity and reduces the end-to-end nature of the models. Moreover, the extensive speech-text pretraining required on top of the LLM backbone is resource-intensive and limits scalability to arbitrary LLMs.
Finally, using codecs to model user and agent interactions simultaneously necessitates a delicate balance between speech perception and generation, presenting another significant challenge. To tackle the above problems, we propose a novel duplex S2S system with the following contributions:
1) A novel duplex S2S architecture featuring continuous user inputs and codec agent outputs with channel fusion that directly models simultaneous text and speech of both the user and agent. 2) We demonstrate several key advantages over existing duplex models: the use of a pretrained encoder for input enables the first duplex S2S model without a speech pretraining requirement, and because the agent and the user are modeled separately by the codec and the pretrained encoder, codec fine-tuning toward better agent voices is straightforward. 3) We propose a set of systematic metrics to evaluate conversational behaviors such as turn-taking and barge-in. Finally, it is the first open duplex S2S model with both training and inference code publicly available to foster reproducibility.

2. Related Work
Interest in full-duplex S2S models has grown in the past year. Key challenges include handling simultaneous user and agent streams and enabling turn-taking. Systems like [16, 18, 19] model single-channel interactions but use external signals, such as stopping commands [19] or submodules [16], to decide when to respond. Models like SyncLLM [20] and OmniFlatten [17] achieve full-duplex conversation by employing time-chunking methods, embedding time information into LLMs for synchronization. This interleaved processing allows the model to handle user inputs like barge-in with low latency.
Our duplex S2S model is trained without speech-text pretraining, unlike [14]. In multi-turn conversation, we align text and speech at the turn level, which simplifies data preparation compared to word-level alignment. Compared to [15, 18, 19], our model predicts text and speech simultaneously without requiring an explicit TTS component. Our speech codec model uses parallel codebooks (see details in Sec. 3.2) and enables speech generation with minimal latency. Our design further enables codec fine-tuning for improved agent voices while halving the required bitrate of previous works (0.6 kbps).

3. Model Architecture
Figure 1: The proposed duplex S2S model without requiring speech-text pretraining. Our model includes a streaming speech encoder, a personalized codec, and an LLM. The model is trained to predict both text and audio channels in parallel with turn-level alignments.

To achieve duplex behavior, our S2S model takes two input streams simultaneously: a user speech stream, and an agent speech and text stream. As shown in Fig. 1, the user speech is first encoded into continuous embeddings by the speech encoder at an 80-ms frame rate. We use a 100M-parameter streaming speech encoder from a CTC model [21]. We initialize the backbone LLM with the TinyLlama-1.1B-chat model [22]. A modality adapter is used between the speech encoder and the text LLM. To obtain the agent embeddings in training, we use a codec model [23] to generate 12.5 Hz speech codes for the agent speech. The LLM vocabulary is extended to include the extra tokens from the speech codec, with zero initialization. The two inputs are time-aligned and summed to form the input to the text LLM (similar to [24]). Both our speech encoder and text LLM are causal and thus streaming. In training, we fine-tune both the speech encoder and the backbone LLM. Text and speech losses are weighted differently in training (see Sec. 5.1). Our model is trained by multi-channel next-token prediction, similar to [1].
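The channel-fusion idea described above can be illustrated with a hedged PyTorch sketch. The module below is not the released implementation: the encoder dimension, hidden size, vocabulary sizes, and the use of separate per-channel embeddings and output heads (rather than a single extended LLM vocabulary) are simplifying assumptions, and the small Transformer stack merely stands in for the causal LLM backbone.

```python
import torch
import torch.nn as nn

class DuplexFusionLM(nn.Module):
    def __init__(self, d_model=2048, text_vocab=32000, codec_vocab=4096, n_codebooks=4):
        super().__init__()
        self.adapter = nn.Linear(512, d_model)            # modality adapter; 512 is an assumed encoder dim
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.codec_emb = nn.ModuleList(nn.Embedding(codec_vocab, d_model) for _ in range(n_codebooks))
        self.backbone = nn.TransformerEncoder(            # stand-in for the causal LLM backbone
            nn.TransformerEncoderLayer(d_model, nhead=16, batch_first=True), num_layers=2)
        self.text_head = nn.Linear(d_model, text_vocab)
        self.codec_heads = nn.ModuleList(nn.Linear(d_model, codec_vocab) for _ in range(n_codebooks))

    def forward(self, user_feats, agent_text, agent_codes):
        # user_feats: (B, T, 512) streaming-encoder outputs, assumed already
        # time-aligned to the agent channels; agent_text: (B, T) token ids;
        # agent_codes: (B, T, n_codebooks) codec ids.
        fused = self.adapter(user_feats) + self.text_emb(agent_text)
        for n, emb in enumerate(self.codec_emb):
            fused = fused + emb(agent_codes[..., n])      # channel fusion by summation
        hidden = self.backbone(fused)                     # causal mask omitted for brevity
        # Parallel next-token predictions: one text channel and N codec channels.
        return self.text_head(hidden), [head(hidden) for head in self.codec_heads]
```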
3.1. Simultaneous Agent Text and Speech Prediction
As shown in Fig. 1, we encode speech using 4 codebooks at a rate of 12.5 frames per second [23], and text targets are tokenized into a separate channel. We align the text and speech tokens at the turn level based on their start time. We prepend separate <BOS> tokens for text and speech at the beginning of the turn and append <EOS> at the end of the turn. The gap between text and speech tokens is padded with the text pad ID. We also tried word-level alignment between text and speech as in [14] and did not find an improvement. Empirically, we find that the model tends to learn the agent text first. Therefore, we introduce a small delay (i.e., one token) to the speech channels so that they are better conditioned on text-channel context without introducing significant latency.

3.2. Personalization-friendly Speech Tokenization
We employ a partially causal neural audio codec to transform raw speech signals into streaming tokenized representations. Given an audio signal a, the codec generates a two-dimensional acoustic matrix C_{T×N} = CodecModel(a), where T denotes the downsampled sequence length and N represents the number of codebooks per timestep. Each element in C_{T×N} is an m-bit discrete code. We adopt the state-of-the-art NanoCodec [23], which achieves reasonable-quality audio compression at 0.6 kbps with a frame rate of 12.5 frames per second, employing N = 4 independent codebooks. The codec leverages Finite Scalar Quantization (FSQ) [25], ensuring independence among codebooks. This independence removes the need for additional models or delay mechanisms, allowing all N codebooks to be predicted in parallel at each timestep, thereby enabling fully parallel modeling with low latency.
Our duplex design allows us to personalize the pretrained codec for agent voices to further enhance audio quality. This is enabled by modeling the agent and the user separately with the speech codec and a pretrained causal speech encoder. In the experimental section, we will evaluate the benefits in speech and reasoning quality resulting from codec personalization.

Table 1: Synthetic training data with multi-turn and barge-in.

Task          Dataset        #Hours   Speech   Multi-turn   Barge-in
Spoken QA     ASR-QA         20k      Mix      Augment      ×
Spoken QA     MS MARCO       0.2k     TTS      Augment      ×
Spoken QA     Alpaca         0.2k     TTS      Augment      ×
Conversation  Internal SFT   3k       TTS      Real         ✓
Conversation  UltraChat      3k       TTS      Augment      ✓
Conversation  Topic          0.3k     TTS      Augment      ✓

Figure 2: Duplex training data format. Our duplex data consists of separate user and agent streams, including turn-taking and barge-in behavior. Here, the user barges in at the second turn.

4. Duplex Data for Training
Table 1 summarizes our training data, which can be categorized into spoken QA and multi-turn conversations.

4.1. Single-turn synthetic and real spoken QA
Our most basic training data structure consists of a single-turn spoken QA exchange between the user and the agent. We use a multi-speaker TTS model [26] to synthesize the context, questions, and answers from MS MARCO [27] and Alpaca [28]. To mitigate overfitting to synthetic data, we follow [29] to create additional synthetic QA pairs using the Mixtral-8x22B