arXiv:2505.17410v1 [cs.SD] 23 May 2025

LLM-based Generative Error Correction for Rare Words with Synthetic Data and Phonetic Context

Natsuo Yamashita1, Masaaki Yamamoto1, Hiroaki Kokubo1, Yohei Kawaguchi1
1Hitachi, Ltd., Japan
{natsuo.yamashita.gh, masaaki.yamamoto.af, hiroaki.kokubo.dz, yohei.kawaguchi.xk}@hitachi.com

Abstract

Generative error correction (GER) with large language models (LLMs) has emerged as an effective post-processing approach to improve automatic speech recognition (ASR) performance. However, it often struggles with rare or domain-specific words due to limited training data. Furthermore, existing LLM-based GER approaches primarily rely on textual information, neglecting phonetic cues, which leads to over-correction. To address these issues, we propose a novel LLM-based GER approach that targets rare words and incorporates phonetic information. First, we generate synthetic data containing rare words for fine-tuning the GER model. Second, we integrate the ASR's N-best hypotheses along with phonetic context to mitigate over-correction. Experimental results show that our method not only improves the correction of rare words but also reduces the WER and CER across both English and Japanese datasets.

Index Terms: speech recognition, error correction, large language model, keyword biasing

1. Introduction

Automatic speech recognition (ASR) technology has achieved remarkable progress over the past decades, driven by advancements in deep learning techniques [1]. However, these systems often produce transcription errors, particularly due to background noise, speaker accents, different speaking styles, and domain-specific terms. To address this issue, researchers have explored various generative error correction (GER) methods, utilizing pre-trained language models (LMs) [2, 3, 4] and large language models (LLMs) [5]. These approaches aim to enhance ASR transcription quality by correcting grammatical, syntactical, or semantic errors.
Recent studies [6, 7, 8, 9, 10, 11] have demonstrated the effectiveness of LLMs in processing N-best hypotheses from ASR outputs to generate refined transcriptions. While these methods have shown promising results, they struggle with handling rare words that are absent from their training data [12, 13]. This limitation arises because existing GER models often rely on pre-trained datasets that lack sufficient examples of specialized terminology or rare word errors, and obtaining domain-specific error data for fine-tuning remains a challenge. Another critical limitation of existing methods is their reliance solely on textual information while ignoring phonetic cues [14]. This often results in over-correction, where unnecessary changes are made, shifting the transcription toward formal written language and reducing the fidelity of the spoken input [6]. For instance, models may produce semantically plausible outputs that fail to preserve the original pronunciation or intent, leading to phonetically incorrect results and compromising the overall quality of ASR systems.

To overcome these challenges, this study investigates an LLM-based GER approach that specifically targets rare words and incorporates phonetic information. First, we propose generating transcripts from rare words using an LLM and synthesizing speech from them to prepare error-pair data. By generating multiple transcripts and synthesizing speech with multiple speakers, we ensure a variety of error patterns in the generated data, which are then used to fine-tune the LLM. This approach enables the model to learn from various errors, allowing it to handle both common and rare words that are not part of its initial training data. In contrast to previous studies [6, 10, 11, 15] that replace words in hypotheses or rescore hypotheses to limit the introduction of new words and avoid over-correction, our method focuses on the accurate correction of errors, even when they are not present in the hypotheses. Second, inspired by previous work on phone-conditioned LMs [2], we incorporate phonetic information into the LLM-based GER process to improve both phonetic and semantic correctness. Unlike previous work [14], which integrated acoustic features into the ASR decoding process, our approach focuses on utilizing phonetic context derived from ASR hypotheses. This approach is significantly easier to integrate with the latest LLMs and ASR models that are continuously updated, especially given that the internal structures of some models, such as ChatGPT [16], are not publicly disclosed. By leveraging an LLM to predict simplified phonetic readings of text, we provide additional phonetic context as input to our correction model. This enables the system to incorporate both textual and phonetic features, resulting in more accurate corrections that better align with the original spoken input.

Experimental results demonstrate that our approach improves the correction of rare words and reduces the word error rate (WER) and character error rate (CER) across both English and Japanese datasets.

2. Related Work

2.1. Error correction for rare words

In many practical scenarios, a rare word list can be obtained in advance from sources such as user names, meeting chat logs, websites, manuals, or even words registered by users [17]. Several studies [18, 19] have explored the use of rare word lists for error correction, but the error-pair data is limited within the training data.
A recent study proposed RAG-based keyword error correction using LLMs, searching for similar entries in rare word lists [20], but correcting non-keyword errors remains a challenge. In this paper, we propose generating diverse error-pair data that includes rare words, enabling the correction of both rare and non-rare word errors.

[Figure 1: Overview of the proposed methods of synthetic data generation from rare words and GER with phonetic context.]

2.2. Phonetic notation

In the field of linguistics, the International Phonetic Alphabet (IPA) [21] has been widely used to accurately represent the pronunciation of all languages around the world. However, it is often perceived as complex, particularly by non-native speakers, due to its specialized linguistic symbols [22]. In the field of text-to-speech (TTS), ARPAbet for English and romanized Kana for Japanese are commonly used to represent phonetic information, obtained with grapheme-to-phoneme (G2P) tools [2, 23, 24]. A very recent study investigated LLM-based grapheme-to-phoneme conversion and reported that the performance of an LLM alone is significantly lower than that of an approach combining an LLM and a G2P dictionary [25]. This finding suggests that current LLMs struggle to effectively handle specialized phonetic representations. To address this issue, we propose an LLM-based approach using a simplified phonetic representation to enable LLMs to better comprehend phonetic information.

3. Methodology

Figure 1 illustrates our proposed methods. First, we propose training a GER model using synthetic data generated by generative models based on a given rare word list. Additionally, to prevent over-correction caused by ignoring phonetic information, we propose explicitly providing the GER model with not only N-best hypotheses but also 1-best phonetic representations.

3.1. Synthetic data generation from rare words

Given a set of W biasing words {w_n}_{n=1}^W, we generate T synthesized transcripts {t_n^(w)}_{n=1}^T that include each rare word. To create these transcripts, we prompt an LLM with instructions, optionally providing contextual information when available. For instance, the instruction might read: "Provide 5 different English sentences in various contexts that include the term w_n, which is a medical term.". Subsequently, using text-to-speech models, we generate S synthetic spoken utterances {u_n^(w,t)}_{n=1}^S for each synthetic transcript. Here, multiple transcripts are generated for each rare word to capture a variety of contexts, and utterances are synthesized with multiple speakers to ensure diversity in speech styles. Finally, using an ASR model, we obtain W × T × S hypotheses paired with the corresponding transcripts to fine-tune a GER model.

Table 1: Examples of English phonetic representations.
Ground truth | the sun is rising
Hypothesis   | the son is rising
IPA          | ðə sʌn ɪz ˈraɪzɪŋ
ARPAbet      | DHAH0 SAH1N IH1Z RAY1ZIH0NG
LSP          | thuh sun iz rahy-zing
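The generation loop described above can be sketched as follows. The three helper functions are hypothetical stand-ins for the LLM, TTS, and ASR calls (the paper uses ChatGPT/Llama, VITS/FastSpeech2/Azure TTS, and Whisper); here they are stubbed so the W × T × S control flow runs end to end.

```python
# Sketch of the W x T x S synthetic-data loop from Section 3.1.
# generate_transcripts, synthesize, and transcribe are hypothetical
# stand-ins for the LLM, TTS, and ASR systems, stubbed for illustration.

def generate_transcripts(word, t=4):
    """Prompt an LLM for T sentences containing `word` (stubbed)."""
    return [f"sentence {i} containing {word}" for i in range(t)]

def synthesize(transcript, speaker):
    """Synthesize speech with one speaker (stubbed as a tagged string)."""
    return f"<audio:{speaker}:{transcript}>"

def transcribe(utterance):
    """Run ASR on an utterance (stubbed as an echo with an injected error)."""
    text = utterance.split(":", 2)[2].rstrip(">")
    return text.replace("containing", "containg")

def build_error_pairs(rare_words, t=4, s=7):
    pairs = []
    for word in rare_words:                       # W rare words
        for transcript in generate_transcripts(word, t):   # T transcripts
            for speaker in range(s):                        # S speakers
                hyp = transcribe(synthesize(transcript, speaker))
                if hyp != transcript:             # keep only erroneous hypotheses
                    pairs.append((hyp, transcript))
    return pairs

pairs = build_error_pairs(["pneumonitis", "sarcoidosis"])
print(len(pairs))  # -> 56, i.e. W * T * S = 2 * 4 * 7 (every stub hypothesis has an error)
```

In the real pipeline the error filtering would also drop hypotheses identical to the reference, exactly as the note below describes.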
Note that hypotheses containing no errors are excluded, while those with errors only in non-rare words are included in the training process, to reduce the training cost.

3.2. GER with phonetic context

As discussed in Section 1, GER models often suffer from over-correction toward semantic correctness, ignoring phonetic information. To address this challenge, we propose combining the N-best hypotheses with phonetic representations derived from the 1-best hypothesis of the ASR model. For phonetic representations, we investigate the use of IPA, which is an international standard for representing pronunciation, as well as ARPAbet and romanized Kana, which are commonly used in the TTS field. Additionally, for LLM-based GER, we introduce a novel phonetic representation called LLM-based Simplified Phoneme (LSP). We hypothesize that a simpler phonetic representation is sufficient to capture phonetic information, as the aforementioned common representations include many specialized expressions that are difficult to understand intuitively. Furthermore, an LLM is more likely to effectively align transcripts with LSP because it outputs phonetic representations that it has learned. The prompting templates for LSP generation are as follows: "Convert the English text to simplified pronunciation." or "Convert the Japanese text to simplified Kana-like pronunciation.". As Table 1 shows, LSP is easier to understand compared to IPA and ARPAbet. Subsequently, the 1-best phonemes are combined with the N-best hypotheses from the ASR model to train the LLM. Here, we focus on utilizing 1-best phonemes rather than N-best to minimize computational cost and avoid complex input to the LLM.

Table 2: Summary of evaluation datasets.
dataset     | lang. | # utts. | rare word coverage (%) | TTS model   | TTS train-data
LibriSpeech | EN    | 2620    | 2.3                    | -           | -
EDGAR       | EN    | 500     | 5.8                    | VITS        | VCTK
CSJ eval1   | JA    | 1411    | 5.0                    | -           | -
CSJ eval2   | JA    | 1420    | 4.5                    | -           | -
MedTxt      | JA    | 605     | 10.0                   | FastSpeech2 | JSUT

4. Experimental Setup

4.1. Dataset

To investigate the efficiency of our approach in a multilingual setting, we evaluated both English and Japanese datasets, as summarized in Table 2. Since public datasets like Common Voice [26] typically contain very few rare words and are often included in the training sets of foundation models, we constructed synthetic datasets that focus on rare words for evaluation alongside public datasets.

- LibriSpeech [27]: A dataset of 960 hours of English audiobook recordings. We used the test-clean set for evaluation.
- EDGAR [28]: An English text corpus of annual reports from SEC EDGAR filings (1993–2020). We extracted 500 sentences from it and synthesized speech using VITS [29], which was trained on the VCTK corpus [30].
- Corpus of Spontaneous Japanese (CSJ): A collection of Japanese public speech on academic topics. We evaluated eval1 and eval2, which contain a sufficient number of rare words, prepared using the Kaldi recipe [31].
- MedTxt [32]: A Japanese text corpus of case reports extracted via OCR from J-Stage open-access PDF articles. We partially used words tagged as disease, anatomical part, or feature as rare words. Synthetic speech was generated using FastSpeech2 [33] and HiFi-GAN [34], which were trained on the JSUT corpus [35].

4.2.
Constructing rare word lists

Due to the lack of available rare word lists for the LibriSpeech, EDGAR, and CSJ datasets, we constructed rare word lists for each dataset with the assistance of an LLM prompt, such as "Extract highly complex words for recognition, including technical terms, names of people, and names of places.". To ensure fair experimental conditions, we prepared the rare word lists in accordance with the experimental setup of previous studies [18, 36], ensuring that the coverage of rare words remains below 10% of the total words used during inference, as shown in Table 2. To promote reproducibility and support future research, we have publicly released the rare word lists (https://github.com/natsuooo/llm-ger).

4.3. Generating synthetic data

Due to the limitations of our computing resources, we prepared data with T = 4 and S = 7 for the experiments, which was split into training and validation datasets at a ratio of 4:1. Given the absence of open-source, high-quality Japanese TTS models, we utilized Microsoft Azure Text-to-Speech [37] for TTS. For the ASR model, we employed a popular end-to-end ASR system, Whisper (Whisper-Large-v3-turbo) [38], to prepare simulated data and conduct inference. No domain-adaptation techniques were applied. We obtained N-best hypotheses from the beam search process of Whisper, setting N = 5.

4.4. Generating phonetic representations

We used the Python tool eng_to_ipa (https://pypi.org/project/eng-to-ipa) to convert English text to IPA. Due to the absence of a common tool for converting Japanese text to IPA, we employed an LLM to perform the conversion based on existing work [25]. For TTS-phoneme representations, we used the g2p tool (https://github.com/Kyubyong/g2p) for ARPAbet in English and pyopenjtalk (https://github.com/r9y9/pyopenjtalk) for romanized Kana in Japanese.

4.5. Training LLM-based GER models

We employed ChatGPT-4o-mini (2024-07-18) [16] for both the English and Japanese datasets, Llama-3.3-70B-Instruct (https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) for the English datasets, and Llama-3.1-Swallow-70B-Instruct-v0.3 (https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) for the Japanese datasets. We fine-tuned ChatGPT with 3 epochs, a batch size of 10, and a learning rate multiplier of 1.8. For fine-tuning the Llama models, we employed 8 NVIDIA H100 GPUs with 10 epochs, a batch size of 4, a learning rate of 5.0e-5, and the Adam optimizer. To improve computational efficiency, we adopted the low-rank adaptation (LoRA) method [39] with a rank of 16, applied to all linear modules.

4.6. Evaluation metrics

We evaluated the results with WER for the English datasets and CER for the Japanese datasets. Additionally, we reported the recall and precision of the rare words, following the methodology described in [40]. A high recall indicates that the model successfully identifies most of the rare words, while a high precision means that when the model identifies a word as a rare word, it is highly likely to be correct. Given that the proportion of rare words is very small in the overall text, our primary objective is to first improve WER and CER. Subsequently, we aim to enhance recall without causing significant degradation in precision.

5. Results and Analysis

5.1. Results

Tables 3 and 4 present the WER for the English datasets and the CER for the Japanese datasets, as well as the recall and precision scores, when using ChatGPT or the Llama models as the LLMs. First, we note that most of the results using ChatGPT were better than those using the Llama models.
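For reference, the GER input described in Sections 3.2 and 4.3 (N = 5 hypotheses plus the 1-best LSP) can be assembled with a simple prompt builder. The template wording below is a hypothetical illustration; the paper does not publish its exact fine-tuning prompt.

```python
# Hypothetical sketch of assembling the GER model input from Section 3.2:
# N-best ASR hypotheses plus the LSP phonemes of the 1-best hypothesis.
# The instruction wording is illustrative, not the paper's verbatim prompt.

def build_ger_prompt(nbest, lsp):
    lines = ["Correct the ASR transcription using the hypotheses and pronunciation."]
    for i, hyp in enumerate(nbest, start=1):
        lines.append(f"hypothesis {i}: {hyp}")
    lines.append(f"pronunciation of hypothesis 1: {lsp}")
    return "\n".join(lines)

prompt = build_ger_prompt(
    ["the son is rising", "the sun is rising", "the sons rising",
     "a son is rising", "the song is rising"],
    "thuh sun iz rahy-zing",
)
print(prompt.splitlines()[1])  # -> hypothesis 1: the son is rising
```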
The initial transcription results from Whisper reveal a very low recall for rare words, particularly in Japanese, which has numerous homophones of Chinese characters due to its limited variety of phonemes [41]. In Baseline 1, which applies a correction method based solely on prompts and the 1-best hypothesis, and Baseline 2, which utilizes N-best hypotheses for correction [9], the improvements in the WER, CER, and recall were modest. For example, in the case of the CSJ eval1 dataset using Baseline 2 with the Llama model, the CER remained almost unchanged, from 15.5% to 15.6%, and the recall was slightly improved, from 44.5% to 47.5%.

Table 3: Results with ChatGPT. Reported metrics are in the following formats: "WER / recall / precision" for the English datasets (LibriSpeech and EDGAR) and "CER / recall / precision" for the Japanese datasets (CSJ and MedTxt).

method                        | LibriSpeech       | EDGAR              | CSJ eval1          | CSJ eval2          | MedTxt
Whisper-large-v3-turbo        | 2.7 / 80.5 / 98.7 | 16.0 / 74.7 / 92.5 | 15.5 / 44.5 / 98.8 | 14.8 / 55.6 / 99.6 | 18.8 / 27.6 / 99.1
Baseline 1: prompt only       | 2.7 / 81.6 / 98.7 | 14.7 / 77.5 / 93.7 | 15.4 / 48.3 / 98.9 | 14.8 / 56.0 / 99.3 | 17.6 / 29.8 / 99.2
Baseline 2: N-best            | 2.8 / 81.1 / 98.6 | 15.1 / 76.1 / 93.6 | 15.7 / 49.6 / 98.6 | 14.7 / 58.8 / 99.7 | 17.6 / 30.5 / 99.2
Synth. + N-best               | 2.6 / 91.7 / 97.7 | 14.1 / 81.0 / 95.6 | 14.2 / 81.1 / 98.6 | 12.6 / 81.7 / 98.3 | 8.6 / 85.0 / 97.8
Synth. + N-best + IPA         | 2.8 / 92.4 / 96.6 | 13.9 / 79.5 / 95.0 | 27.3 / 90.5 / 91.9 | 22.3 / 83.3 / 80.1 | 9.4 / 84.7 / 94.9
Synth. + N-best + TTS-phoneme | 2.7 / 91.3 / 97.3 | 14.0 / 80.9 / 95.1 | 13.8 / 78.1 / 99.1 | 13.6 / 79.8 / 98.0 | 7.8 / 88.6 / 94.3
Synth. + N-best + LSP         | 2.5 / 94.2 / 96.4 | 13.8 / 81.1 / 95.1 | 13.7 / 82.0 / 98.9 | 12.6 / 84.9 / 98.3 | 7.8 / 87.4 / 97.0

Table 4: Results with the Llama models. Reported metrics are in the following formats: "WER / recall / precision" for the English datasets (LibriSpeech and EDGAR) and "CER / recall / precision" for the Japanese datasets (CSJ and MedTxt).

method                        | LibriSpeech       | EDGAR              | CSJ eval1          | CSJ eval2          | MedTxt
Whisper-large-v3-turbo        | 2.7 / 79.9 / 97.4 | 16.0 / 74.7 / 92.5 | 15.5 / 44.5 / 98.8 | 14.8 / 55.6 / 99.6 | 18.8 / 27.6 / 99.1
Baseline 1: prompt only       | 2.8 / 80.3 / 96.8 | 15.6 / 77.1 / 94.4 | 15.8 / 46.4 / 97.8 | 15.1 / 55.6 / 99.6 | 18.0 / 31.0 / 99.2
Baseline 2: N-best            | 2.7 / 82.1 / 97.4 | 14.6 / 73.8 / 98.1 | 15.6 / 47.5 / 98.9 | 14.7 / 58.8 / 99.7 | 17.8 / 20.8 / 99.2
Synth. + N-best               | 2.9 / 91.4 / 98.2 | 14.2 / 78.2 / 94.2 | 14.3 / 62.9 / 98.2 | 14.6 / 65.9 / 99.7 | 10.3 / 84.7 / 92.8
Synth. + N-best + IPA         | 3.5 / 90.8 / 95.6 | 14.9 / 77.3 / 94.0 | 25.2 / 72.4 / 86.5 | 18.1 / 79.2 / 87.1 | 21.9 / 76.5 / 83.4
Synth. + N-best + TTS-phoneme | 3.0 / 84.8 / 97.4 | 14.4 / 77.7 / 94.9 | 14.1 / 66.5 / 98.1 | 12.9 / 66.6 / 99.1 | 10.4 / 84.3 / 93.0
Synth. + N-best + LSP         | 2.6 / 86.5 / 97.7 | 14.2 / 77.4 / 94.3 | 14.1 / 65.7 / 98.6 | 12.8 / 64.1 / 99.3 | 9.3 / 85.0 / 92.1

In the proposed method, which leverages synthetic data and N-best hypotheses but excludes phonetic context, we observed significant improvements in the WER, CER, and recall, without a significant decrease in precision, across most datasets.
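The rare-word recall and precision reported in the tables (Section 4.6) can be computed by counting rare-word occurrences in references and hypotheses. The sketch below is one straightforward counting interpretation; the exact matching procedure of [40] may differ in detail (e.g., word alignment).

```python
# Sketch of rare-word recall/precision as occurrence counting, one
# plausible reading of the metric in Section 4.6; the procedure of [40]
# may differ in detail.

def rare_word_scores(refs, hyps, rare_words):
    rare = set(rare_words)
    tp = fn = fp = 0
    for ref, hyp in zip(refs, hyps):
        ref_tokens, hyp_tokens = ref.split(), hyp.split()
        for w in rare:
            r, h = ref_tokens.count(w), hyp_tokens.count(w)
            tp += min(r, h)      # rare words correctly produced
            fn += max(r - h, 0)  # rare words missed
            fp += max(h - r, 0)  # rare words produced where the reference has none
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

recall, precision = rare_word_scores(
    refs=["the patient shows sarcoidosis", "severe pneumonitis observed"],
    hyps=["the patient shows sarcoidosis", "severe pneumonia observed"],
    rare_words=["sarcoidosis", "pneumonitis"],
)
print(recall, precision)  # -> 0.5 1.0 (one rare word recovered, none hallucinated)
```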
Notably, the improvement in recall was particularly substantial for Japanese, which contains many homophones. For example, when using ChatGPT on the MedTxt dataset, which includes many complex medical terms, the recall dramatically increased from 27.6% to 85.0%. These results clearly show that the GER models effectively learned to correct both rare and non-rare words from synthetic data generated from rare words.

Next, we discuss the impact of different phonetic representations. When using IPA, the complexity of its representations caused the GER models to overfit to the rare words, resulting in high recall but a substantial degradation of the WER and CER, particularly on the Japanese datasets. When TTS-phonemes, such as ARPAbet for English and romanized Kana for Japanese, were used, no observable improvement was found on the English datasets, while a slight improvement was observed on the Japanese datasets. In contrast, compared with the proposed method without phonetic context, incorporating LSP resulted in further improvements in the WER and CER across all datasets, without significantly degrading recall or precision. Specifically, for the LibriSpeech dataset with the Llama model, although the WER increased from Whisper's 2.7% to 2.9% without phonetic context, leveraging LSP improved the WER to 2.6%. These results indicate that LSP helped prevent over-correction caused by an excessive focus on the semantic aspects of texts and thereby improved the WER and CER.

5.2. Analysis of the number of transcripts and speakers

Figure 2 illustrates how the number of transcripts and speakers during synthetic data generation impacts rare word recognition performance on the MedTxt dataset when using ChatGPT.

[Figure 2: F1 scores of rare words with different numbers of transcripts and speakers in the MedTxt dataset using ChatGPT.]

As shown in Figure 2, a significant improvement of approximately 2% in the F1 score is observed when the number of transcripts increases from 1 to 4 while keeping the number of speakers fixed at 7. Beyond this point, the rate of improvement gradually diminishes. Regarding the number of speakers with the number of transcripts fixed at 4, although the improvement is smaller compared to increasing the number of transcripts, increasing the number of speakers, and thereby the variation in speaking styles, contributes to enhancing the F1 scores.

6. Conclusions

In this paper, we proposed an approach to enhance LLM-based GER for transcripts containing rare words.
We introduced a method for generating diverse synthetic data containing rare words, combined with leveraging LLM-based simplified phonemes to avoid over-correction. The experimental results demonstrated that our approach achieved the best WER and CER across all datasets, while improving the recall of rare words without compromising precision. Future work includes evaluating our method on larger datasets and across different domains to assess its scalability and generalizability.

7. References

[1] R. Prabhavalkar, T. Hori, T. N. Sainath, R. Schlüter, and S. Watanabe, "End-to-end speech recognition: A survey," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.
[2] H. Futami, H. Inaguma, S. Ueno, M. Mimura, S. Sakai, and T. Kawahara, "Non-autoregressive error correction for CTC-based ASR with phone-conditioned masked LM," in Proc. Interspeech, 2022, pp. 3889–3893.
[3] S. Zhang, H. Huang, J. Liu, and H. Li, "Spelling error correction with soft-masked BERT," in Proc. ACL, 2020, pp. 882–890.
[4] H. Futami, H. Inaguma, S. Ueno, M. Mimura, S. Sakai, and T. Kawahara, "Distilling the knowledge of BERT for sequence-to-sequence ASR," in Proc. Interspeech, 2020, pp. 3635–3639.
[5] S. Li, C. Chen, C. Y. Kwok, C. Chu, E. S. Chng, and H. Kawai, "Investigating ASR error correction with large language model and multilingual 1-best hypotheses," in Proc. Interspeech, 2024, pp. 1315–1319.
[6] J. Pu, T.-S. Nguyen, and S. Stüker, "Multi-stage large language model correction for speech recognition," arXiv preprint arXiv:2310.11532, 2023.
[7] C. Chen, Y. Hu, C.-H. H. Yang, H. Liu, S. M. Siniscalchi, and E. S. Chng, "Generative error correction for code-switching speech recognition using large language models," arXiv preprint arXiv:2310.13013, 2023.
[8] C. Chen, Y. Hu, C.-H. H. Yang, S. M. Siniscalchi, P.-Y. Chen, and E.-S. Chng, "Hyporadise: An open baseline for generative speech recognition with large language models," Advances in Neural Information Processing Systems, vol. 36, 2024.
[9] R. Ma, M. J. Gales, K. M. Knill, and M. Qian, "N-best T5: Robust ASR error correction using multiple input hypotheses and constrained decoding space," pp. 3267–3271, 2023.
[10] Y. Ko, S. Li, C.-H. H. Yang, and T. Kawahara, "Benchmarking Japanese speech recognition on ASR-LLM setups with multi-pass augmented generative error correction," arXiv preprint arXiv:2408.16180, 2024.
[11] C.-H. H. Yang, Y. Gu, Y.-C. Liu, S. Ghosh, I. Bulyko, and A. Stolcke, "Generative speech recognition error correction with large language models and task-activating prompting," in Proc. ASRU, 2023, pp. 1–8.
[12] M. Ghazvininejad, H. Gonen, and L. Zettlemoyer, "Dictionary-based phrase-level prompting of large language models for machine translation," arXiv preprint arXiv:2302.07856, 2023.
[13] J. Wu, F. Che, X. Zheng, S. Zhang, R. Jin, S. Nie, P. Shao, and J. Tao, "Can large language models understand uncommon meanings of common words?" 2024.
[14] C. Chen, R. Li, Y. Hu, S. M. Siniscalchi, P.-Y. Chen, E. Chng, and C.-H. H. Yang, "It's never too late: Fusing acoustic information into large language models for automatic speech recognition," in Proc. ICLR, 2024, pp. 1–17.
[15] R. Ma, M. Qian, P. Manakul, M. Gales, and K. Knill, "Can generative large language models perform ASR error correction?" arXiv preprint arXiv:2307.04172, 2023.
[16] OpenAI, "ChatGPT." [Online]. Available: https://openai.com
[17] Y. Nakagome and M. Hentschel, "InterBiasing: Boost unseen word recognition through biasing intermediate predictions," in Proc. Interspeech, 2024, pp. 207–211.
[18] J. He, Z. Yang, and T. Toda, "ED-CEC: Improving rare word recognition using ASR postprocessing based on error detection and context-aware error correction," in Proc. ASRU, 2023, pp. 1–6.
[19] A. Raghuvanshi, V. Ramakrishnan, V. Embar, L. Carroll, and K. Raghunathan, "Entity resolution for noisy ASR transcripts," in Proc. EMNLP-IJCNLP, 2019, pp. 61–66.
[20] E. Pusateri, A. Walia, A. Kashi, B. Bandyopadhyay, N. Hyder, S. Mahinder, R. Anantha, D. Liu, and S. Gondala, "Retrieval augmented correction of named entity speech recognition errors," arXiv preprint arXiv:2409.06062, 2024.
[21] I. P. Association, Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet, 1999.
[22] M. Kawai, S. Akai, and R. Shirakawa, "Simplified Japanese phonetic alphabet as a tool for Japanese course design," in Proc. CAJLE, 2019, pp. 146–155.
[23] I. Elias, H. Zen, J. Shen, Y. Zhang, Y. Jia, R. Skerry-Ryan, and Y. Wu, "Parallel Tacotron 2: A non-autoregressive neural TTS model with differentiable duration modeling," 2021.
[24] A. Garg, J. Kim, S. Khyalia, C. Kim, and D. Gowda, "Data driven grapheme-to-phoneme representations for a lexicon-free text-to-speech," in Proc. ICASSP, 2024, pp. 11091–11095.
[25] M. F. Qharabagh, Z. Dehghanian, and H. R. Rabiee, "LLM-powered grapheme-to-phoneme conversion: Benchmark and case study," arXiv preprint arXiv:2409.08554, 2024.
[26] R. Ardila, M. Branson, K. Davis, M. Kohler, J. Meyer, M. Henretty, R. Morais, L. Saunders, F. Tyers, and G. Weber, "Common Voice: A massively-multilingual speech corpus," in Proc. LREC, 2020, pp. 4218–4222.
[27] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in Proc. ICASSP, 2015, pp. 5206–5210.
[28] L. Loukas, M. Fergadiotis, I. Androutsopoulos, and P. Malakasiotis, "EDGAR-CORPUS: Billions of tokens make the world go round," in Proc. ECONLP, 2021, pp. 13–18.
[29] J. Kim, J. Kong, and J. Son, "Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech," in Proc. ICML, 2021, pp. 5530–5540.
[30] J. Yamagishi, C. Veaux, and K. MacDonald, "CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit," 2019.
[31] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., "The Kaldi speech recognition toolkit," in Proc. ASRU, 2011.
[32] S. Yada, Y. Nakamura, S. Wakamiya, and E. Aramaki, "Real-MedNLP: Overview of real document-based medical natural language processing task," in Proc. NTCIR-16, 2022, pp. 285–296.
[33] Y. Ren, C. Hu, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, "FastSpeech 2: Fast and high-quality end-to-end text to speech," arXiv preprint arXiv:2006.04558, 2020.
[34] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," Advances in Neural Information Processing Systems, vol. 33, pp. 17022–17033, 2020.
[35] R. Sonobe, S. Takamichi, and H. Saruwatari, "JSUT corpus: Free large-scale Japanese speech corpus for end-to-end speech synthesis," arXiv preprint arXiv:1711.00354, 2017.
[36] D. Le, M. Jain, G. Keren, S. Kim, Y. Shi, J. Mahadeokar, J. Chan, Y. Shangguan, C. Fuegen, O. Kalinli et al., "Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion," in Proc. Interspeech, 2021, pp. 1772–1776.
[37] Microsoft, "AI Speech: Azure AI Services," n.d. [Online]. Available: https://azure.microsoft.com/en-us/products/ai-services/ai-speech
[38] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in International Conference on Machine Learning, 2023, pp. 28492–28518.
[39] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "LoRA: Low-rank adaptation of large language models," in Proc. ICLR, 2022.
[40] N. Jung, G. min Kim, and J. S. Chung, "Spell
Source: https://arxiv.org/abs/2505.17413v1

Conversations: Love Them, Hate Them, Steer Them

Chebrolu Niranjan
BITS Pilani / Pilani, India
Gerard Christopher Yeo, Kokil Jaidka
Centre for Trusted Internet & Community, National University of Singapore, Singapore
F20212452@pilani.bits-pilani.ac.in; e0545159@u.nus.edu; jaidka@nus.edu.sg

Abstract

Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling them with nuanced, human-like emotional expression remains a significant challenge. Current alignment techniques often address surface-level output or require extensive fine-tuning. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuances. We first employ attribution patching to identify causally influential components, finding a key intervention locus by observing activation patterns during diagnostic conversational tasks. We then derive emotional expression vectors from the difference in the activations generated by contrastive text pairs (positive vs. negative examples of target emotions). Applying these vectors to new conversational prompts significantly enhances emotional characteristics: steered responses show increased positive sentiment (e.g., joy, trust) and more frequent first-person pronoun usage, indicative of greater personal engagement. Our findings offer a precise and interpretable method for controlling specific emotional attributes in LLMs, contributing to developing more aligned and empathetic conversational AI.

1 Introduction

Large Language Models (LLMs) demonstrate strong performance across a wide range of tasks from translation to reasoning, showcasing remarkable generalization and fluency (Radford et al., 2019; Touvron et al., 2023; OpenAI et al., 2024; Grattafiori et al., 2024; Team et al., 2024; The; DeepSeek-AI et al., 2025).
However, when de - ployed in high -stakes interpersonal contexts such as mental health support, negotiation, or compan - ionship, these models often lack nuanced human traits such as empathy, emotional expressiveness, and collaborative intent ( Hendrycks et al. , 2023 ). The ability to generate emotionally intelligent re- sponses remains a defining frontier for conversa - tional AI ( Furniturewala et al. , 2024 ; Kasirzadeh and Gabriel , 2022 ). The core challenge is developing precise, in - terpretable methods to steer LLMs toward these fine-grained traits —without retraining the model or compromising its general capabilities. Current alignment strategies, such as Reinforcement Learn - ing from Human Feedback (RLHF), have achieved meaningful improvements (Lambert , 2025 ; Li et al., 2023 ), but remain opaque and brittle. They are vul- nerable to issues like reward hacking, specification gaming ( Malmqvist , 2025 ), and persistent value - action gaps (Shen et al., 2025 ). These shortcomings are compounded by a lack of transparency: align - ment remains a post hoc modification with limited insight into the internal workings of the model. While recent work in mechanistic interpretabil - ity offers a more transparent alternative by di - rectly analyzing and intervening in a model’s internal activations (Elhage et al., 2021 ; Olah et al., 2020 ), the question of how to identify inter- pretable intervention points for complex, context - sensitive traits —such | https://arxiv.org/abs/2505.17413v1 |
as emotional support or disclosure, remains largely unanswered. We address this gap through a two-stage activation engineering framework that enhances LLaMA 3.1-8B's ability to generate emotionally resonant responses:

1. We demonstrate that attribution patching can isolate key activation sites for emotionally expressive behavior in LLMs.

2. We develop a scalable method to steer outputs via contrastive activation vectors applied at inference, enhancing emotional tone without retraining.

Results on the Bolt SMS dataset (Tracey et al., 2021) show that our method yields improvements in positive sentiment, selected EmoLex categories (e.g., joy, trust), and first-person pronoun use, indicating increased emotional support and self-disclosure without fine-tuning or loss of fluency.

2 Steering Framework for Emotional Support and Disclosure

Activation engineering, which involves identifying and modifying specific activation directions, has shown promise for steering models along dimensions such as sentiment and factuality (Turner et al., 2024; Zou et al., 2023a; Li et al., 2024; Park et al., 2024; Tigges et al., 2023; Nanda et al., 2023; Panickssery et al., 2024; Meng et al., 2023). However, steering toward richer human qualities such as emotional support or disclosure remains underexplored, especially with respect to identifying interpretable and causally effective intervention points.

We address this by extending attribution patching (Kramár et al., 2024) to isolate layers in LLaMA 3.1-8B that causally influence emotionally relevant behavior, and by constructing contrastive activation vectors that can be applied during inference. Additional details, prompt design, and evaluation procedures are provided in Appendix A and Appendix C.
2.1 Attribution-Based Layer Selection

We use attribution patching to measure the influence of internal activations on target token probabilities in a set of cloze-style diagnostic prompts. For each prompt pair, activations from a "clean" run (emotionally appropriate) are patched into a "corrupted" run (emotionally flat), and changes in logit difference scores are measured. This identifies the transformer layers and token positions most responsible for desired emotional completions. We find Layer 2 and Layer 3 to be consistently impactful for support and disclosure tasks, respectively.

2.2 Contrastive Steering Vector Construction

From curated sets of emotionally expressive and neutral texts, we extract hidden activations at the selected layer ℓ. Let h+_{i,t} ∈ R^d and h−_{i,t} ∈ R^d denote the activations at token t for sample i in the positive and neutral sets, with n samples per set and T tokens per sample. We compute the mean activations:

µ_positive = (1/nT) Σ_{i=1}^{n} Σ_{t=1}^{T} h+_{i,t},  µ_neutral = (1/nT) Σ_{i=1}^{n} Σ_{t=1}^{T} h−_{i,t}  (1)

The steering vector is the difference between these means:

V_steer = µ_positive − µ_neutral  (2)

2.3 Inference-Time Activation Modification

During generation, V_steer is added to the final k token positions in the hidden states at layer ℓ. Let h_t ∈ R^d be the activation at token t. The modified activation is:

h′_t = h_t + αV_steer  (3)

Here, α is a scaling factor chosen via grid search to balance emotional tone and fluency. The adjusted activations are passed
through the remaining layers unaltered.

2.4 Evaluation Metrics

To evaluate the effects of steering, we compare steered and unsteered model outputs using a range of automated linguistic and affective metrics:

• Sentiment polarity based on a BERT classifier fine-tuned on SST-2 (Devlin et al., 2019).
• Emotion word counts from NRC EmoLex (Mohammad and Turney, 2013), including joy, trust, sadness, anger, and fear.
• Empathy and support cues based on keyword matching and Empath lexicon categories.
• Linguistic style indicators such as first-person pronoun usage and politeness features using ConvoKit.

Significance is assessed using Welch's t-tests for continuous metrics and χ2 tests for categorical comparisons, with Benjamini–Hochberg correction applied for multiple testing. Additional details and breakdowns appear in Appendix C.

2.5 Experimental Setup: Model and Datasets

All experiments use the base pre-trained LLaMA 3.1-8B model (Grattafiori et al., 2024). The primary data source is the Bolt SMS chat dataset (Tracey et al., 2021) for the emotional tasks (emotional support and disclosure). Dialogues were filtered for a minimum of 6 turns to ensure sufficient conversational context. Steering vectors are then derived by taking the difference in activations (at the final token positions of the contrasting texts) from task-specific positive and negative contrastive prompt pairs processed through the target model layer.

Figure 1: Layer-wise attention head contributions to the empathy diagnostic task.

Figure 1 exemplifies these findings, highlighting heads in early-to-mid layers with strong positive (blue) or negative (red) attribution, indicating influence on empathetic response generation. While individual head scores offer fine-grained insights, our intervention strategy (summarized in Table 1) operates at the layer level to ensure broader impact.
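As a concrete illustration of Sections 2.2 and 2.3, the construction in Eqs. (1)–(3) can be sketched on toy activation arrays. This is an illustrative NumPy sketch, not the released code; the shapes and function names are our own:

```python
import numpy as np

def build_steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Eqs. (1)-(2): mean over n samples and T tokens for each set, then difference."""
    mu_positive = pos_acts.mean(axis=(0, 1))  # shape (d,)
    mu_neutral = neg_acts.mean(axis=(0, 1))   # shape (d,)
    return mu_positive - mu_neutral           # V_steer, shape (d,)

def steer_hidden_states(hidden: np.ndarray, v_steer: np.ndarray,
                        alpha: float = 2.0, k: int = 15) -> np.ndarray:
    """Eq. (3): add alpha * V_steer to the final k token positions at the chosen layer."""
    out = hidden.copy()
    out[-k:] = out[-k:] + alpha * v_steer
    return out

# Toy example: n=4 contrastive samples, T=12 tokens, hidden size d=8.
rng = np.random.default_rng(0)
pos = rng.normal(0.5, 1.0, size=(4, 12, 8))
neg = rng.normal(0.0, 1.0, size=(4, 12, 8))
v = build_steering_vector(pos, neg)

h = rng.normal(size=(20, 8))  # activations for a 20-token prompt at layer ℓ
h_steered = steer_hidden_states(h, v, alpha=2.0, k=15)
```

In a real pipeline these functions would be wired into a forward hook on the selected transformer layer, so that the modified hidden states are what the remaining layers receive during generation.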
For the Emotional Support and Disclosure tasks, Layer 2 consistently emerged as a key point of intervention. Full results for other diagnostic categories (offer acceptance, counteroffer, and realism) are provided in Appendix A.

3 Results

Our experiments investigate whether activation engineering can steer LLaMA 3.1-8B towards more human-like conversation patterns. We first identify the optimal intervention points in the model using attribution patching (Kramár et al., 2024) and then evaluate the impact of steering with emotional disclosure and emotional support vectors.

3.1 Identifying Optimal Intervention Locus via Attribution Patching

To pinpoint influential model components for steering, we employed attribution patching (Kramár et al., 2024) across four diagnostic conversational scenarios: offer acceptance, counteroffer, realism, and empathy. This involved measuring the logit difference on target tokens when patching activations from a clean (desired trait) run to a corrupted (neutral/undesired trait) run.

Figure 1 exemplifies these findings for the 'empathy' diagnostic task. However, for clarity in the main text, Table 1 summarizes the overall intervention strategy derived from these analyses. Specific heads across various layers, particularly in early-to-mid layers, demonstrate notable attribution scores. While individual head contributions
provide granular insights, our steering methodology targets full layer activations for broader impact. For Emotional Support and Disclosure, Layer 2 emerged as a critical intervention point.

Diagnostic Scenario | Key Layer | Targeted Component | Intervention Span
Emotional Support | Layer 2 | Layer Output | Last 15 Tokens
Emotional Disclosure | Layer 3 | Layer Output | Last 15 Tokens
Negotiation Style | Layer 2 | Layer Output | All Tokens
Negotiation Tone | Layer 4 | Layer Output | All Tokens
Price-Stance | Layer 0 | Layer Output | All Tokens

Table 1: Summary of attribution patching findings and the derived intervention strategy. The layers chosen consistently showed high causal influence across diagnostic scenarios relevant to conversational nuances. Steering was applied to the initial token activations of input prompts up to the length of the contrastive texts. See Appendix A for further details and validation. Full attribution heatmaps for all components and tasks are in Appendix A.

Steering was applied to the final token activations of the input prompts, instead of the traditional contrastive activation addition method (Panickssery et al., 2024) of imputing up to the sequence length of the contrastive prompts.

3.2 Steering for Emotional Support and Disclosure

Using the identified intervention points, we applied steering vectors derived from contrastive text pairs to enhance emotional support and disclosure.

Category | Type | Prompt Example | Expected Completion
Offer Acceptance | Clean | Person A: $100 is too high for a second-hand chair. Person B: That does sound ___. | reasonable
Offer Acceptance | Corrupted | Person A: Can you lower the price by $30? Person B: That request is ___. | absurd
Counteroffer | Clean | Person A: I can offer $40. Person B: That's ___. | unreasonable
Counteroffer | Corrupted | Similar context with key terms swapped. | reasonable
Realism | Clean | The article blames crime on poor choices. It ignores ___ causes. | structural
Realism | Corrupted | The piece says everyone had the same chance. Opportunities were clearly ___. | unequal
Empathy | Clean | The floods destroyed entire villages. That's ___. | devastating
Empathy | Corrupted | A child was killed in the protests. That's ___. | fine

Figure 2: Comparison of mean linguistic feature scores for steered (blue diamonds) versus unsteered (orange circles) model responses. The left panel shows 'Emotional Support'; the right, 'Emotional Disclosure'. Features are grouped into Sentiment, EmoLex, Pronouns, Politeness, and Empathy. Asterisks on y-axis labels indicate feature levels (e.g., EmoLex joy); significance is discussed in the text.

Responses were generated for dialogues from the Bolt SMS dataset (Tracey et al., 2021). Steering significantly impacted the emotional content. Overall sentiment shifted positively, with steered responses more frequently classified as 'Positive' (P < 0.001, χ2 test for both support and disclosure tasks). Figure 2 visually depicts these changes across various linguistic features, including EmoLex categories (Mohammad and Turney, 2013). For 'Emotional Support,' steered outputs (Figure 2, left panel) showed significant increases in joy (mean steered: 0.135 vs. unsteered: 0.125, P = 0.031) and positive emotion words (mean steered: 0.249 vs. unsteered: 0.230, P = 0.010). For 'Emotional Disclosure' (Figure 2, right panel), steering elicited a broader emotional range, with significant increases in joy (P
< 0.001), trust (P = 0.002), sadness (P = 0.027), and overall positive (P < 0.001) and negative (P = 0.019) expressions. Steered responses also exhibited significantly more first-person pronouns (P < 0.001 for both tasks), as illustrated in Figure 2, suggesting increased personal engagement.

Further quantitative results, including politeness and empathy keyword analyses, are presented in Appendix C. Our approach was successful: steering towards emotional support significantly increased the use of communication-related keywords (P = 0.010) and listening-related keywords (P = 0.011), as detailed in Table 7.

Table 2: Diagnostic prompt types used to evaluate model behavior across agreement, negotiation, realism, and empathy. Each type includes clean and corrupted variants with expected completions.

4 Discussion and Conclusion

Our research demonstrates that activation engineering can steer Large Language Models, specifically LLaMA 3.1-8B, towards more human-like emotional expressions in conversational contexts. The core narrative of this work is twofold: first, that causally influential model components for specific traits like empathy can be identified using techniques like attribution patching (Section 3.1); and second, that targeted interventions at these locations, adding steering vectors to the output of layers, can translate into statistically significant and qualitatively meaningful changes in the model's generated emotional support and disclosure (Section 3.2). Our quantitative analyses, including shifts in overall sentiment and specific EmoLex categories (e.g., joy, trust, sadness), robustly support these claims.

Unlike prompt engineering, which treats the model as a black box, or full fine-tuning, which is computationally expensive and can lead to catastrophic forgetting (Cai et al., 2022), activation steering offers a targeted mechanism to endow models with desirable characteristics.
This capability is particularly salient for developing AI systems that align more closely with human values and interactional norms (Turner et al., 2024; Hendrycks et al., 2023). The process of identifying influential components through attribution patching contributes to the broader field of mechanistic interpretability, offering insights into how and where complex behaviors are represented and processed within models (Olah et al., 2020; Elhage et al., 2021).

5 Limitations

It is crucial to acknowledge the limitations of this study to ensure a calibrated understanding of our contributions:

• Model and attribute specificity: Our current findings are based on a single model architecture (LLaMA 3.1-8B) and a focused set of emotional attributes (support and disclosure). The extent to which these layers serve as optimal intervention points across different model families (e.g., GPT, Claude) or for a wider array of cognitive or affective attributes remains an open question. Different architectures or model sizes might exhibit different internal organizations (Kaplan et al., 2020; Hoffmann et al., 2022).

• Steering vector specificity and unintended effects: While our results show desired emotional shifts, the specificity of the steering vectors requires deeper investigation. It is plausible that steering for one emotion (e.g., increasing 'joy') might
inadvertently affect other desirable qualities, such as coherence or factual accuracy, or introduce subtle biases (Zou et al., 2023b; Liang et al., 2021).

• Scalability of contrastive pair curation: The current method relies on manually curated contrastive text pairs to derive steering vectors. While effective for a limited set of attributes, this approach may not scale efficiently to a large number of diverse steering targets.

References

The Claude 3 model family: Opus, Sonnet, Haiku.

Deng Cai, Elman Mansimov, Yi-An Lai, Yixuan Su, Lei Shu, and Yi Zhang. 2022. Measuring and reducing model update regression in structured prediction for NLP. Preprint, arXiv:2202.02976.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint, arXiv:2501.12948.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, and 6 others. 2021. A mathematical framework for transformer circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html.

Shaz Furniturewala, Kokil Jaidka, and Yashvardhan Sharma. 2024.
Impact of decoding methods on human alignment of conversational LLMs. Preprint, arXiv:2407.19526.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2023. Aligning AI with shared human values. Preprint, arXiv:2008.02275.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, and 3 others. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.

Atoosa Kasirzadeh and Iason Gabriel. 2022. In conversation with artificial intelligence: aligning language models with human values. Preprint,
arXiv:2209.00731.

János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. 2024. AtP*: An efficient and scalable method for localizing LLM behaviour to components. Preprint, arXiv:2403.00745.

Nathan Lambert. 2025. Reinforcement learning from human feedback. Preprint, arXiv:2504.12501.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2024. Inference-time intervention: Eliciting truthful answers from a language model. Preprint, arXiv:2306.03341.

Zihao Li, Zhuoran Yang, and Mengdi Wang. 2023. Reinforcement learning with human feedback: Learning dynamic choices via pessimism. Preprint, arXiv:2305.18438.

Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. Preprint, arXiv:2106.13219.

Lars Malmqvist. 2025. Winning at all cost: A small environment for eliciting specification gaming behaviors in large language models. arXiv preprint arXiv:2505.07846.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2023. Locating and editing factual associations in GPT. Preprint, arXiv:2202.05262.

Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465.

Neel Nanda, Andrew Lee, and Martin Wattenberg. 2023. Emergent linear representations in world models of self-supervised sequence models. Preprint, arXiv:2309.00941.

Christopher Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. 2020. Zoom in: An introduction to circuits.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, and 275 additional authors. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Nina Panickssery, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Matt Turner. 2024. Steering Llama 2 via contrastive activation addition. Preprint, arXiv:2312.06681.
Kiho Park, Yo Joong Choe, and Victor Veitch. 2024. The linear representation hypothesis and the geometry of large language models. Preprint, arXiv:2311.03658.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Hua Shen, Nicholas Clark, and Tanushree Mitra. 2025. Mind the value-action gap: Do LLMs act in alignment with their values? Preprint, arXiv:2501.15463.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, and 1331 others. 2024. Gemini: A family of highly capable multimodal models. Preprint, arXiv:2312.11805.

Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, and Neel Nanda. 2023. Linear representations of sentiment in large language models. Preprint, arXiv:2310.15154.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.

Jennifer Tracey, Dana Delgado, Song Chen, and Stephanie Strassel. 2021. BOLT Chinese SMS/Chat Parallel Training Data.
Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, and Monte MacDiarmid. 2024. Steering language models with activation engineering. Preprint, arXiv:2308.10248.

Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, and 2 others. 2023a. Representation engineering: A top-down approach to AI transparency. Preprint, arXiv:2310.01405.

Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. 2023b. Universal and transferable adversarial attacks on aligned language models. Preprint, arXiv:2307.15043.

Appendix

We provide a link to our anonymized code repository here.

A Detailed Attribution Patching Methodology and Additional Visualizations

Intervention layer selection for steering emotional support and disclosure was informed by an initial attribution patching analysis across four diagnostic conversational scenarios: offer acceptance, counteroffer, realism, and empathy. These scenarios were chosen to reflect a broad range of conversational phenomena underpinning human-like interaction, rather than narrowly targeting overt emotional content. Specifically, empathy directly assessed emotional understanding; offer acceptance and counteroffer evaluated the model's grasp of social dynamics and intent; and realism tested its sense of situational plausibility when dealing with price stances in a negotiation scenario.

Our objective was to identify layers showing consistent causal influence, measured via changes in logit difference scores when clean activations were patched into corrupted runs, across these settings. Such layers were considered strong candidates for targeted intervention to enhance fine-grained traits like support and disclosure.
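The patching criterion just described can be illustrated on a toy model in which everything downstream of the patched site is linear, so the gradient-based attribution estimate is exact. This is a NumPy sketch with hypothetical weights, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: input -> tanh hidden activation -> linear unembedding over a 5-token vocab.
W1 = rng.normal(size=(8, 8))
W_out = rng.normal(size=(8, 5))

def forward(x):
    act = np.tanh(x @ W1)   # the "layer output" we consider patching
    logits = act @ W_out
    return act, logits

clean_act, _ = forward(rng.normal(size=8))            # clean run (desired completion)
corr_act, corr_logits = forward(rng.normal(size=8))   # corrupted run

DESIRED, UNDESIRED = 0, 1
corr_logit_diff = corr_logits[DESIRED] - corr_logits[UNDESIRED]

# Gradient of the logit-difference metric w.r.t. the activation (analytic here because
# the unembedding is linear; in a real model it comes from a backward pass on the
# corrupted prompt).
grad = W_out[:, DESIRED] - W_out[:, UNDESIRED]

# Attribution-patching estimate: L((clean_act - corrupted_act) * corrupted_grad_act)
attribution = ((clean_act - corr_act) * grad).sum()

# Exact effect of actually patching the clean activation into the corrupted run:
patched_logits = clean_act @ W_out
exact_change = (patched_logits[DESIRED] - patched_logits[UNDESIRED]) - corr_logit_diff
```

In a transformer the same estimate is only a first-order approximation of the true patching effect, since later layers are nonlinear; components with large estimated scores are then candidates for intervention.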
Table 1 summarizes the outcomes of these patching experiments, listing the "Key Layer" selected for each "Diagnostic Scenario," defined as a target behavior for which an intervention was developed. For instance, Layer 2 was most consistently influential for Emotional Support, while Layer 3 had the strongest impact on Emotional Disclosure. Negotiation-related behaviors (e.g., Negotiation Style – Layer 2, Tone – Layer 4, Price-Stance – Layer 0) are included based on the counteroffer and acceptance tasks, demonstrating the broader relevance of our layer selection approach.

A.1 Diagnostic Prompts

The clean and corrupted diagnostic prompts are designed to test specific model behaviors. For each scenario, the model is presented with a context and is expected to complete a sentence with a semantically appropriate single-token response. The corrupted prompt typically involves swapping key details from the clean prompt to observe changes in model behavior. The exact prompts used are detailed in Table 2.

A.2 Logit Difference Metric

The logit difference metric is a core component of attribution patching. It is calculated as the difference between the logit of the target (desired) token and the logit of a prominent undesired token. This calculation is performed when activations from the clean run (where the model produces the desired output) are patched into the corresponding locations in the corrupted
run (where the model initially produces an undesired output). Essentially, this metric quantifies the causal effect of specific model activations on steering the output towards the desired token and away from the undesired one. The change in this logit difference upon patching specific activations (e.g., from a particular layer or attention head) indicates the influence of those activations on the model's behavior for that specific task. The metric is computed by taking a backwards pass on the corrupted prompt with respect to the patching metric, caching all gradients with respect to the activations, and then, for a given activation, computing

L((clean_act − corrupted_act) × corrupted_grad_act).

A.3 Additional Figures

Figure 1, Figure 3, Figure 4, Figure 5, Figure 6, and Figure 7 show detailed heatmaps illustrating the contributions of different model components (attention heads or layer outputs) to the respective diagnostic tasks. These visualizations help identify which parts of the model are most influential in generating specific behaviors. For example, Figure 3 shows layer output contributions for an emotional support task, where color intensity indicates attribution scores influencing an emotionally supportive response; such attribution patterns guide steering vector extraction. It also shows that the last few token positions carry the strongest attributions. Similarly, Figure 1 displays layer-wise attention head contributions to the empathy task, where strong positive (blue) or negative (red) attribution highlights heads significantly impacting empathetic response generation.

Figure 3: Layer output contributions from attribution patching for an emotional support diagnostic task. The x-axis shows token position; the y-axis shows model components.
Color intensity reflects attribution scores (blue: positive, red: negative) influencing the generation of an emotionally supportive response. Attribution at later token positions across several layers guided our steering vector extraction and application strategy.

Figure 4: Layer output contributions from attribution patching for an emotional disclosure diagnostic task. The x-axis shows token position; the y-axis shows model components. Color intensity reflects attribution scores (blue: positive, red: negative) influencing the generation of an emotionally disclosing response. Attribution at later token positions across several layers guided our steering vector extraction and application strategy.

Figure 5: Layer-wise attention head contributions to the 'realism' diagnostic task.

Figure 6: Layer-wise attention head contributions to the 'counteroffer' diagnostic task.

Figure 7: Layer-wise attention head contributions to the 'acceptance' diagnostic task.

B Steering Vector Derivation and Application Details

B.1 Seed Pairs

Steering vectors were derived from the difference in activations generated by processing sets of contrastive text pairs: positive and negative examples of the target emotional traits. Initially, two seed contrastive pairs (one for emotional support and one for emotional disclosure), provided below, were used. These seed pairs were then expanded by using a GPT-4 chat completion model to generate at least 10 further examples for each category, forming the full set of contrastive texts. All texts in these sets were
subsequently processed to ensure equal token lengths before activation differences were computed. The pairs are listed in Table 3.

B.2 Scaling Coefficient

The scaling coefficient for steering vector addition/subtraction was set to 2.0 and 2.5. This value was determined empirically by sweeping values from 0.5 to 4.0 in increments of 0.5 and selecting the value that produced the most pronounced desired effects on a small validation set without significantly degrading fluency, aligning with common practices in activation engineering (Turner et al., 2024; Panickssery et al., 2024).

B.3 Steering Vector Construction and Application

B.3.1 Target Layer Identification

To determine where steering should be applied within the model, attribution patching experiments were conducted. These experiments revealed the layers most responsible for the desired traits. Specifically, Layer 2 was found to be the key intervention point for emotional support, while Layer 3 was identified for emotional disclosure.

B.3.2 Activation Extraction from Contrastive Sets

For each emotional dimension (e.g., emotional support), two sets of texts, positive and negative, were used. Each set included a seed pair along with GPT-4 generated examples, all normalized to the same token length.

The model was run over each text, and hidden state activations were extracted at the identified target layer (e.g., Layer 2 for support). These activations were collected across all token positions in the text. The process was identical for both the positive and negative sets.

To reduce variance across samples, the extracted activations were averaged within each set. This yielded a single mean positive activation tensor and a mean negative activation tensor for each emotional dimension and layer of interest.
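The equal-length preprocessing and within-set averaging described above can be sketched as follows. This is an illustrative NumPy sketch on toy activation arrays standing in for layer hidden states; the function names and the truncation-to-shortest choice are our own assumptions:

```python
import numpy as np

def steering_tensor(pos_set, neg_set):
    """Mean positive minus mean negative activation tensor, after normalizing
    all texts in both sets to a common token length (here: truncating to the
    shortest text). Each element of a set has shape (T_i, d)."""
    t_min = min(a.shape[0] for a in pos_set + neg_set)
    mu_pos = np.stack([a[:t_min] for a in pos_set]).mean(axis=0)  # (T_min, d)
    mu_neg = np.stack([a[:t_min] for a in neg_set]).mean(axis=0)  # (T_min, d)
    return mu_pos - mu_neg

# Hypothetical per-text hidden states at the target layer (variable lengths, d = 6).
rng = np.random.default_rng(1)
positive_set = [rng.normal(size=(t, 6)) for t in (10, 12, 11)]
negative_set = [rng.normal(size=(t, 6)) for t in (9, 12, 10)]

v_steer = steering_tensor(positive_set, negative_set)  # shape (9, 6)
```

Averaging within each set before differencing reduces per-sample noise, so the resulting tensor captures the direction separating the two sets rather than idiosyncrasies of any single text.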
B.3.3 Steering Vector Calculation

The steering vector (V_steer) was computed as the difference between the mean positive and negative activation tensors:

V_steer = Mean Positive Activation Tensor − Mean Negative Activation Tensor  (4)

This vector captures the directional distinction in the activation space between positively and negatively expressed emotional traits.

To influence model output, the computed V_steer was applied during inference. Specifically, for any given input prompt, the hidden activations at the target layer were modified by adding a scaled version of the steering vector to the final 15 token positions. These positions were identified as most influential via attribution patching.

C Comprehensive Emotional Steering Results

This section provides comprehensive quantitative results comparing steered and unsteered responses across both the Emotional Support and Emotional Disclosure settings.

C.1 Evaluation Metrics and Analysis Details

Steered and unsteered model outputs were evaluated per utterance using multiple automated metrics to assess linguistic and emotional characteristics. Quantitative analysis included:

• Sentiment Analysis: Overall sentiment (positive or negative) of each response was determined using a BERT-based model, specifically distilbert/distilbert-base-uncased-finetuned-sst-2-english, fine-tuned on the SST-2 dataset.

• Emotion and Lexical Category Counts:

– NRC EmoLex: Specific emotion profiles were generated using the NRC Emotion Lexicon (EmoLex). This involved

Task | Polarity | Seed Text
Emotional Support | Positive (Supportive) | "That sounds really tough. I'm so sorry
you're dealing with this. I'm here to listen if you want to talk more."
Emotional Support | Negative (Neutral/Dismissive) | "Okay, that event occurred. Let us look at it rationally. What is the logical next action you should consider taking now?"
Emotional Disclosure | Positive (Disclosing Emotion) | "To be honest, I've been feeling quite stressed and uncertain about things lately. It's been weighing on me."
Emotional Disclosure | Negative (Factual/Impersonal) | "I generally prefer to keep my personal feelings to myself. As for work, everything is proceeding according to plan."

Table 3: Seed pairs used for steering emotional tone in support and disclosure tasks. Each task includes a positive (emotionally rich) and negative (neutral or impersonal) seed variant.

extracting raw counts for emotions such as joy, trust, anger, fear, and sadness, as well as broader positive and negative sentiment word categories. We also looked at the top emotions for each response. For comparative analysis, raw emotion counts were normalized by the total word count of the response.

– Distress Metric: A distress metric was computed based on counts of specific distress-related keywords (e.g., "anxious," "stressed," "overwhelmed") and a sum of the EmoLex scores for 'fear,' 'sadness,' and 'anger.'

– Empath Lexicon: Psycho-social themes, particularly those relevant to empathy, were quantified using the Empath lexicon. Raw counts for categories such as 'help,' 'communication,' 'speaking,' 'listen,' 'strength,' 'healing,' and 'nervousness' were extracted and then normalized by word count for analysis.

• Keyword Matching for Specific Cues: Beyond lexicon-based tools, keyword counting identified predefined terms associated with support (e.g., "sorry to hear," "listen," "support") and self-disclosure (e.g., "I feel," "personally," "my thoughts are").
• Linguistic Feature Metrics:
  – Politeness Cues (ConvoKit): Specific politeness strategies were identified using ConvoKit's PolitenessStrategies analyzer, which leverages spaCy for text parsing. The analysis focused on the presence and frequency of features such as feature_politeness_==Apologizing==, feature_politeness_==Gratitude==, feature_politeness_==1st_person== (as used in politeness contexts, for example "I feel"), feature_politeness_==Hedges==, feature_politeness_==HASPOSITIVE==, and feature_politeness_==HASNEGATIVE==.
  – First-Person Pronoun Ratio: The usage of first-person pronouns (e.g., "I," "me," "my," "mine," "myself") was quantified as the ratio of the count of these pronouns to the total number of words in the response, with spaCy used for tokenization and lemmatization to accurately identify pronouns.

To quantify the effects of interventions within the attribution patching framework, logit difference scores were utilized.

Statistical significance of the differences between steered and unsteered model outputs was assessed rigorously. For continuous metrics, such as normalized EmoLex scores, Empath category scores, and the first-person pronoun ratio, Welch's t-tests (independent-samples t-tests not assuming equal variance) were applied. For categorical data, including the distribution of sentiment labels (Positive/Negative) and the presence or absence of specific ConvoKit politeness features, χ2 (chi-square) tests of independence were employed. To control for the increased likelihood of Type I errors due to multiple comparisons across the various metrics, the Benjamini-Hochberg False Discovery Rate (FDR) correction
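The testing pipeline described here (Welch's t-test per metric, then Benjamini-Hochberg adjustment across metrics) can be sketched as follows; the data are synthetic and the FDR helper is a generic implementation, not the authors' code:

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR correction: returns adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)                        # ascending p-values
    ranked = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotonicity from the largest rank downwards
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
steered = rng.normal(0.25, 0.05, 200)    # e.g. normalized EmoLex "positive" scores
unsteered = rng.normal(0.23, 0.05, 200)
t_stat, p_val = stats.ttest_ind(steered, unsteered, equal_var=False)  # Welch's t-test
adjusted = benjamini_hochberg([p_val, 0.04, 0.30])  # correct across all metrics
```

The `equal_var=False` flag is what makes `ttest_ind` a Welch test; the same adjusted p-values are available from `statsmodels.stats.multitest.multipletests(..., method="fdr_bh")`.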
procedure was applied to all p-values. An adjusted p-value below the significance threshold (e.g., 0.05) was considered statistically significant.

Table 4 shows EmoLex category means. Table 5 shows first-person pronoun usage. Table 6 shows chi-square values for politeness cues. Table 7 shows empathy-related keyword expression.

Emotion       Context     Steered  Unsteered  T-stat  p-val  Sig
Joy           Support     0.135    0.125      2.159   0.031  *
Trust         Support     0.129    0.120      1.923   0.055
Anger         Support     0.050    0.047      0.898   0.369
Fear          Support     0.063    0.057      2.055   0.040  *
Sadness       Support     0.074    0.069      1.684   0.092
Anticipation  Support     0.184    0.173      1.845   0.065
Positive      Support     0.249    0.230      2.566   0.010  **
Negative      Support     0.121    0.116      1.188   0.235
Joy           Disclosure  0.139    0.122      3.577   <.001  ***
Trust         Disclosure  0.134    0.119      3.089   0.002  **
Anger         Disclosure  0.052    0.047      1.898   0.058
Fear          Disclosure  0.063    0.057      2.047   0.041  *
Sadness       Disclosure  0.074    0.067      2.217   0.027  *
Anticipation  Disclosure  0.193    0.172      3.558   <.001  ***
Positive      Disclosure  0.257    0.227      3.986   <.001  ***
Negative      Disclosure  0.125    0.114      2.351   0.019  *

Table 4: EmoLex emotion means and significance values for steered vs. unsteered responses.

Context     Steered  Unsteered  T-stat  p-value  Sig
Support     0.501    0.452      3.855   <.001    ***
Disclosure  0.514    0.455      4.617   <.001    ***

Table 5: First-person pronoun usage (proportion) in steered vs. unsteered responses.

Feature      Context     Chi-square  p-val
Gratitude    Support     11.264      0.001
Apologizing  Support     173.501     <.001
1st Person   Support     0.092       0.761
Hedges       Support     61.188      <.001
HasPositive  Support     27.498      <.001
HasNegative  Support     57.380      <.001
Gratitude    Disclosure  1.307       0.253
Apologizing  Disclosure  32.708      <.001
1st Person   Disclosure  3.093       0.079
Hedges       Disclosure  3.541       0.060
HasPositive  Disclosure  12.813      <.001
HasNegative  Disclosure  5.718       0.017

Table 6: Chi-square values for politeness cues (presence/absence) in both support and disclosure contexts.

Keyword        Context     Steered  Unsteered  T-stat  p-val  Sig
Help           Support     0.009    0.008      1.259   0.208
Communication  Support     0.483    0.458      2.559   0.010  **
Speaking       Support     0.472    0.452      2.126   0.034  *
Listen         Support     0.319    0.303      2.554   0.011  *
Strength       Support     0.012    0.010      1.265   0.206
Healing        Support     0.008    0.007      0.786   0.432
Nervousness    Support     0.010    0.009      1.031   0.303
Help           Disclosure  0.010    0.009      1.146   0.252
Communication  Disclosure  0.522    0.464      5.696   <.001  ***
Speaking       Disclosure  0.510    0.459      5.289   <.001  ***
Listen         Disclosure  0.346    0.306      5.795   <.001  ***
Strength       Disclosure  0.013    0.011      1.759   0.079
Healing        Disclosure  0.008    0.007      1.468   0.142
Nervousness    Disclosure  0.010    0.008      1.724   0.085

Table 7: Empathy-related keyword expression (proportion) in steered vs. unsteered responses.

Figure 8 provides an at-a-glance comparison of all metric means.

[Figure 8: Heatmap of key metrics across steered and unsteered responses in both tasks. Color intensity represents mean scores, normalized per metric for visualization if necessary (details in plot generation). Darker shades generally indicate higher scores.]
arXiv:2505.17417v1 [eess.AS] 23 May 2025

Speechless: Speech Instruction Training Without Speech for Low Resource Languages

Alan Dao (Gia Tuan Dao)1, Dinh Bach Vu1, Huy Hoang Ha1, Tuan Le Duc Anh1, Shreyas Gopal2, Yue Heng Yeo2, Warren Keng Hoong Low1, Eng Siong Chng2, Jia Qi Yip1
1Menlo Research
2CCDS, Nanyang Technological University, Singapore
alan@menlo.ai

Abstract

The rapid growth of voice assistants powered by large language models (LLMs) has highlighted a need for speech instruction data to train these systems. Despite the abundance of speech recognition data, there is a notable scarcity of speech instruction data, which is essential for fine-tuning models to understand and execute spoken commands. Generating high-quality synthetic speech requires a good text-to-speech (TTS) model, which may not be available for low-resource languages. Our novel approach addresses this challenge by halting synthesis at the semantic representation level, bypassing the need for TTS. We achieve this by aligning synthetic semantic representations with the pre-trained Whisper encoder, enabling an LLM to be fine-tuned on text instructions while maintaining the ability to understand spoken instructions during inference. This simplified training process is a promising approach to building voice assistants for low-resource languages.

Index Terms: speech recognition, human-computer interaction, low-resource languages, speech language models

1. Introduction

Voice assistants have become an integral part of modern technology, providing users with the ability to interact with devices through natural language [1]. These voice assistants can be achieved through a cascade of automatic speech recognition (ASR), which transcribes instructions that are then processed by an instruction-tuned large language model (LLM) [2]. However, in such a cascaded implementation, the latency introduced by the ASR model can negatively impact the user experience.
Thus, early-fusion models [3], where the language model is fine-tuned to accept speech representations instead of ASR transcripts, have become an increasingly popular solution. However, this fine-tuning process requires a significant amount of speech instructions and their corresponding response data. For example, LLaMA-Omni [4] was trained on the InstructS2S-200k [4] dataset, which consists of 200k speech instructions and their corresponding speech responses. These speech instructions are linguistically different from ASR data, as they consist of questions and answers, whereas ASR transcripts consist primarily of statements. As such, these types of spoken instructions are scarce compared to ASR data even for common languages, and the problem is more acute for low-resource languages like Vietnamese.

The most cost-effective method for tackling the lack of spoken instruction data is to generate synthetic data. Researchers most commonly make use of text-to-speech (TTS) systems to generate speech [5]. Given a dataset of questions and answers, the questions in the dataset can be sent to a TTS model to generate spoken questions [6]. In some cases, the texts of the questions are also generated by a prior LLM [7, 8, 9]. While this approach has been shown by [5] to be broadly effective, the method relies on the quality of the TTS models used and requires a diversity of speaker voices to be simulated for good generalization. While high-resource languages like English can benefit from high-quality TTS models, TTS models for low-resource languages such as Vietnamese can often lag behind in performance [10, 11]. Without TTS, some methods [12, 13] have relied on text-audio alignment using ASR data to avoid the need for spoken instruction fine-tuning [14, 15, 16].

Figure 1: Overview of the training process using Speechless. In Stage 1, we train a quantizer using ASR data. In Stage 2, we train Speechless, which maps text and duration tokens to audio tokens. In Stage 3, we fine-tune an LLM using audio tokens generated by Speechless. At inference time the LLM is able to accept speech input through the Whisper Encoder, even though no speech data was used to fine-tune the LLM.

In this work, we propose Speechless, a novel method for generating synthetic training data for early-fusion speech language models without relying on traditional TTS systems. As illustrated in Figure 1, Speechless generates semantic speech tokens using a quantized Whisper encoder [17], bypassing the need for waveform generation entirely. By leveraging the Whisper encoder's inherent noise robustness and speaker invariance at inference time, our approach avoids the need for speaker diversity. Meanwhile, semantic diversity can be efficiently achieved using LLMs and readily available text corpora. Furthermore, since the speech encoder remains frozen during training, fine-tuning is performed exclusively at the token level, significantly reducing computational costs compared to traditional speech-based fine-tuning pipelines.

Our contributions are threefold: First, we propose Speechless, a novel paradigm that generates instruction training data through semantic token alignment with Whisper's encoder, eliminating dependency on TTS systems.
Second, we demonstrate this method's effectiveness for Vietnamese, a language with limited TTS resources, achieving competitive ASR performance without speech-based fine-tuning. Third, we release the first pre-tokenized Vietnamese instruction dataset enabling speech-language model development, addressing a critical gap in low-resource NLP.

2. Methodology

Our method consists of three main stages, as illustrated in Figure 1. First, we train a residual vector quantizer (RVQ) to encode speech into discrete semantic tokens that align with Whisper's encoder representations. Second, we develop Speechless, a decoder-only language model that learns to generate these semantic tokens directly from text, effectively creating a text-to-semantics model that bypasses the need for audio generation. Finally, we use Speechless to generate synthetic training data for fine-tuning LLMs on speech understanding tasks.

This approach allows us to create high-quality training data without relying on traditional text-to-speech systems, making it particularly valuable for low-resource languages where such systems may be limited or unavailable. Our method builds on earlier works [18, 15, 16, 14] that have aimed to align speech and text modalities, but is aimed towards low-resource languages and tries to leverage the large-scale pretraining of Whisper [17]. The training code for all three stages is released on GitHub¹.

2.1. Stage 1: Training a Quantizer

Stage 1 of Speechless focuses on training a quantizer that aligns the semantic and
text representations, making the downstream task of training Speechless easier. At the core of this stage is a residual vector quantizer (RVQ), which transforms the high-dimensional speech representations from Whisper's encoder into discrete tokens while preserving semantic meaning. RVQ achieves this through an iterative refinement process: first, it creates a coarse representation of the input using an initial codebook, then progressively refines this representation by quantizing the residual errors through subsequent codebooks. This multi-stage approach allows the quantizer to capture both broad semantic features and subtle nuances in speech.

To adapt the quantizer for low-resource languages, we expanded the quantizer's capacity by quadrupling the codebook size from 512 to 2048 entries. Our initial attempt to initialize the expanded codebook using Kaiming initialization led to poor codebook utilization. To overcome this, we adopted a different strategy: duplicating the original codebook weights and applying Kaiming-initialized random noise to these duplicates.

2.2. Stage 2: Training Speechless

Speechless is a 1-billion-parameter decoder-only language model designed to generate semantic representations of audio as discrete tokens. By treating semantic tokens as a novel language, Speechless functions similarly to a machine translation model. It translates text-based instructions into a sequence of semantic tokens that is close to what would be generated by the Whisper Encoder if the same text had been spoken, recorded, and played into the Whisper Encoder. This close alignment between the Speechless output and the Whisper Encoder output allows us to train using only text instructions, but have the model understand speech at inference.

¹https://github.com/menloresearch/ichigo/tree/legacy/main
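The iterative refinement that RVQ performs can be sketched with a toy encode/decode pair; the shapes and fixed random codebooks below are illustrative assumptions, and a real RVQ would also learn the codebooks during training:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each stage quantizes the residual
    left over from the previous stage. Returns one code index per stage."""
    residual = x.copy()
    codes = []
    for cb in codebooks:                           # cb: (codebook_size, dim)
        d = np.linalg.norm(cb - residual, axis=1)  # distance to every entry
        idx = int(np.argmin(d))                    # nearest entry
        codes.append(idx)
        residual = residual - cb[idx]              # pass the error onward
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruction is the sum of the selected entries across stages."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

rng = np.random.default_rng(0)
dim, n_entries, n_stages = 8, 16, 4
codebooks = [rng.normal(size=(n_entries, dim)) for _ in range(n_stages)]
x = rng.normal(size=dim)
codes = rvq_encode(x, codebooks)       # one index per codebook stage
x_hat = rvq_decode(codes, codebooks)   # coarse-to-fine reconstruction
```

The first codebook gives the coarse approximation and each later codebook refines the leftover error, which is exactly the coarse-to-fine behavior described above.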
A key challenge in text-to-speech conversion is managing the mismatch between text and speech tokens, since a given text sequence typically corresponds to significantly more speech tokens, and the number of speech tokens needed can vary. Speechless addresses this challenge through its design as an auto-regressive decoder model that predicts tokens one at a time. This auto-regressive approach allows the model to flexibly generate the appropriate number of speech tokens for any input text, regardless of length. Additionally, we provisioned the model with a billion parameters to give it a sufficiently large vocabulary for this task. While this is a large number of parameters for a speech model, the model is not used during inference and thus does not impact inference cost.

Speechless accepts text instructions from standard LLM instruction datasets and outputs semantic tokens through the Ichigo tokenizer, building upon Ichigo's [1] successful use of semantic tokens for instruction-response pairs. To train Speechless, instead of QA pairs, we used pairs of transcription text and semantic tokens. This approach mirrors the dynamics of machine translation models, where the model learns to map structured inputs (text instructions) to meaningful outputs (semantic tokens) through extensive training on paired data. The semantic tokenization process abstracts away the acoustic details, focusing instead on the underlying meaning, which allows for robust and flexible applications across diverse languages and contexts.

To train Speechless, we utilize speech-text pairs from an ASR dataset. The raw transcripts serve as input to Speechless, appended after a special
task token <|texttosemantic|>. Speech is tokenized using the quantizer from Stage 1 and serves as the target output. To reduce computational costs, we compress the target sequence length by creating a <|duration|> token to represent the repetition inside each group of repeated sound tokens.

2.3. Stage 3: Training the LLM

After training Speechless, we use it to generate synthetic data that can be used to fine-tune a pre-trained LLM. For text data, we combined multiple instruction datasets: Ichigo [1] for English content, and the Sailor [19] and Viettel x NVIDIA [20] datasets for Vietnamese. The data preparation process involved several filtering steps to ensure quality: we removed samples with excessive prompt lengths, filtered out non-audible content (such as mathematical equations and excessive punctuation), and curated responses by refining the answers for the Viettel dataset using the Qwen2.5-32B model. Finally, we tokenize the user turn into discrete speech tokens using Speechless.

Subsequently, after the synthetic semantic tokens have been generated, we can apply a standard speech instruction tuning pipeline [21] with minimal modifications. To ensure that training with Speechless was successful, we added new sound and duration tokens to the LLaMA tokenizer and resized the embedding and final linear head of the LLaMA model so that the model could train with the new tokens. Thus, by adding Speechless into any instruction tuning pipeline, we can train the model with only text instructions, but have the model understand speech at inference.

Table 1: All results are in percentages. Comparative analysis of model performance for general, noisy, and multilingual ASR using the LibriSpeech (LS), VoiceBank+DEMAND (VBD), and CommonVoice (CV) datasets respectively. All results are derived from processed labels and predictions. Both labels and predictions are lower-cased and all special characters are removed.
Model                  Config               LS test-clean   VBD clean    VBD noisy    CV En        CV Vi
                                            CER    WER      CER   WER    CER   WER    CER   WER    CER    WER
Whisper (M)            Zero-shot (greedy)   1.21   2.85     1.45  4.99   2.13  6.17   4.21  5.98   15.00  25.43
                       Zero-shot (beam-10)  0.92   2.51     1.33  4.80   1.94  5.91   3.21  5.22   13.72  24.18
Whisper (M) Quantized  Greedy Inference     3.45   6.74     3.27  7.12   9.32  15.76  4.33  7.27   28.11  36.53
                       Beam-Search (n=10)   2.42   5.52     2.89  6.76   6.63  12.34  3.24  7.01   24.16  34.84
Speechless             Greedy Inference     2.47   4.65     1.01  2.32   -     -      3.54  8.03   2.69   5.90
                       Beam-Search (n=10)   2.08   4.21     1.52  3.92   -     -      2.92  6.56   3.77   7.08

3. Experiments

3.1. Datasets

For Stage 1, we utilized two automatic speech recognition (ASR) datasets: viVoice (Vietnamese) and LibriTTS-R [22] (English). The viVoice dataset consists of 868k utterances for training, 10k for validation, and 10k for testing, while the LibriTTS-R dataset contains 112k training samples, 5.6k validation samples, and 4.6k test samples. Since the training data primarily consisted of clean speech, the resulting model exhibited increased sensitivity to noise.

For training Stage 2, we took 880k samples from viVoice [23] and 112k samples from LibriTTS-R Clean [24] and converted the audio from these datasets into semantic tokens using the quantizer from Stage 1. Then, we use transcriptions of the corresponding quantized audio to
create text-to-semantic training pairs.

For Stage 3, we took 880k samples from viVoice [23], 112k samples from LibriTTS-R Clean [24], and 2.4M samples from MLS Eng 10k [25] as our pretraining data, and converted the transcripts from these datasets into semantic tokens using the quantizer and the Speechless model. We then used Ichigo's [1] instruction data to train the model on instructions, using Speechless² to convert the audio into semantic tokens. We release this synthetic dataset on Huggingface³.

To create the synthetic data for Stage 3, we created an efficient pipeline using vLLM [26] to batch-inference the Speechless model, which has only 1B parameters (taking up to 4GB of VRAM). We used Ray [27] for distributed processing, running vLLM [26] instances across multiple GPUs.

3.2. Base Models Used

For the Whisper models used in Stages 1 and 2 of the training, we start with a Whisper Medium checkpoint⁴. All input audio was padded to 30 seconds where applicable, to be compatible with the default implementation of Whisper. For Stage 3, we tested downstream fine-tuning with Speechless on both base and instruct versions of LLaMA 3.2 1B and LLaMA 3.2 3B. However, for most of our experimentation, we chose the LLaMA 3.2 1B Base model. Our preliminary experiments showed that the 3B model performed similarly to the 1B model, so we chose to use LLaMA 3.2 1B for resource efficiency.

²https://huggingface.co/Menlo/Speechless-llama3.2-v0.1
³https://huggingface.co/datasets/Menlo/Ichigo-instruction-tokenized-v0.2
⁴https://huggingface.co/Menlo/Ichigo-whisper-v0.1

3.3. Training Cost

The training process for our model was divided into three distinct stages, each with its own computational requirements. In Stage 1, the training was conducted over two phases. Phase 1 required 75 hours to complete 50 epochs, while Phase 2 took 29 hours for 20 epochs. This stage utilized 8 A6000 GPUs, with a batch size of 42 per GPU. The learning rate was set at 1e-3, using the AdamW optimizer.
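The learning-rate schedule used throughout training (a linear warm-up into a cosine decay, e.g. 500 warm-up steps at a 1e-3 base rate in Stage 1) can be sketched as follows; decaying all the way to zero is an assumption, since the paper does not state a floor:

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-3, warmup_steps=500):
    """Linear warm-up for `warmup_steps`, then cosine decay to zero
    over the remaining steps (the zero floor is an assumption)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps               # linear ramp
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The rate rises linearly to `base_lr` at the end of warm-up and then follows half a cosine down; Stages 2 and 3 use the same shape with 100 warm-up steps and different base rates.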
A linear warm-up was applied for the first 500 steps, followed by a cosine decay schedule, and a weight decay of 0.001 was implemented. Stage 2 of the training process was completed in 60 hours, using 6 A6000 GPUs. The batch size was increased to 48 per GPU, and the learning rate was adjusted to 1e-4. Similar to Stage 1, a linear warm-up was used for the initial 100 steps, followed by a cosine decay schedule, with a weight decay of 0.01.

Finally, Stage 3 was divided into two parts: pretraining and supervised fine-tuning. The pretraining phase took 240 hours on A6000 GPUs, with a batch size of 42 per GPU and a learning rate of 2e-4. The same scheduling strategy as the previous stages was applied. The supervised fine-tuning phase required 40 hours on H100 GPUs, with a batch size of 32 per GPU and a learning rate of 3e-4. This stage also used a linear warm-up for 100 steps, followed by a cosine decay schedule, and a weight decay of 0.01. Overall, the training process was resource-intensive, reflecting the complexity and scale of the model development.

4. Results

4.1. ASR and Speechless Comparisons

To evaluate the performance of
the Speechless model alone, we make use of ASR test sets. To do this evaluation, we compare semantic tokens generated by the Whisper Encoder from speech with the semantic tokens generated by Speechless from text. In both cases the semantic tokens are decoded by the same Whisper Decoder model. Ideally, the WER for Speechless and the Whisper Encoder should be similar for clean datasets, and the WER for Speechless should be better for noisy datasets.

To evaluate the general ASR capability of the quantized Whisper model, we used the test-clean split of the LibriSpeech dataset [29] and the clean test set of the VBDemand dataset [30]. We also evaluated the models using the test-other split of LibriSpeech and the noisy test set of VBDemand. The test-clean split comprises 2,620 utterances, totaling approximately 5.4 hours of clean read speech, while the test-other split contains 2,939 utterances, corresponding to 5.1 hours of read speech. The VBDemand test set includes 824 utterances, with the noisy subset incorporating background noise from eight DEMAND [31] noise classes at varying signal-to-noise ratios. The transcripts from these datasets were used to evaluate the semantic token quality produced by the Speechless model after de-quantization and decoding via the Whisper decoder.

Table 2: VoiceBench [28] results. These are results based on spoken questions and text answers. Experiments other than ours were performed by the VoiceBench authors. SD-QA and CommonEval have a human audio source, while the rest use Google TTS.

Model Name                              AlpacaEval  CommonEval  SD-QA  OpenBookQA  AdvBench
Baichuan-Omni-1.5                       4.50        4.05        43.40  74.51       97.31
GLM-4-Voice                             3.97        3.42        36.98  53.41       88.08
Qwen2-Audio                             3.74        3.43        35.71  49.45       96.73
VITA-1.0                                3.38        2.15        27.94  29.01       26.73
Moshi                                   2.01        1.60        15.64  25.93       44.23
Whisper-v3-turbo+LLaMA-3.1-8B           4.55        4.02        58.23  72.09       98.46
LLaMA-Omni                              3.70        3.46        39.69  27.47       11.35
Speechless-llama3.1-8B-instruct (Ours)  3.86        2.51        35.00  26.15       62.88
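The WER figures in Table 1 follow the standard edit-distance definition; a minimal sketch, applying only lower-casing as normalization (the paper's pipeline also strips special characters):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions)
    divided by the number of reference words, via Levenshtein distance."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # delete all reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)
```

CER is the same computation over characters instead of words; libraries such as jiwer implement both with configurable normalization.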
For multilingual ASR evaluation, we utilized the Vietnamese (VI) and English (EN) subsets from Mozilla Common Voice 17 [32]. Common Voice is a crowd-sourced dataset containing recordings with diverse accents, dialects, and recording conditions, making it well-suited for assessing multilingual performance. We selected the official test splits, which include 4,325 utterances in Vietnamese and 6,125 in English, amounting to approximately 8.3 hours and 12.6 hours of speech, respectively. The transcripts from both subsets were used to evaluate the semantic token quality generated by the Speechless model following de-quantization and decoding through the Whisper decoder. This evaluation framework enabled us to measure the model's performance across different languages and linguistic complexities.

In our efforts to establish a shared semantic language between Whisper and the Speechless LM, we first show that the Speechless model is able to generate semantic tokens that, when decoded by the Whisper decoder, display a very low WER across multiple domains of English and Vietnamese text data in Table 1. This shows that Speechless is able to map raw text information to clean speech in the latent space. This is also clear when we see that with added noise (VBD noisy), the Whisper encoder starts to generate tokens that show poorer
WER in comparison.

We can also observe that once quantized, the Whisper encoder's performance declines in both noisy and multilingual settings. We posit that this is primarily due to information being lost during the residual vector quantization operation. As the Whisper decoder module is not trained after the quantization layers are added, it is not privy to the change in latent speech embeddings. Additionally, only clean English and Vietnamese speech was used in Stage 1 training when generating the codebook, hence the codes may not have been exposed to noisy training data.

4.2. LLM benchmarking

To evaluate the performance of our model utilizing Speechless synthetic data for speech instruction tuning, we use the VoiceBench [28] subsets of AlpacaEval [33], CommonEval [32], SD-QA [34], OpenBookQA [35], and AdvBench [36], where text-based QA pairs have been converted to spoken-friendly instructions and read through a TTS model. As the results reported in Table 2 show, Speechless achieves comparable performance to LLaMA-Omni, which also uses LLaMA-3.1-8B but was trained on spoken voice instructions. However, the performance of Whisper-v3-turbo+LLaMA-3.1-8B [28], which is a cascaded model, is significantly better. This is likely because the LLM of a cascaded model only has to understand a single modality of data, text, which allows it to fully utilize its pre-training. Our Speechless fine-tuned model also outperforms Moshi [37], while achieving comparable performance to VITA-1.0. Our model underperforms newer models such as Baichuan-Omni-1.5, GLM-4-Voice, and Qwen2-Audio, which use different text LLMs as a starting point.

Table 3: MMLU and VMLU benchmarks.
These are text-based benchmarks for comparing the performance degradation due to speech instruction tuning.

Model Name                       MMLU   VMLU
meta-llama3.1-8B-instruct        69.40  50.69
Speechless-llama3.1-8B-instruct  62.27  43.22

Next, in Table 3 we report the performance of the model on MMLU and VMLU, which are text question-and-answer benchmarks. We find that our instruction-tuned model exhibits some performance degradation compared to the base model, which is expected as the model now has to be able to accept both speech and text tokens using the same number of parameters. Similar performance degradation after speech instruction tuning has also been previously reported in [38] and attributed to catastrophic forgetting.

5. Conclusion

This paper introduced Speechless, a novel method for generating synthetic training data for early-fusion speech language models without traditional text-to-speech systems. By leveraging a quantized Whisper encoder, Speechless generates semantic speech tokens, effectively addressing challenges in low-resource languages. Our experiments demonstrated competitive performance across various ASR settings and enabled effective speech instruction tuning of LLMs.

However, our approach has limitations. The performance degradation observed in text-based benchmarks suggests potential issues with catastrophic forgetting during speech instruction tuning. Additionally, while Speechless shows promise in clean and controlled environments, its robustness in highly noisy or diverse linguistic contexts requires further exploration. Nevertheless, the Speechless methods described in this paper can in principle be applied to noisy data. Thus, our future work will focus on enhancing noise robustness and expanding applicability to a broader range
of languages and dialects.

6. References

[1] A. Dao, D. B. Vu, and H. H. Ha, "Ichigo: Mixed-modal early-fusion realtime voice assistant," arXiv preprint arXiv:2410.15316, 2024.
[2] S. Ji, Y. Chen, M. Fang, J. Zuo, J. Lu, H. Wang, Z. Jiang, L. Zhou, S. Liu, X. Cheng et al., "Wavchat: A survey of spoken dialogue models," arXiv preprint arXiv:2411.13577, 2024.
[3] W. Cui, D. Yu, X. Jiao, Z. Meng, G. Zhang, Q. Wang, Y. Guo, and I. King, "Recent advances in speech language models: A survey," arXiv preprint arXiv:2410.03751, 2024.
[4] Q. Fang, S. Guo, Y. Zhou, Z. Ma, S. Zhang, and Y. Feng, "Llama-omni: Seamless speech interaction with large language models," arXiv preprint arXiv:2409.06666, 2024.
[5] V. Noroozi, Z. Chen, S. Majumdar, S. Huang, J. Balam, and B. Ginsburg, "Instruction data generation and unsupervised adaptation for speech language models," in Interspeech 2024, 2024, pp. 4049–4053.
[6] N. Majumder, C.-Y. Hung, D. Ghosal, W.-N. Hsu, R. Mihalcea, and S. Poria, "Tango 2: Aligning diffusion-based text-to-audio generations through direct preference optimization," in Proceedings of the 32nd ACM International Conference on Multimedia, 2024, pp. 564–572.
[7] C. Tang, W. Yu, G. Sun, X. Chen, T. Tan, W. Li, L. Lu, Z. Ma, and C. Zhang, "SALMONN: Towards generic hearing abilities for large language models," in The Twelfth International Conference on Learning Representations, 2024.
[8] J. Pan, J. Wu, Y. Gaur, S. Sivasankaran, Z. Chen, S. Liu, and J. Li, "Cosmic: Data efficient instruction-tuning for speech in-context learning," CoRR, vol. abs/2311.02248, 2023.
[9] Z. Zhao, Y. Jiang, H. Liu, Y. Wang, and Y. Wang, "Librisqa: Pioneering free-form and open-ended spoken question answering with a novel dataset and framework," arXiv preprint arXiv:2308.10390, 2023.
[10] T. N. D. Tran, T. C. Chu, V. Hoang, T. H. Bui, and H. Q. Truong, "An efficient and high fidelity vietnamese streaming end-to-end speech synthesis," in Interspeech 2022, 2022, pp.
466–470.
[11] F. Lux, J. Koch, and N. T. Vu, "Low-resource multilingual and zero-shot multispeaker TTS," in Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Y. He, H. Ji, S. Li, Y. Liu, and C.-H. Chang, Eds. Online only: Association for Computational Linguistics, Nov. 2022, pp. 741–751.
[12] P. Manakul, G. Sun, W. Sirichotedumrong, K. Tharnpipitchai, and K. Pipatanakul, "Enhancing low-resource language and instruction following capabilities of audio language models," arXiv preprint arXiv:2409.10999, 2024.
[13] Y.-A. Chung, W.-H. Weng, S. Tong, and J. Glass, "Unsupervised cross-modal alignment of speech and text embedding spaces," Advances in Neural Information Processing Systems, vol. 31, 2018.
[14] W. Held, E. Li, M. Ryan, W. Shi, Y. Zhang, and D. Yang, "Distilling an end-to-end voice assistant without instruction training data," arXiv preprint arXiv:2410.02678, 2024.
[15] M. Huzaifah and I. Kukanov, "An analysis of
semantically-aligned speech-text embeddings,” in 2022 IEEE Spoken Language Tech- nology Workshop (SLT) . IEEE, 2023, pp. 747–754. [16] N. Gaur, R. Agrawal, G. Wang, P. Haghani, A. Rosenberg, and B. Ramabhadran, “Astra: Aligning speech and text representa- tions for asr without sampling,” in Interspeech 2024 , 2024, pp. 3904–3908. [17] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, “Robust speech recognition via large-scale weak supervision,” in International conference on machine learning . PMLR, 2023, pp. 28 492–28 518. [18] J. Ao, R. Wang, L. Zhou, C. Wang, S. Ren, Y . Wu, S. Liu, T. Ko, Q. Li, Y . Zhang, Z. Wei, Y . Qian, J. Li, and F. Wei, “Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing,” 2021.[19] Sailor2, “Sailor 2 dataset,” https://huggingface.co/datasets/ sailor2/sailor2-sft-stage1, 2024, accessed on February 18, 2025. [20] VTSNLP, “Vtsnlp instruct general dataset,” https://huggingface. co/datasets/VTSNLP/instruct general dataset, 2024, accessed on February 18, 2025. [21] R. Taori, I. Gulrajani, T. Zhang, Y . Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, “Alpaca: A strong, replicable instruction-following model,” March 2023, stanford Center for Research on Foundation Models (CRFM). [22] H. Zen, V . Dang, R. Clark, Y . Zhang, R. J. Weiss, Y . Jia, Z. Chen, and Y . Wu, “Libritts: A corpus derived from librispeech for text- to-speech,” arXiv preprint arXiv:1904.02882 , 2019. [23] Capleaf, “vivoice: Enabling vietnamese multi-speaker speech synthesis,” https://huggingface.co/datasets/capleaf/viV oice, 2024, accessed on February 18, 2025. [24] M. Kawamura, R. Yamamoto, Y . Shirahata, T. Hasumi, and K. Tachibana, “Libritts-p: A corpus with speaking style and speaker identity prompts for text-to-speech and style captioning,” 2024. [25] V . Pratap, Q. Xu, A. Sriram, G. Synnaeve, and R. Collobert, “Mls: A large-scale multilingual dataset for speech research,” ArXiv , vol. 
abs/2012.03411, 2020. [26] W. Kwon, Z. Li, S. Zhuang, Y . Sheng, L. Zheng, C. H. Yu, J. E. Gonzalez, H. Zhang, and I. Stoica, “Efficient memory man- agement for large language model serving with pagedattention,” arXiv preprint arXiv:2309.06180 , 2023. [27] P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang, M. Elibol, Z. Yang, W. Paul, M. I. Jordan, and I. Stoica, “Ray: A distributed framework for emerging ai applications,” arXiv preprint arXiv:1712.05889 , 2018. [28] Y . Chen, X. Yue, C. Zhang, X. Gao, R. T. Tan, and H. Li, “V oicebench: Benchmarking llm-based voice assistants,” arXiv preprint arXiv:2410.17196 , 2024. [29] V . Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Lib- rispeech: an asr corpus based on public domain audio books,” in2015 IEEE international conference on acoustics, speech and signal processing (ICASSP) . IEEE, 2015, pp. 5206–5210. [30] C. Valentini-Botinhao, X. Wang, S. Takaki, and J. Yamagishi, “Speech enhancement for a noise-robust text-to-speech synthe- sis system using deep recurrent neural networks,” in Interspeech 2016 , 2016, pp. 352–356. [31] J. Thiemann, N. Ito, and E. Vincent, “The diverse environments multi-channel acoustic noise database (demand): A database of multichannel environmental noise recordings,” in Proceedings of Meetings on Acoustics , vol. 19, no. 1. AIP Publishing, 2013. [32] R. Ardila, M. | https://arxiv.org/abs/2505.17417v1 |
Branson, K. Davis, M. Kohler, J. Meyer, M. Hen- retty, A. Morais, L. Saunders, F. M. Tyers, and G. Weber, “Com- mon voice: A massively-multilingual speech corpus,” 2020. [33] Y . Dubois, B. Galambosi, P. Liang, and T. B. Hashimoto, “Length- controlled alpacaeval: A simple way to debias automatic evalua- tors,” arXiv preprint arXiv:2404.04475 , 2024. [34] F. Faisal, S. Keshava, M. M. I. Alam, and A. Anastasopoulos, “SD-QA: Spoken dialectal question answering for the real world,” inFindings of the Association for Computational Linguistics: EMNLP 2021 , M.-F. Moens, X. Huang, L. Specia, and S. W.-t. Yih, Eds., pp. 3296–3315. [35] T. Mihaylov, P. Clark, T. Khot, and A. Sabharwal, “Can a suit of armor conduct electricity? a new dataset for open book question answering,” arXiv preprint arXiv:1809.02789 , 2018. [36] A. Zou, Z. Wang, N. Carlini, M. Nasr, J. Z. Kolter, and M. Fredrik- son, “Universal and transferable adversarial attacks on aligned language models,” arXiv preprint arXiv:2307.15043 , 2023. [37] A. D ´efossez, L. Mazar ´e, M. Orsini, A. Royer, P. P ´erez, H. J ´egou, E. Grave, and N. Zeghidour, “Moshi: a speech-text foundation model for real-time dialogue,” arXiv preprint arXiv:2410.00037 , 2024. [38] C. Wang, M. Liao, Z. Huang, J. Lu, J. Wu, Y . Liu, J. Zhang, and C. Zong, “BLSP: Bootstrapping language-speech pre-training via behavior alignment of continuation writing,” 2024. | https://arxiv.org/abs/2505.17417v1 |
arXiv:2505.17420v1 [cs.CL] 23 May 2025

DASH: Input-Aware Dynamic Layer Skipping for Efficient LLM Inference with Markov Decision Policies

Ning Yang1,2, Fangxin Liu1,2, Junjie Wang3, Tao Yang4, Kan Liu5, Haibing Guan1 and Li Jiang*1,2
1. Shanghai Jiao Tong University 2. Shanghai Qi Zhi Institute 3. Northeast University 4. Huawei Technologies Ltd. 5. Alibaba Group
{yn937391832, liufangxin, ljiang_cs}@sjtu.edu.cn

Abstract

Large language models (LLMs) have achieved remarkable performance across a wide range of NLP tasks. However, their substantial inference cost poses a major barrier to real-world deployment, especially in latency-sensitive scenarios. To address this challenge, we propose DASH, an adaptive layer-skipping framework that dynamically selects computation paths conditioned on input characteristics. We model the skipping process as a Markov Decision Process (MDP), enabling fine-grained token-level decisions based on intermediate representations. To mitigate potential performance degradation caused by skipping, we introduce a lightweight compensation mechanism that injects differential rewards into the decision process. Furthermore, we design an asynchronous execution strategy that overlaps layer computation with policy evaluation to minimize runtime overhead. Experiments on multiple LLM architectures and NLP benchmarks show that our method achieves significant inference acceleration while maintaining competitive task performance, outperforming existing methods.

1 Introduction

Transformer-based architectures have become the backbone of modern AI systems due to their strong long-range dependency modeling (Grattafiori et al., 2024; Han et al., 2022). In particular, Large Language Models (LLMs) built on Transformers have achieved impressive results across natural language and vision tasks (Achiam et al., 2023; Guo et al., 2025).
However, the growing size and complexity of these models pose significant challenges for deployment, especially in latency-sensitive or resource-constrained environments (Brown et al., 2020). Reducing inference cost has thus become a key research goal.

Layer skipping has emerged as a promising direction for reducing inference cost. Existing work (Liu et al., 2024; Varshney et al., 2023; Fan et al., 2024; Liu et al., 2023) shows that not all layers are equally important for every input. In many cases, early layers can already produce sufficiently informative representations, while further computation may offer diminishing returns or even introduce noise. Several recent works propose fixed or heuristic-based skipping strategies, including early-exit mechanisms (e.g., SkipDecode (Del Corro et al., 2023), LayerSkip (Elhoushi et al., 2024)), periodic skipping, or static similarity-based skipping (e.g., FFN-SkipLLM (Jaiswal et al., 2024), AdaSkip (He et al., 2025)).

However, these approaches face several limitations. First, most skip policies are static or predefined, lacking the flexibility to adapt to input-specific semantics or token-level dynamics. Second, indiscriminate skipping of layers may cause semantic drift or loss of crucial contextual information, resulting in substantial performance degradation. In practice, these methods struggle to balance speed and accuracy, often achieving only modest acceleration before model quality drops significantly.

To address these challenges, we propose DASH, an adaptive dynamic layer-skipping framework that learns to select computation paths conditioned on the input. We formulate the layer-skipping process as a Markov Decision Process (MDP), enabling token-level, state-aware decisions at each layer. Our method dynamically determines
whether to compute, skip, or exit based on intermediate representations, allowing for fine-grained control over computation. To further mitigate the risk of a performance drop from skipped layers, we introduce a compensation mechanism that models differential rewards, adjusting hidden states to preserve semantic fidelity. Moreover, we propose an asynchronous execution scheme that overlaps skip-decision computation with forward computation, hiding control latency and enabling real-time skipping with minimal overhead.

Our contributions are summarized as follows:

• We formulate layer skipping as a token-level Markov Decision Process, enabling dynamic and context-aware layer execution conditioned on intermediate representations.
• We propose a compensation mechanism using differential rewards to preserve accuracy under aggressive skipping.
• We introduce an asynchronous execution strategy that overlaps decision-making and layer computation, reducing runtime overhead.
• We validate our approach on multiple LLM backbones, demonstrating significant speedups with minimal performance loss, outperforming existing skipping methods.

2 Background

2.1 Layer Skipping

Layer skipping (He et al., 2025; Men et al., 2024) has emerged as a promising technique to accelerate inference by reducing redundant computation across Transformer layers. Existing approaches can be broadly categorized into three types: early skipping, periodic skipping, and early exit.

Early skipping (Del Corro et al., 2023) deterministically skips the first few layers at inference time. While it enables efficient batched execution, it may discard critical early-stage features, potentially harming performance. Periodic skipping (Liu et al., 2024) skips layers at fixed intervals during inference, reducing computation with a regular pattern that supports batching.
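The three static schedules above can be contrasted in a short sketch. Here `layers` (a list of layer callables), `confident`, and `classifier` are hypothetical stand-ins for illustration, not the cited systems' actual interfaces:

```python
def early_skip(layers, h, n_skip=4):
    """Early skipping: deterministically drop the first n_skip layers."""
    for layer in layers[n_skip:]:
        h = layer(h)
    return h

def periodic_skip(layers, h, period=3):
    """Periodic skipping: skip every `period`-th layer, a fixed pattern
    applied identically to every input."""
    for i, layer in enumerate(layers):
        if (i + 1) % period == 0:
            continue  # skipped layer: hidden state passes through unchanged
        h = layer(h)
    return h

def early_exit(layers, h, confident, classifier):
    """Early exit: run sequentially and halt once an auxiliary
    confidence estimate crosses its threshold."""
    for layer in layers:
        h = layer(h)
        if confident(classifier(h)):
            break  # remaining layers are never executed
    return h
```

All three are fixed in advance or gated by a single scalar confidence rather than by token-level state.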
However, it applies the same skipping schedule to all inputs, ignoring variations in layer importance and potentially skipping critical layers. Early exit (Wang et al., 2022; Elhoushi et al., 2024) performs inference sequentially and halts once a confidence threshold is met. While more adaptive, it often incurs extra overhead to train confidence estimators or auxiliary classifiers, and it may miss valuable information in deeper layers.

Despite their efficiency, these methods lack fine-grained, input-aware adaptability. They either follow static schedules or require additional training costs to mitigate performance degradation.

2.2 Challenges and Opportunities

Observation 1: Layer importance varies significantly across different models.

We begin by analyzing the similarity between the Transformer layers' input and output representations to assess their relative importance. Specifically, given an input vector \vec{x} and an output vector \vec{y}, we measure their cosine similarity as:

\mathrm{Similarity}(\vec{x}, \vec{y}) = \frac{\vec{x} \cdot \vec{y}}{\|\vec{x}\|\,\|\vec{y}\|}    (1)
= \frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2}\,\sqrt{\sum_{i=1}^{n} y_i^2}}    (2)

Figure 1: Cosine similarity and model accuracy analysis. The left panel illustrates the cosine similarity across layers (Qwen-2.5-7B, LLaMA-2-7B), indicating that representation similarity stabilizes after the initial layers despite early fluctuations. The right panel shows a precipitous decline in model accuracy as the number of skipped layers increases, emphasizing that minimal skipping maintains reasonable performance while excessive skipping results in a substantial accuracy drop.

We interpret layers with high input-output similarity as
less important, since the output remains close to the input, implying limited transformation and contribution during inference. Conversely, layers with lower similarity contribute more substantially and thus hold higher importance.

We evaluate the input-output similarity across layers of two distinct models on the CNN/DM reasoning task. As shown in Figure 1, except for the initial and final layers, most intermediate layers exhibit high similarity, suggesting these layers can be skipped with minimal impact on performance. Notably, the similarity distributions differ markedly between models, underscoring the necessity of adaptive strategies tailored to the unique characteristics of each model.

Observation 2: Static layer skipping leads to irreversible accuracy degradation.

We conduct layer skipping based on inter-layer similarity for the Qwen-2.5-7B model by sequentially skipping the layers with the highest similarity and tracking the corresponding accuracy changes (Figure 1). The results reveal a steep accuracy decline as more layers are skipped. Specifically, for the 28-layer Qwen model, omitting just 6 to 8 of the most similar layers causes accuracy to deteriorate to the level of a random baseline. This demonstrates that simple static skipping cannot achieve a favorable trade-off between compression and performance retention.

Figure 2: I/O similarities between different samples on the Qwen model with the MMLU dataset: (a) two correct samples with the same answer; (b) a correct and an incorrect sample; (c) two correct samples with different answers.

To better understand the interplay between layer skipping and input variability, we analyze token-level input-output similarity over a small validation subset, focusing on a single-token generation task.
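The layer-importance probe of Observation 1 amounts to applying Eqs. (1)-(2) per layer. A minimal NumPy sketch, assuming per-layer (input, output) activation vectors have already been captured (e.g., via forward hooks); the function names are illustrative, not from the paper's code:

```python
import numpy as np

def layer_io_similarity(x, y):
    """Cosine similarity between a layer's input x and output y, Eqs. (1)-(2)."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def rank_layers_by_redundancy(io_pairs):
    """Given one (input, output) activation pair per layer, return layer
    indices ordered from most redundant (highest I/O similarity, i.e. the
    layer barely transforms its input) to least, plus the raw scores."""
    sims = [layer_io_similarity(x, y) for x, y in io_pairs]
    order = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)
    return order, sims
```

A static skipping baseline would simply drop the top-ranked layers; Observation 2 shows why that fails beyond a handful of layers.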
Figure 2 contrasts three output scenarios: identical correct outputs, outputs differing where one is correct and the other incorrect, and both correct but distinct outputs. The results highlight significant variation in similarity scores across tokens and layers, indicating that layer importance depends heavily on input content. Consequently, static similarity metrics aggregated over the entire dataset fail to capture this dynamic behavior, risking the omission of critical layers for specific inputs and resulting in accuracy loss.

These findings motivate the need for an input-adaptive, dynamic layer skipping mechanism that balances acceleration gains with accuracy preservation.

Observation 3: Embedding changes are slow across layers.

We further analyze the input-output variations between consecutive layers for a fixed input and observe that the cosine similarity between embeddings or activations of adjacent layers remains consistently high across multiple models. Taking Qwen-2.5-7B as an example (Figure 1), from the fifth layer onward, the similarity between consecutive layers exceeds 0.9, indicating that the embedding direction evolves gradually as the input propagates through the network. Notably, the most significant change occurs at the first layer. This finding suggests that such slow variation in embeddings can be exploited to design cooperative acceleration strategies, potentially mitigating the latency overhead introduced by dynamic layer-skipping mechanisms.

2.3 Motivation

Transformer-based
models often exhibit significant inter-layer redundancy. Recent works such as SkipDecode and AdaSkip demonstrate that substantial portions of computation can be skipped without severe performance degradation. However, existing layer-skipping methods still face challenges in balancing skipping rates and prediction quality, largely due to input variability and architectural differences, even among models designed for the same tasks.

This challenge is particularly pronounced under dynamic input conditions. While static layer skipping's limitations are well known, compensating for information loss during runtime skipping remains underexplored. Our analysis of input-output similarity across Transformer layers reveals that many layers contribute minimal incremental information and can be safely skipped. Crucially, which layers can be skipped varies dynamically with input characteristics.

Motivated by this, we model dynamic layer skipping as a sequential decision-making process inspired by Markov decision processes. We propose the DASH framework, an adaptive system that dynamically identifies optimal computational paths based on real-time input analysis. By integrating a refined layer-skipping decision mechanism with an accuracy compensation strategy, DASH achieves a favorable trade-off between efficiency and performance. Furthermore, through asynchronous execution, our framework maximizes skipping while avoiding additional runtime overhead, making it well-suited for resource-constrained deployment.

3 Methodology

In this section, we present DASH, an adaptive dynamic layer-skipping framework that identifies optimal computational paths based on input characteristics. As illustrated in Figure 3, DASH utilizes a scoring model to evaluate the importance of each Transformer layer during inference and dynamically decides whether to execute or skip the subsequent layer.
To prevent accuracy degradation caused by aggressive skipping, DASH enforces strict constraints on skipping decisions and integrates a mixed-precision compensation mechanism, ensuring minimal impact on the model's original performance. Furthermore, to mitigate the additional latency introduced by the decision-making process, DASH exploits the high similarity between activation inputs of adjacent layers, enabling asynchronous evaluation that reduces overhead. Collectively, these components enable DASH to effectively reduce redundant computation while preserving accuracy, making it highly suitable for deployment in resource-constrained environments.

Figure 3: Overview of the DASH framework. The method first processes the embedding layer and maintains full-precision computation in the first Transformer layer. Starting from the second layer, the scoring model evaluates the next layer's state (FP16, INT8, INT4, or skipped) using the modified input of the current layer, dynamically selecting the next layer's state. When a layer is skipped, a compensation mechanism is activated based on the scoring results, effectively balancing inference speed and model accuracy.

3.1 Problem Formulation

Consider a pre-trained LLM M with L Transformer layers. Given an input sequence X, the standard forward pass computes all layers sequentially to produce the output f_M(X). Our objective is to reduce inference latency by selectively skipping certain layers while maintaining output quality. We formalize this as a layer-skipping decision problem: construct a binary decision sequence S = s_1, s_2, ..., s_L, where each s_i ∈ {0, 1} indicates whether layer i is skipped (0) or executed (1). The subset of executed layers forms a computational path P = {l_i | s_i = 1}. Let f_{M,S}(X) denote the model output when only the layers selected by S are computed. We aim to minimize the expected number of executed layers, |S| = \sum_i s_i, subject to an acceptable accuracy loss measured by a distance function d(·,·) between the full and partial outputs:

\min_{S}\; \mathbb{E}_X\big[|S_X|\big] \quad \text{s.t.} \quad \mathbb{E}_X\big[d(f_M(X), f_{M,S_X}(X))\big] \le \epsilon    (3)

where \epsilon is a user-specified tolerance threshold.

To capture dependencies between layers and adapt skipping decisions dynamically based on intermediate representations, we model the skipping strategy as a sequential decision-making process. Specifically, a policy \pi_\theta parametrized by \theta decides the next layer's execution state conditioned on the current layer's state s_i and hidden features h_i:

s_{i+1} \sim \pi_\theta(s_{i+1} \mid s_i, h_i)    (4)

The goal is to learn parameters \theta that optimize the trade-off between computational cost and prediction quality. Formally,

\theta^* = \arg\min_{\theta}\; \mathbb{E}_X\Big[\sum_{i=1}^{L} s_i\Big] \quad \text{s.t.} \quad \mathcal{L}(\pi_\theta) + \mathcal{L}(M) \le \epsilon    (5)

where \mathcal{L}(·) denotes the downstream task loss or overall model loss, incorporating both prediction accuracy and skipping regularization.

3.2 Compensation Mechanisms Based on Differential Rewards

Although DASH achieves significant inference acceleration by skipping a substantial number of layers, we observe that excessive skipping may cause non-negligible accuracy degradation. To address this issue, DASH incorporates a compensation mechanism based on differential rewards, which helps preserve model performance while enabling more aggressive layer skipping. Specifically, when the scoring model decides to skip a layer l_i, it implies that the input and output of the layer are sufficiently similar.
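The sequential decision process of Section 3.1 (Eqs. 3-5) can be sketched as a forward pass that consults a policy between layers; the `policy` callable below is a hypothetical stand-in for the learned \pi_\theta, using the binary states of Eq. (3):

```python
def forward_with_skips(layers, h, policy):
    """Run a forward pass where a policy chooses each next layer's state.
    s = 1 executes a layer, s = 0 skips it; the first layer always runs."""
    s = 1
    executed = []
    for i, layer in enumerate(layers):
        if s == 1:
            h = layer(h)
            executed.append(i)
        # decide s_{i+1} for the next layer from the current state and hidden h
        s = policy(s, h, i)
    return h, executed
```

The computational path P of Eq. (3) is exactly the `executed` list here, and |S| is its length.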
However, instead of directly using the output of the previous layer l_{i-1} as the input to the next layer l_{i+1}, we introduce computational compensation to approximate the skipped transformation. DASH supports three actions for each layer: full execution, complete skipping, and partial computation.

• Full Execution performs the standard FP16 computation.

• Complete Skipping uses a scaling factor to approximate the transformation. The output is estimated as:

Y^i_{jt} = \mathrm{scale}_i \cdot X^i_{jt}    (6)

where the scaling factor is computed offline using a calibration dataset:

\mathrm{scale}_i = \frac{\sum_{j=1}^{N} \sum_{t=1}^{|T_j|} \|Y^i_{jt}\| / \|X^i_{jt}\|}{\sum_{j=1}^{N} |T_j|}    (7)

Here, X^i_{jt} and Y^i_{jt} denote the input and output of layer i for token t in sequence j, respectively.

• Partial Computation indicates that the layer's computation is not essential in full precision. In this case, DASH uses low-bit quantization (e.g., INT4 or INT8) instead of FP16 to execute the layer at reduced cost while preserving critical information.

To unify this design, each layer's execution state is encoded as a discrete value. Specifically, a state of 0 indicates that the layer is skipped and its output is approximated via scaling compensation. A state of 1 denotes that the layer is executed with low-precision INT4 computation, while 2 represents execution with INT8. Finally, a state of 4 corresponds to full-precision FP16 execution. This unified encoding enables flexible trade-offs between computational cost and accuracy on a per-layer basis.

The overall optimization objective remains unchanged (Equation 5): minimize the expected total computation cost while keeping the accuracy loss within a predefined threshold. Finally, DASH integrates these options into a
unified decision process by assigning differential rewards based on the scoring model's output. The more computationally efficient the chosen action is, the higher the reward, conditioned on the predicted impact on accuracy. This differential reward guides the policy model to make fine-grained, input-adaptive decisions and enables DASH to explore more aggressive yet accurate skipping strategies.

3.3 Layer Skipping Decision Mechanism

Given the current layer l_i of the model, our objective is to determine the execution state of the next layer, s_{i+1}, where s ∈ {0, 1, 2, 4}. For accuracy preservation, we never skip the first layer and always set s_1 = 4. The final layer is also excluded from skipping decisions to ensure the integrity of the output representations.

We employ a scoring model G_\theta to implement the skipping policy. The score is computed from the current hidden state, the current and next layer indices, and the current and candidate next layer states:

g = G_\theta(h, l_i, l_{i+1}, s_i, s_{i+1})    (8)

The scoring function G_\theta is defined as:

G_\theta = \mathrm{MLP}([h_i; E(l_i); E(l_{i+1})]) - \alpha (s_{i+1} - s_i)    (9)

where E(·) denotes the layer position embedding function and [·;·] indicates vector concatenation. The term -\alpha(s_{i+1} - s_i) serves as a regularizer that penalizes abrupt changes in layer execution states, encouraging smoother transitions and greater skipping continuity.

The MLP is a small feedforward network:

\mathrm{MLP}(x) = W_3 \cdot \mathrm{GELU}(W_2 \cdot \mathrm{GELU}(W_1 \cdot x))    (10)

where W_1 ∈ R^{(d_h + 2d_l) \times d_1}, W_2 ∈ R^{d_1 \times d_2}, and W_3 ∈ R^{d_2 \times 1}, with d_h the hidden state dimension, d_l the layer embedding dimension, and d_1, d_2 the intermediate MLP dimensions.

To balance exploration and exploitation during training, we apply a temperature-based sampling strategy:

P(s_{i+1} = s \mid h_i, s_i) \propto \exp(G_\theta / \tau)    (11)

where \tau = \tau_0 \cdot e^{-\alpha t} is a temperature parameter that decays over training steps t to reduce randomness over time.

Differential Reward Function.
To guide training, we introduce a location-sensitive differential reward combining accuracy and efficiency:

r_i = r_{\mathrm{acc}} \cdot \omega_i + r_{\mathrm{eff}}    (12)

Here, r_{\mathrm{acc}} measures prediction accuracy, \omega_i is a position-based importance weight, and r_{\mathrm{eff}} rewards computational savings. For classification tasks, r_{\mathrm{acc}} = 1 if the prediction is correct and -1 otherwise. For generation tasks evaluated with perplexity, the reward is calculated as:

r_{\mathrm{acc}} = \epsilon - \frac{|\mathrm{ppl}_f(x) - \mathrm{ppl}_{f_S}(x)|}{\mathrm{ppl}_f(x)}    (13)

The location importance weight \omega_i takes both layer depth and skipping status into account:

\omega_i = \frac{|S_X| - |S_{X,l}|}{|S_X|} \cdot \sigma\Big(s_i - \frac{|S_M|}{|S_X|}\Big)    (14)

where |S_X| is the total sum of all layer states, |S_{X,l}| is the cumulative state sum from layer 1 to l_i, |S_M| is the maximum total state value, and \sigma is the sigmoid function. This encourages skipping in earlier layers while accounting for execution intensity.

The efficiency reward is computed as:

r_{\mathrm{eff}} = \beta \cdot (4 - s_{i+1})    (15)

This term provides higher rewards for more aggressive skipping.

Training Objective. We optimize the scoring model via policy-gradient reinforcement learning, aiming to maximize the expected reward:

\nabla_\theta \mathcal{L}_{\mathrm{RL}} \approx -\sum_{i=1}^{L} r_i \nabla_\theta \log \pi_\theta(s_{i+1} \mid s_i, h_i)    (16)

We jointly train the scoring model and the main model using a combined loss:

\mathcal{L}_{\mathrm{all}} = \mathcal{L}_{\mathrm{CE}} + \lambda \mathcal{L}_{\mathrm{RL}}    (17)

where \mathcal{L}_{\mathrm{CE}} is the cross-entropy loss for the main task and \lambda is a balancing factor that controls the trade-off between accuracy and efficiency.

Inference Strategy. During inference, we adopt a greedy strategy, selecting the next layer state with the highest score:

s_{i+1} = \arg\max_{s \in \{0,1,2,4\}} G_\theta(h, l_i, l_{i+1}, s_i, s)    (18)

This allows DASH to dynamically determine the execution path across layers, enabling efficient and adaptive inference.

3.4 Asynchronous Skipping Decision for Latency Hiding
Although the DASH policy dynamically reduces FLOPs during inference, the layer-wise decision process introduces non-negligible latency. Specifically, the decision score of each layer must be computed before executing that layer, which adds a serial dependency and offsets the speedup benefits from skipping.

To address this issue, we propose an asynchronous decision mechanism that hides this latency. Our key insight is based on the empirical observation that hidden states, especially in Transformer-based models, evolve gradually across layers. Thus, we can approximate the current layer's hidden state using the previous one. Specifically, when computing layer l_i, we approximate h_{i+1} with a transformed version of h_i, scaled by a compensation factor \mathrm{scale}_i:

h'_{i+1} = \mathrm{scale}_i \cdot h_i    (19)

We then use h'_{i+1} to compute the decision score for the next layer in parallel:

s_{i+1} = \arg\max_{s \in \{0,1,2,4\}} G_\theta(h'_{i+1}, l_i, l_{i+1}, s_i, s)    (20)

In this way, the scoring process for layer l_{i+1} overlaps with the computation of layer l_i.

4 Evaluation

In this section, we present a comprehensive evaluation of DASH's inference-time performance. We begin by detailing the experimental setup, including benchmarks, baselines, and configurations. We then analyze the experimental results to demonstrate the effectiveness of our approach.

4.1 Experimental Settings

Benchmarks. We evaluate DASH on two representative large language models: LLaMA-2 (referred to as LLaMA) (Touvron et al., 2023) and Qwen-2.5-7B-Instruct (referred to as Qwen) (Yang et al., 2025). The evaluation covers both generation and classification tasks. For text generation, we use the WikiText-2 (Merity et al., 2016) and CNN/DM (See et al., 2017) datasets. For classification, we select two commonly used benchmarks: MMLU (Hendrycks et al., 2020) and ARC (Clark et al., 2018).

Models. We use pre-trained models from the HuggingFace and TorchVision Model Zoos. All baseline results are measured using FP16 precision.
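The offline scale factors of Eq. (7), used for skip compensation, can be estimated from a small calibration set. A minimal sketch, assuming calibration activations have already been collected per layer; variable names are illustrative:

```python
import numpy as np

def calibration_scales(layer_pairs):
    """Eq. (7): for each layer i, average the norm ratio ||Y|| / ||X|| over
    all calibration tokens. `layer_pairs[i]` is a list of (X, Y) input/output
    activation vectors for layer i gathered from the calibration set."""
    scales = []
    for pairs in layer_pairs:
        ratios = [np.linalg.norm(y) / np.linalg.norm(x) for x, y in pairs]
        scales.append(sum(ratios) / len(ratios))
    return scales
```

The same per-layer factors also drive the hidden-state approximation of Eq. (19) in the asynchronous decision path.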
The models are evaluated under the same settings for a fair comparison.

Calibration. To compute the scaling parameters required for DASH's skip compensation, we randomly sample 128 inputs from each dataset to construct a calibration set.

Baselines. We compare our proposed method against four strong baselines for accelerating large language model inference: Early-Exit, SkipDecode, AdaSkip, and RandomSkip. These methods represent diverse strategies for reducing computational cost during the generation process.

1) Early-Exit (Fan et al., 2024) enables the model to produce outputs from intermediate hidden states without completing a full forward pass. It dynamically selects exit points based on input complexity, thereby reducing inference overhead. This method integrates early-exit modules at specific Transformer layers, allowing faster token generation with minimal quality degradation. 2) SkipDecode (Del Corro et al., 2023) is a token-level early-exit strategy optimized for batch inference and key-value caching. Unlike traditional early-exit methods, it determines a unified exit layer per token across the batch at each sequence position. It enforces a monotonic decrease in exit layers across time steps, avoiding redundant key-value recomputation for past tokens. Rather than halting computation, it strategically skips lower and middle layers, focusing computation on higher layers. 3) AdaSkip (He et
al., 2025) is an adaptive sublayer-skipping method tailored for long-context inference. It identifies less informative layers based on similarity metrics and enables sublayer-level skipping to enhance efficiency without significant performance loss. 4) RandomSkip serves as a control baseline that randomly skips layers during inference according to a predefined probability distribution, without using any learned policy.

4.2 End-to-End Results

Table 1 and Table 2 summarize the performance of various layer skipping methods applied to the Qwen and LLaMA models across multiple tasks, evaluated under acceleration ratios of 1.33×, 1.67×, and 2.0×.

As illustrated, our proposed method, DASH, consistently outperforms all baseline approaches at every acceleration level. Moreover, the performance gap widens as the acceleration ratio increases. At lower acceleration ratios, DASH incurs only negligible accuracy degradation; for instance, on the MMLU benchmark with the Qwen model, the accuracy drop is limited to 0.5%. In contrast, when the acceleration ratio reaches 2.0×, competing methods suffer severe performance degradation, approaching random-guess levels, whereas DASH maintains robust performance, thereby achieving a significantly better trade-off between inference speedup and prediction quality.

4.3 Ablation Study

Effectiveness of the scoring model. We first validate the effectiveness of the decision model proposed in DASH.

Figure 4: I/O similarity and layer-skipping states at different speedup ratios. The higher the I/O similarity, the more aggressive the layer-skipping strategy becomes, preferentially selecting layer states with higher acceleration ratios.

Using the Qwen model and the MMLU dataset, we analyze state function outputs at different acceleration ratios for the same sample,
comparing them with the layer-wise input-output similarity computed for that sample. As shown in Figure 4, although the specific skipped states vary across acceleration ratios, there is a clear positive correlation: layers exhibiting higher similarity tend to have a greater probability of being skipped, with an increased tendency towards complete skipping. This observation aligns well with our prior profiling analysis and demonstrates that DASH can accurately identify less critical layers, thereby achieving inference acceleration by selectively skipping layer computations.

Table 1: Evaluation results for the Qwen model. Each cell gives results at 1.33× / 1.67× / 2.0×; FP16 is the unaccelerated baseline.

| Method | Wikitext2 (ppl) | CNN/DM (Rouge-L) | MMLU (Acc.) | ARC-C (Acc.) |
|---|---|---|---|---|
| FP16 | 6.62 | 22.8 | 70.2 | 60.6 |
| Early-Exit | 13.1 / 35.4 / 1e3 | 21.6 / 18.4 / 10.7 | 56.4 / 38.8 / 26.2 | 45.5 / 38.5 / 30.2 |
| RandomSkip | 85.2 / 173.4 / 4e3 | 13.7 / 9.5 / 8.2 | 42.1 / 24.8 / 25.1 | 36.8 / 30.6 / 24.5 |
| SkipDecode | 36.8 / 96.5 / 1e3 | 20.2 / 13.5 / 8.1 | 63.5 / 60.4 / 55.3 | 55.3 / 40.9 / 33.5 |
| Adaskip | 8.12 / 18.4 / 77.4 | 21.4 / 20.8 / 15.7 | 66.8 / 58.1 / 43.2 | 50.4 / 39.8 / 31.1 |
| Ours | 7.43 / 19.9 / 53.2 | 22.5 / 22.1 / 19.2 | 69.7 / 67.5 / 61.0 | 58.4 / 52.2 / 36.5 |

Table 2: Evaluation results for the LLaMA model. Same layout as Table 1.

| Method | Wikitext2 (ppl) | CNN/DM (Rouge-L) | MMLU (Acc.) | ARC-C (Acc.) |
|---|---|---|---|---|
| FP16 | 5.47 | 27.7 | 46.3 | 39.9 |
| Early-Exit | 9.12 / 24.8 / 371 | 26.2 / 22.8 / 18.4 | 40.6 / 34.9 / 32.5 | 33.2 / 28.5 / 26.3 |
| RandomSkip | 113.8 / 1e3 / 3e3 | 15.8 / 8.4 / 8.1 | 33.8 / 26.1 / 25.1 | 27.3 / 25.1 / 24.9 |
| SkipDecode | 20.7 / 44.3 / 192.4 | 22.8 / 19.6 / 15.0 | 44.1 / 42.7 / 40.8 | 37.1 / 33.7 / 28.4 |
| Adaskip | 7.03 / 21.0 / 55.4 | 24.3 / 21.2 / 17.3 | 45.9 / 42.8 / 32.4 | 38.1 / 35.3 / 29.7 |
| Ours | 6.61 / 13.6 / 38.8 | 26.8 / 23.3 / 20.1 | 46.0 / 44.6 / 42.8 | 38.6 / 36.4 / 31.1 |

Effectiveness of the compensation mechanisms. Our dynamic layer-skipping strategy incorporates several compensation mechanisms, which enable flexible adaptation of skipping strategies and maximize the potential for layer skipping without significant accuracy degradation. We evaluate the impact of different compensation techniques on model accuracy, with results summarized in Table 3. Firstly, compared to static skipping methods, the introduction of a dynamic decision-making framework substantially restores accuracy, highlighting the efficacy of our adaptive approach. Secondly, the application of a simple scaling-based compensation method further improves accuracy relative to no compensation. Furthermore, integrating INT4/INT8 quantization within the layer-skipping pipeline also contributes to accuracy recovery. Collectively, these compensation mechanisms empower DASH to achieve superior accuracy-compression trade-offs compared to alternative approaches.

Table 3: Accuracy results under different compensation strategies (original accuracy: 70.18%).

| Method | 1.33× | 1.67× | 2.0× |
|---|---|---|---|
| 1. Naive static skipping | 43.1 | 25.3 | 25.2 |
| 2. Dynamic decision skipping | 62.1 | 38.7 | 24.9 |
| 3. (2) + scale compensation | 64.3 | 58.9 | 42.7 |
| 4. (3) + INT8 compensation | 69.5 | 66.1 | 59.2 |
| 5. (3) + INT4/INT8 compensation | 69.7 | 67.5 | 61.0 |

Transferability of scoring models. In practice, decision models are typically trained on task-specific datasets but are expected to generalize across diverse downstream tasks. To investigate this, we evaluate the performance of decision systems trained on different datasets when applied to the MMLU benchmark, with results presented in Figure 5. While all systems employ the same training methodology, their performance differs significantly due to variations in task-specific reward signals, where the decision model obtained using the corresponding training set performs the best. Nevertheless, even the least effective decision model achieves notable accuracy retention when executing dynamic layer skipping.
This indicates a degree of transferability in our approach, although there remains substantial room for improving its generalization capability across tasks.

(Figure 5: Results on the MMLU benchmark for decision systems trained on different datasets with Qwen-2.5-7B. Accuracy (%): Origin 70.2, from MMLU 68.9, from Wikitext2 66.2, from CNN/DM 67.4, from ARC 65.2.)

5 Conclusion

We identify dynamic layer skipping as a key approach to addressing the significant computational redundancy found in large Transformer-based language models. Our analysis reveals that static layer-skipping methods often suffer from severe accuracy degradation due to their inability to adapt to input variability. To tackle this, we propose the DASH framework, which formulates layer skipping as a sequential decision-making process guided by a learned scoring model. Compared to existing static and heuristic skipping strategies, DASH consistently achieves higher acceleration ratios with minimal performance loss, thanks to its adaptive and input-aware skipping mechanism. Furthermore, the integration of compensation techniques effectively mitigates accuracy degradation, preserving model quality under aggressive skipping. We also demonstrate that the learned decision models exhibit promising transferability across different downstream tasks, indicating potential for broader applicability.

Limitations

Although we have demonstrated that DASH exhibits a certain degree of transferability across different tasks, the proposed scoring model still requires continuous updates to adapt
to task requirements in long-running scenarios. Furthermore, due to the relatively low similarity in the initial layers of the model, the correction mechanism during asynchronous execution introduced in this solution may exhibit some instability. This instability tends to make the scoring model consistently avoid skipping the initial few layers, even when their I/O similarity has become sufficiently high, consequently preventing further acceleration.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and others. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and others. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457.

Luciano Del Corro, Allie Del Giorno, Sahaj Agarwal, Bin Yu, Ahmed Awadallah, and Subhabrata Mukherjee. 2023. SkipDecode: Autoregressive skip decoding with batching and caching for efficient LLM inference. arXiv preprint arXiv:2307.02628.

Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, and others. 2024. LayerSkip: Enabling early exit inference and self-speculative decoding. arXiv preprint arXiv:2404.16710.

Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, and Zhongyuan Wang. 2024. Not all layers of LLMs are necessary during inference. arXiv preprint arXiv:2403.02181.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and others. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, and others. 2022. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):87-110.

Zhuomin He, Yizhen Yao, Pengfei Zuo, Bin Gao, Qinya Li, Zhenzhe Zheng, and Fan Wu. 2025. AdaSkip: Adaptive sublayer skipping for accelerating long-context LLM inference. arXiv preprint arXiv:2501.02336.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Ajay Jaiswal, Bodun Hu, Lu Yin, Yeonju Ro, Shiwei Liu, Tianlong Chen, and Aditya Akella. 2024. FFN-SkipLLM: A hidden gem for autoregressive decoding with adaptive feed forward skipping. arXiv preprint arXiv:2404.03865.

Yijin Liu, Fandong Meng, and Jie Zhou. 2024. Accelerating inference in large language models with a unified layer skipping strategy. arXiv preprint arXiv:2404.06954.

Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali
Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, and others. 2023. Deja Vu: Contextual sparsity for efficient LLMs at inference time. In International Conference on Machine Learning, pages 22137-22176. PMLR.

Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. 2024. ShortGPT: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Neeraj Varshney, Agneet Chatterjee, Mihir Parmar, and Chitta Baral. 2023. Accelerating LLM inference by enabling intermediate layer decoding. CoRR.

Jue Wang, Ke Chen, Gang Chen, Lidan Shou, and Julian McAuley. 2022. SkipBERT: Efficient inference with shallow layer skipping. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7287-7301.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and others. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388.
Debiasing CLIP: Interpreting and Correcting Bias in Attention Heads

Wei Jie Yeo (Nanyang Technological University), Rui Mao (Nanyang Technological University), Moloud Abdar (The University of Queensland), Erik Cambria (Nanyang Technological University), Ranjan Satapathy (IHPC, A*STAR)

Abstract

Multimodal models like CLIP have gained significant attention due to their remarkable zero-shot performance across various tasks. However, studies have revealed that CLIP can inadvertently learn spurious associations between target variables and confounding factors. To address this, we introduce Locate-Then-Correct (LTC), a contrastive framework that identifies spurious attention heads in Vision Transformers via mechanistic insights and mitigates them through targeted ablation. Furthermore, LTC identifies salient, task-relevant attention heads, enabling the integration of discriminative features through orthogonal projection to improve classification performance. We evaluate LTC on benchmarks with inherent background and gender biases, achieving a gain of over 50% in worst-group accuracy compared to non-training post-hoc baselines. Additionally, we visualize the representations of selected heads and find that the resulting interpretations corroborate our contrastive mechanism for identifying both spurious and salient attention heads. Code is available at https://github.com/wj210/CLIP_LTC.

1 Introduction

Rapid advancements in multimodal foundation models like CLIP [Radford et al., 2021, Singha et al., 2024, Fan et al., 2024, Zhang et al., 2025] have enabled remarkable zero-shot learning capabilities across various tasks. However, these models often inherit undesirable biases due to spurious correlations present in their extensive training datasets or imbalanced data distributions [Mao et al., 2023, Alabdulmohsin et al., 2024].
Biases include associations between target classes and confounding attributes, e.g., background [Du et al., 2022, Zhang and Ré, 2022, Sagawa et al., 2019, Wang et al., 2024] or gender [Xiao et al., 2024, Hall et al., 2024, Nadeem et al., 2025], which largely degrade performance in underrepresented subgroups and perpetuate harmful stereotypes. Recently, Gandelsman et al. [2023] proposed to ground visual representations of intermediate attention heads in CLIP's vision Transformer (ViT) [Dosovitskiy, 2020] onto a set of natural language statements. In addition to enhanced interpretability, this enables locating certain components in the model that may encode unwanted biases. However, a downside of the proposed framework is the need for extensive manual effort in summarizing the representations from the set of statements, which can often be inconclusive. Existing approaches to debiasing vision models often rely on extensive fine-tuning [Zhang and Ré, 2022, Sagawa et al., 2019, Wortsman et al., 2022], which can be computationally prohibitive for large foundation models.

(Preprint. Under review. arXiv:2505.17425v1 [cs.CV] 23 May 2025)

(Figure 1: Left: Linear decomposition of image representations into individual attention head activations [Elhage et al., 2021]. Spurious states (background: land) activate more strongly on images with opposing attributes, whereas target states (class: waterbird) activate on images with matching attributes. Right: LTC identifies and corrects these states: mean-ablation mitigates spurious states, while knowledge injection enhances target states.)

Training-free methods, on the other hand, may include orthogonal projection in the image [Adila et al., 2023] or text [Chuang et al., 2023] representation space. In contrast, we propose to perform debiasing only on specific attention heads, leaving the rest of the model untouched. Our framework, Locate-Then-Correct (LTC), first
identifies attention heads that strongly encode spurious attributes and target class features. These activations are then subjected to debiasing procedures, either by removing spurious associations or by injecting class-discriminative features through orthogonal projection. LTC employs a "diagnose-then-correct" approach to address bias at a granular level within the vision model. Our work provides a concrete demonstration of how mechanistic insights can be translated into practical tools for debiasing, addressing a pressing challenge in modern machine learning. Beyond improving robustness, our approach offers significantly greater interpretability than existing methods, enabling a clearer understanding of why specific model components should be corrected and how such corrections impact behavior.

2 Related Work

Bias in Vision. Improving robustness against undesirable spurious correlations in vision foundation models is an actively researched area. Existing training-based methods are split between those requiring supervised group labels [Sagawa et al., 2019, Zhang and Ré, 2022] and those inferring group labels in an unsupervised manner [Liu et al., 2021, Nam et al., 2020, Sohoni et al., 2020]. Non-training methods include utilizing orthogonal projections to erase attribute knowledge and enhance discriminative properties [Adila et al., 2023] or to remove spurious attribute relationships [Chuang et al., 2023]. Kim et al. [2024] propose to incorporate detected bias keywords into class prompts with large language models (LLMs) to improve robustness. However, these methods offer no explainable insight into why they work. In contrast, our framework performs corrective measures only on specific decomposed components of the model and enables fine-grained interpretation.

Interpretability in Transformers.
Transformer [Vaswani, 2017] interpretability has been studied at various granularities, including neurons [Olah et al., 2017, Bau et al., 2020, Goh et al., 2021, Shaham et al., 2024] and attention layers or heads [Gandelsman et al., 2023, Yeo et al., 2024, Vig et al., 2020]. Elhage et al. [2021] showed that Transformers can be viewed as a linear combination of information from attention heads in the residual stream, shaping the final representation. Leveraging this decomposability, Nostalgebraist [2020] and Jiang et al. [2024] examined localized predictions of intermediate states. Our work extends these efforts by identifying and interpreting specific states to enhance robustness and explain their effectiveness.

3 Background

We start by looking at the architecture of the CLIP [Radford et al., 2021] model and how a classification prediction is made. We primarily focus on the ViT architecture due to its decomposable nature. CLIP consists of an image encoder $E_I$ and a text encoder $E_T$. Given an image $I$, the prediction logit $S_y$ for a target class $y \in Y$ is computed as the cosine similarity between the projected image representation $P_I(E_I(I))$ and the projected text representation $P_T(E_T(y))$, where $P_I$ and $P_T$ denote the separately parameterized image and text projection layers:

$$S_y = \langle P_I(E_I(I)), P_T(E_T(y)) \rangle \quad (1)$$

The predicted class is then chosen greedily over all target classes $Y$, scaled by an optional temperature $t$: $\arg\max_{y \in Y} S_y / t$.

3.1 Spurious Bias

We consider a dataset $D$ consisting of $M$ samples, each represented as a tuple $\{I, y^*, s\}$, where $I$ is the input image, $y^* \in Y$ is the correct class, and $s \in S$ is the spurious attribute. In Waterbirds [Sagawa et al., 2019], $s$ is an attribute describing "background
", or "gender" in datasets with gender bias. Previous studies have shown that zero-shot CLIP models are susceptible to spurious correlations, often associating a particular $s$ with a target class $y$ due to the imbalanced nature of the training distributions. We first define two sub-groups, $G_P$ and $G_N$, representing positive and negative associations, respectively. $G_P$ contains samples where the model infers a positive spurious relationship between $s$ and $y$, i.e., "water background" with "waterbird", while $G_N$ contains mismatched pairs like "land background". Models typically perform better on $G_P$ than $G_N$; the goal is to reduce this performance gap, $G_P - G_N$, while improving $G_N$.

3.2 Linear Decomposition of Image Representations

A recent work by Gandelsman et al. [2023] demonstrates that image representations in a ViT can be expressed as a linear sum of contributions from individual layers. A ViT consists of $L$ layers, each made up of a multi-head self-attention (MSA) module with $H$ heads and an MLP module. The input image is first split into $N$ patches and projected onto $N$ embedding tokens, $\{z_i^0\}_{i=1,\dots,N} \in \mathbb{R}^{N \times d}$, prepended with a [CLS] token $z_c^0$; here the superscript $0$ refers to the embedding layer. We leave out the sample notation for brevity. The mechanistic framework of the ViT can be regarded as a residual stream with information being added at each layer [Elhage et al., 2021] (see Fig. 1). We focus on $z_c$, since the final prediction depends on it. We refer to the intermediate activations after each layer as states. Starting from $z^0$, we derive $L$ intermediate states:

$$\hat{z}^l = \mathrm{MSA}^l(z^{l-1}) + z^{l-1}, \qquad z^l = \mathrm{MLP}^l(\hat{z}^l) + \hat{z}^l. \quad (2)$$

The overall computation of $E_I(I)$ is then factorized as:

$$E_I(I) = z^0 + \sum_{l=1}^{L} \mathrm{MSA}^l(z^{l-1}) + \sum_{l=1}^{L} \mathrm{MLP}^l(\hat{z}^l). \quad (3)$$

Eq. 3 shows that we can decompose the final output of the image encoder into a linear sum of direct effects, and similarly across each head and token in the MSA [Elhage et al., 2021, Gandelsman et al., 2023]:

$$\mathrm{MSA}^l(z^{l-1}) = \sum_{h=1}^{H} \sum_{i=0}^{N} \tilde{z}_i^{l,h}, \qquad \tilde{z}_i^{l,h} = a_i^{l,h} W_{V,O}^{l,h} z_i^{l-1}.$$
(4)

Here, $a_i^{l,h}$ and $W_{V,O}^{l,h}$ refer to the softmax attention weights and the combined value-output weight matrix of layer $l$ and head $h$, respectively. Our work is partly inspired by Gandelsman et al. [2023], who showed that individual attention heads can be grounded onto natural language statements. This aligns well with the "linear representation hypothesis" [Elhage et al., 2022], which suggests that high-level concepts are linearly separable. Building on these ideas, we propose a contrastive approach that eliminates the need for manual interpretation, enabling more efficient and conclusive identification of attention heads for debiasing.

4 Methodology

We focus on intermediate states from attention heads aggregated over token positions, $\hat{z}^{l,h} = \sum_{i=0}^{N} \hat{z}_i^{l,h}$, and omit MLP layers from our study due to their limited direct impact [Gandelsman et al., 2023] and granularity. We introduce a method to detect salient attention states with high direct effect in Sec. 4.1, followed by our contrastive approach for locating class-discriminative and spurious states in Sec. 4.2, and then the debiasing techniques in Sec. 4.3.

4.1 Locating Important Attention States

We start by locating salient states that contribute significantly towards a target class. We utilize Logit Lens (LL) [Nostalgebraist, 2020], an interpretability technique that projects an intermediate state onto
the unembedding matrix. LL enables visualizing the independent outcome of an intermediate state towards a target class. In CLIP, this is equivalent to decomposing the prediction logit into individual contributions by replacing the image representation in Eq. 1 with $\hat{z}^{l,h}$:

$$\mathrm{LL}(l, h, y) = \langle P_I(\hat{z}^{l,h}), P_T(E_T(y)) \rangle. \quad (5)$$

In addition, we assign an importance score $V^{l,h}_{(y)}$ to each head $\hat{z}^{l,h}$ by taking the difference between the projected logits for the target class $y$ and the other class $\bar{y} \neq y$:

$$V^{l,h}_{(y)} = \mathrm{LL}(l, h, y) - \mathrm{LL}(l, h, \bar{y}). \quad (6)$$

We use the subscript $(y)$ to denote scores conditioned on the selected class $y$. In binary cases, $y$ represents the predicted class $\hat{y}$, with $\bar{y}$ as the other class, or the second most probable class in multi-class scenarios. By repeating Eq. 6 across all heads, we obtain $V_{(y)} \in \mathbb{R}^{L \times H}$, where each element captures the contribution of a state toward $y$ over $\bar{y}$. To ensure sparsity in $V_{(y)}$, we replace the logit-difference representation with a one-hot encoding, assigning a non-zero value only to the state with the highest contribution:

$$V^{l,h}_{(y)} = \begin{cases} 1, & \text{if } (l, h) = \arg\max_{(l', h')} V^{l',h'}_{(y)}, \\ 0, & \text{otherwise.} \end{cases} \quad (7)$$

In practice, $V_{(y)}$ is averaged over $D$ and normalized such that the total contribution sums to 1. We find that Eq. 7 makes $V_{(y)}$ a sparse matrix with $K \ll L \times H$ non-zero entries, since earlier states tend to have lower direct effects (see Appendix C), analogous to Nostalgebraist [2020]. We denote $P^*$ as the set of positions corresponding to the non-zero entries, $|P^*| = K$.

4.2 Locating Spurious and Target Heads

Spurious and target states. Since our work focuses on tasks with spurious correlation, we first assume $V_{(y)}$ is represented as a mixture of contributions encoding $S$ or $Y$. Formally, we present $V_{(y)}$ as:

$$\sum_{k \in K} V_{(y),k} = V_Y + a_{sy} V_S + \epsilon, \quad a_{sy} \in [-1, 1], \qquad V_Y = \sum_{i \in P_Y} V_{(y),i}, \quad V_S = \sum_{j \in P_S} V_{(y),j}, \quad (8)$$

where $V_Y$ and $V_S$ refer to the contributions corresponding to the subsets of attention states representing the target class attribute, $Z_Y = \{z^{l,h}_Y\}_{(l,h) \in P_Y}$, and the spurious attribute, $Z_S = \{z^{l,h}_S\}_{(l,h) \in P_S}$, respectively.
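As a concrete illustration, the logit-lens importance score of Eqs. 5-7 for a single image can be sketched in NumPy. The array shapes and function names below are our own assumptions for illustration, not the authors' code:

```python
import numpy as np

def head_importance_one_hot(head_states, image_proj, text_embs, y, y_bar):
    """Per-head logit-lens importance (Eqs. 5-7) for a single image.

    head_states: (L, H, d) per-head contributions to the [CLS] state
    image_proj:  (d, k) image projection matrix P_I
    text_embs:   dict mapping class name -> (k,) projected text embedding
    Returns a one-hot (L, H) matrix marking the head with the largest
    logit difference LL(l, h, y) - LL(l, h, y_bar).
    """
    projected = head_states @ image_proj                          # P_I(z^{l,h}), (L, H, k)
    V = projected @ text_embs[y] - projected @ text_embs[y_bar]   # Eq. 6, (L, H)
    one_hot = np.zeros_like(V)
    one_hot[np.unravel_index(np.argmax(V), V.shape)] = 1.0        # Eq. 7
    return one_hot

# Averaging the one-hot matrices over a dataset yields the sparse V_(y):
# V = np.mean([head_importance_one_hot(z, P_I, T, y, yb) for z in dataset], axis=0)
```

Averaging these one-hot matrices over the dataset produces the sparse importance matrix $V_{(y)}$ with $K$ non-zero positions $P^*$ described above.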
Note that both $Z_{S/Y}$ and $V_{S/Y}$ refer to $\hat{z}$ and $V_{(y)}$, but are denoted differently to symbolize the type of representation. $P_Y$ and $P_S$ are subsets of positions within $P^*$, and $\epsilon$ represents minimal contributions from MLP layers and attention states outside of $P^*$. The spurious coefficient $a_{sy}$ is $-1$ if $\{s, y^*\} \in G_N$, and $1$ otherwise. The objective here is to identify $P_Y$ and $P_S$. Additionally, we observed a separate representation encoding the association between $Z_Y$ and $Z_S$, as discussed further in Sec. 6. The definitions of $V_Y$ and $V_S$ are introduced as:

Definition 4.1. $V_Y$ represents the direct contribution of $Z_Y$, and $V_S$ that of $Z_S$, towards predicting $\hat{y} = y^*$ over $\bar{y} \neq y^*$, such that the following behavior can be observed:

$$\mathbb{E}_{V_Y \sim G_N}(V_Y \mid \hat{y} = y^*, a_{sy} = -1) > 0, \qquad \mathbb{E}_{V_S \sim G_N}(V_S \mid \hat{y} = y^*, a_{sy} = -1) < 0. \quad (9)$$

Def. 4.1 states that when $\hat{y}$ is correctly predicted as $y^*$, the expected contribution of $Z_Y$ is positive, while that of $Z_S$ is negative for samples in $G_N$. The opposite holds when $\hat{y} \neq y^*$. This implies that $V_S$ negatively impacts the prediction of the correct class when the model is influenced by spurious bias, i.e., $a_{sy} = -1$. We further divide $G_P$ into $\{G_{PW}, G_{PC}\}$ and $G_N$ into $\{G_{NW}, G_{NC}\}$, where $W$ and $C$ denote the wrongly and correctly classified subsets, respectively. In $G_P$, $a_{sy}$ is positive, and both class and spurious attributes may contribute equally towards $V_y$. We instead focus on $G_N$, as $a_{sy} = -1$ can have more pronounced effects on either contribution.

Spurious and target contributions. We formulate the contrastive solution to isolate $V_S$ as:

$$V_S = \sigma(V_{NW} - V_{NC}), \quad (10)$$

where $V_{NW}$ and $V_{NC}$ refer to measuring $V_{(y^*)}$ as in Sec. 4.1 by replacing $D$ with $G_{NW}$ and $G_{NC}$, respectively, and
setting $(y) = y^*$. The mask $\sigma(V) = \mathbb{1}(V > 0)$ filters out any negative contributions. In Eq. 6, $V_{NW} < 0$ since $(y) = y^* \neq \hat{y}$, and vice versa for $V_{NC} > 0$. Referencing Eq. 8, this leaves $V_{NW} = V_Y - V_S + \epsilon < 0$ and $V_{NC} = V_Y - V_S + \epsilon > 0$, as $a_{sy} = -1$. This leads to $V_S > V_Y$ for $V_{NW}$ and the opposite for $V_{NC}$. Additionally, both $V_{NW}$ and $V_{NC}$ are normalized to 1, leading to $V_S|_{NW} > V_S|_{NC}$ and $V_Y|_{NW} < V_Y|_{NC}$. Thus, Eq. 10 isolates the positive $V_S$ terms while filtering out the negative $V_Y$ terms. By symmetry, swapping the two terms in Eq. 10 recovers $V_Y$.

Intuition. The key intuition is that the model's susceptibility to spurious attributes is more pronounced in incorrect predictions with $a_{sy} = -1$. When the model makes correct predictions under negatively spurious conditions, it is likely because the influence of $Y$-relevant states outweighs the contribution of $S$-states, as shown under Def. 4.1. However, we find that a threshold of $0$ may be insufficiently robust against noisy contributions that encode neither $Y$ nor $S$. To select the top contributing states, we instead replace the mask with $\sigma = \mathbb{1}(V > \gamma)$, with $\gamma = 1 / |P^*_{G_{NW}} \cup P^*_{G_{NC}}|$, where $P^*_{G_{NW}}$ and $P^*_{G_{NC}}$ denote deriving $P^*$ on the respective sub-groups only. We find that targeting only the top state for debiasing can still achieve significant improvements; see Fig. 10 and Appendix B.3.

4.3 Debiasing Attention States

This section discusses strategies to reduce spurious associations in CLIP and enhance performance in the worst-performing groups, i.e., $G_N$. As demonstrated in Sec. 4.2, states encoding $S$ can be identified, as they act as adversarial effects in $G_N$.

Algorithm 1: Locate-Then-Correct
1: Input: decomposed states $\{\hat{z}_i\}_{i=1}^{M}$, class positions $P_Y$, spurious positions $P_S$, class vectors $\{u_i\}_{i=1}^{N_u}$
2: for $(l, h) \in P_S$ do
3:   $\hat{z}^{l,h} \leftarrow \frac{1}{M} \sum_i \hat{z}^{l,h}_i$
4: end for
5: for $i = 1$ to $N_u$ do
6:   for $(l, h) \in P_Y$ do
7:     $\hat{z}^{l,h} \leftarrow \hat{z}^{l,h} + u_i \frac{\langle \hat{z}^{l,h}, u_i \rangle}{\langle u_i, u_i \rangle}$
8:   end for
9: end for
10: Return: debiased states $\{\hat{z}_i\}_{i=1}^{M}$

Spurious ablation. A straightforward solution is to eliminate these effects from the identified states.
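A minimal NumPy sketch of Algorithm 1, combining mean-ablation of the spurious heads with an orthogonal-projection injection on the target heads, is shown below. The array layout and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def locate_then_correct(states, spurious_pos, target_pos, class_vectors):
    """Illustrative sketch of Algorithm 1 (LTC).

    states:        (M, L, H, d) decomposed head states for M images
    spurious_pos:  list of (layer, head) pairs P_S to mean-ablate
    target_pos:    list of (layer, head) pairs P_Y for knowledge injection
    class_vectors: list of (d,) discriminative directions u_i
    """
    out = states.copy()
    # Lines 2-4: replace each spurious head state with its dataset mean
    for l, h in spurious_pos:
        out[:, l, h] = out[:, l, h].mean(axis=0)
    # Lines 5-9: add each state's projection onto every direction u_i
    for u in class_vectors:
        for l, h in target_pos:
            coef = out[:, l, h] @ u / (u @ u)    # <z, u> / <u, u>, shape (M,)
            out[:, l, h] += np.outer(coef, u)    # z <- z + u * coef
    return out
```

Note that both corrections are sample-independent: the spurious heads are set to a single dataset-wide mean, and the same projection is applied to every image.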
We use mean-ablation (MA) [Nanda et al., 2023], setting each attention state in $Z_S$ to its mean value over the dataset, $\hat{z}_S^{l,h} = \frac{1}{M} \sum_{i=1}^{M} \hat{z}^{l,h}_{S,i}$. We did not find any difference between mean and zero ablation.

Knowledge injection on target states. To further enhance the class-discriminative properties of the identified class states $Z_Y$, we leverage LLMs, which have demonstrated significant potential in generating class-discriminative features through prompting [Adila et al., 2023, Menon and Vondrick, 2022, Yang et al., 2023b]. We follow the same strategy as Adila et al. [2023] and prompt GPT-4o (https://openai.com/index/hello-gpt-4o/) to generate $N_u$ text features per class, using a prompt such as "List the visual differences between waterbirds and landbirds". This gives us text insights $s_y, s_{\bar{y}}$ (i.e., "water background", "land background"). The discriminative vectors $\{u_i\}_{i=1}^{N_u}$ are then obtained by taking the normalized difference: $u = (f_T(s_y) - f_T(s_{\bar{y}})) / \|f_T(s_y) - f_T(s_{\bar{y}})\|$, where $f_T = P_T(E_T)$. The selected attention states are then projected onto the discriminative vectors [Adila et al., 2023] before being added back, a process we refer to as Knowledge Injection (KI): $z = z + u_i \langle z, u_i \rangle / \langle u_i, u_i \rangle$. The debiased states are then aggregated to form $E_I(I)$. Existing works [Adila et al., 2023, Chuang et al., 2023] perform the projections on either the overall text $E_T(y)$ or image $E_I(I)$ representation space, whereas we propose to do
so on specific attention states, hence the name of our framework, Locate-Then-Correct. We detail the debiasing framework in Alg. 1. Note that MA and KI are sample-independent and applied in the same manner across the full inference dataset.

5 Experimental Results

This section presents the empirical results of our debiasing framework on datasets exhibiting various spurious correlations. We focus on datasets where spurious bias arises inherently in the model without parameter tuning, meaning the model does not develop bias toward spurious attributes due to imbalanced training data. Our analysis primarily addresses two types of biases: background-object class associations and gender-occupation correlations.

5.1 Experimental Setting

Dataset - Background Bias. To evaluate robustness against background bias, we use the Waterbirds (WB) dataset [Sagawa et al., 2019], a binary classification task where zero-shot performance shows a significant gap between positive and negative subgroups. We report worst-group accuracy (WG), average accuracy (Avg), and the gap (Gap) between Avg and WG. We also consider a multi-class dataset, CounterAnimal (CA) [Wang et al., 2024], which includes an easy subset ($D_E$) and a hard subset ($D_H$). We evaluate on $D_H$, using $D_E$ as a baseline. Since we are evaluating on $G_N$ itself, we re-use $Z_S$ from WB. Avg refers to the accuracy on $D_H$, and Gap is between the easy and hard sets. Unlike the binary case in Waterbirds, the multi-class setting complicates the relationship between $y$ and $\bar{y}$, posing challenges for KI. To address this, we optimize the mapping $y : \bar{y}$ for each class using $D_E$ and apply the same settings to the baselines, as discussed further in Appendix A.1.

Dataset - Gender Bias. GenderBias-VL [Xiao et al., 2024] consists of artificially generated images of working-class adults across 177 occupations, with both genders represented for each occupation. We study them across two tasks: occupation classification and image retrieval.
For classification, we use the Bias metric, which measures the accuracy difference between gender groups: $\frac{1}{|O|} \sum_{i}^{|O|} |\mathrm{Acc}(O_i \mid g = g_0) - \mathrm{Acc}(O_i \mid g = g_1)|$, where $O$ represents the occupations and $g_0, g_1$ denote the male and female subgroups, respectively. We randomly select 25 occupations for optimization and evaluate on the remaining 152. We select the top 10 biased occupations as $G_N$. We find that certain occupations are highly biased, and denote the top 10 occupations as WG. For the retrieval task, we use the MaxSkew@K metric, defined as $\max_{g \in G} \log \frac{r_{g,k}}{1/|G|}$, where $r_{g,k}$ is the ratio of the top $k$ images labeled with a specific gender. Additionally, we assess generalization performance on FairFace [Kärkkäinen and Joo, 2019], following the same settings as Chuang et al. [2023]. While CelebA [Liu et al., 2015] is commonly studied for gender bias, we do not find significant bias present in zero-shot CLIP [Yang et al., 2023a] and leave it out of our evaluation. More details are provided in Appendix A.1.

Baselines. We evaluate OpenCLIP [Ilharco et al., 2021] across three different sizes. We compare LTC against non-parameter-tuning baselines including Zero-Shot (ZS), TextSpan [Gandelsman et al., 2023], Roboshot (RS) [Adila et al., 2023], and Ortho-Cali [Chuang et al., 2023]. Both Roboshot and Ortho-Cali perform debiasing via orthogonal projection on the overall input representation but differ in the modality: image (Roboshot) and text (Ortho-Cali). TextSpan
constructs text representations per attention head and manually interprets them for debiasing. Since the baselines do not assume access to group labels, we additionally report LTC's performance given zero-shot predicted group labels, LTC-$\hat{S}$. While LTC is designed to address non-tuning methods, we find that it also serves as a simple and effective extension to parameter-tuning approaches. We apply LTC on top of the classifier trained with JTT [Liu et al., 2021] (JTT-LTC) and compare against JTT, an empirical risk minimization probe (ERM Probe), and Contrastive Adapters (Cont Adapter) [Zhang and Ré, 2022]. We use a 2-layer non-linear probe for all baselines except Cont Adapter, where we follow the original settings in their work. Note that we only assume access to ground-truth group labels for the validation set, following Cont Adapter. More implementation details are in Appendix A.2.

Table 1: Results on background bias. Bolded represents the best method, while underline refers to the second best. Metrics: WG (↑), Avg (↑), Gap (↓). Waterbirds cells give WG / Avg / Gap; CounterAnimal cells give Avg / Gap (WG is not reported).

Binary dataset: Waterbirds

| Method | ViT-B/16 | ViT-L/14 | ViT-H/14 |
|---|---|---|---|
| Non-parameter-tuning methods | | | |
| ZS | 49.7 / 72.1 / 22.4 | 44.5 / 72.3 / 27.8 | 50.3 / 69.5 / 19.2 |
| TextSpan | 62.3 / 76.7 / 14.4 | 61.8 / 78.5 / 16.7 | 58.7 / 71.5 / 12.8 |
| Ortho-Cali | 68.1 / 73.3 / 5.2 | 73.3 / 78.6 / 5.3 | 18.1 / 41.9 / 23.8 |
| Roboshot | 57.5 / 63.0 / 5.5 | 70.5 / 79.0 / 8.5 | 60.7 / 71.6 / 8.9 |
| LTC-Ŝ (ours) | 61.8 / 72.0 / 10.2 | 75.5 / 84.0 / 8.5 | 73.7 / 77.4 / 3.7 |
| LTC (ours) | 73.3 / 74.6 / 1.3 | 74.6 / 84.1 / 9.4 | 71.3 / 74.9 / 3.6 |
| Parameter-tuning methods | | | |
| ERM Probe | 35.2 / 78.5 / 43.3 | 62.5 / 86.3 / 23.9 | 56.2 / 85.2 / 29.0 |
| Cont Adapter | 83.2 / 86.9 / 3.7 | 83.7 / 89.3 / 5.6 | 87.5 / 91.3 / 3.8 |
| JTT | 72.3 / 87.2 / 14.9 | 81.6 / 90.9 / 9.3 | 85.7 / 91.4 / 5.8 |
| JTT-LTC (ours) | 86.4 / 91.2 / 4.8 | 89.6 / 92.6 / 3.0 | 89.1 / 92.8 / 3.7 |

Multi-class dataset: CounterAnimal

| Method | ViT-B/16 | ViT-L/14 | ViT-H/14 |
|---|---|---|---|
| ZS | 54.8 / 21.1 | 66.1 / 15.8 | 72.7 / 14.7 |
| TextSpan | 52.6 / 23.3 | 62.5 / 19.4 | 71.4 / 16.0 |
| Ortho-Cali | 52.9 / 22.9 | 60.0 / 21.9 | 70.4 / 17.0 |
| Roboshot | 53.5 / 22.3 | 65.8 / 16.1 | 72.1 / 15.3 |
| LTC (ours) | 55.2 / 20.7 | 66.3 / 15.6 | 73.8 / 13.6 |
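The WG, Avg, and Gap metrics reported for Waterbirds can be computed as in the sketch below (an illustration under our own assumptions about the data layout: per-sample predictions, labels, and group assignments):

```python
import numpy as np

def group_metrics(preds, labels, groups):
    """Worst-group (WG) accuracy, average (Avg) accuracy, and Gap = Avg - WG,
    all in percent. `groups` assigns each sample to a subgroup."""
    preds, labels, groups = map(np.asarray, (preds, labels, groups))
    accs = [100.0 * (preds[groups == g] == labels[groups == g]).mean()
            for g in np.unique(groups)]
    avg = 100.0 * (preds == labels).mean()   # sample-level average accuracy
    wg = min(accs)                           # accuracy of the worst subgroup
    return wg, avg, avg - wg
```

Note that papers sometimes report Avg as the unweighted mean over groups rather than the sample-level mean; the sample-level variant is used here as one plausible convention.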
Results - Background Bias. As observed in Tab. 1, LTC surpasses all non-tuning baselines, bridging both the gap between groups and overall performance; similar findings can be observed for LTC-$\hat{S}$, with the exception of ViT-B. Despite being a lightweight extension to standard fine-tuning, JTT-LTC significantly outperforms both Cont Adapter and JTT, with fewer hyperparameters than Cont Adapter. In contrast, ERM exhibits strong bias toward majority groups. On CA, LTC is the only method that consistently improves over zero-shot CLIP, while others fail. We attribute Roboshot's failure to its reliance on LLMs for identifying spurious directions, which are often noisier than class-discriminative ones. Empirically, we find that approaches that debias the overall representation are generally less effective than targeted interventions. Crucially, LTC's ability to localize contributions within $Z_Y$ and $Z_S$ improves interpretability (see Sec. 6). Ortho-Cali consistently underperforms, likely due to the complexity of solving a projection that equalizes representations across multiple spurious and target classes, which is challenging in multi-class settings.

Results - Gender Bias. Table
2 shows that zero-shot CLIP exhibits strong gender bias in certain occupations, with worst-group gaps significantly exceeding the average. In classification, Ortho-Cali performs comparably to LTC, except on ViT-B, where LTC clearly outperforms all baselines. In retrieval, TextSpan is competitive with LTC. Overall, LTC proves highly effective in mitigating gender bias across both classification and retrieval tasks. Similar to CA, we find that LLMs are often unreliable for identifying spurious features, as seen in Roboshot's failure, likely due to safeguards that prevent the model from flagging gender as a potential source of bias. All 3 models predict $\hat{S}$ with 99% accuracy, with LTC-$\hat{S}$ yielding the same performance as LTC. Tab. 3 shows the results of retrieving FairFace images pertaining to sensitive concepts, e.g., "evil", using the prompt "A photo of a [concept] person" [Chuang et al., 2023] (see Tab. 4). We apply the same optimization settings, i.e., ZS for LTC/TextSpan and text embeddings for Ortho-Cali, from Genderbias-VL without further tuning. The evaluation, based on annotated gender attributes, measures MaxSkew@1000 averaged across the concept set. The results demonstrate that LTC

Table 2: Results on Genderbias-VL - gender bias on occupation. $B_T$ refers to the 10 occupations with the highest discrepancy in classification between genders, chosen via zero-shot inference of the respective model. $B_O$ refers to bias across all occupations. $M_T$ and $M_O$ are the MaxSkew@10 of the top 10 and overall occupations, respectively. The objective is to achieve a low score across all metrics. Values range between 0 and 100.
| Method | ViT-B/16 B_T / B_O / M_T / M_O | ViT-L/14 B_T / B_O / M_T / M_O | ViT-H/14 B_T / B_O / M_T / M_O |
|---|---|---|---|
| ZS | 74.0 / 16.3 / 38.2 / 29.9 | 54.7 / 12.9 / 40.5 / 27.1 | 67.7 / 15.2 / 36.1 / 28.3 |
| TextSpan | 42.2 / 11.0 / 29.4 / 23.3 | 42.7 / 10.2 / 34.3 / 25.4 | 57.8 / 12.7 / 33.4 / 26.1 |
| Ortho-Cali | 36.2 / 10.2 / 39.3 / 25.7 | 29.7 / 9.2 / 36.8 / 25.8 | 32.5 / 9.4 / 33.1 / 22.7 |
| Roboshot | 65.2 / 18.3 / 38.3 / 30.1 | 52.6 / 14.0 / 40.0 / 29.1 | 65.1 / 16.4 / 35.4 / 28.3 |
| LTC (OURS) | 10.0 / 9.4 / 17.2 / 21.9 | 18.0 / 9.8 / 20.4 / 21.4 | 27.4 / 10.1 / 24.5 / 22.6 |

Table 3: MaxSkew@1000 results on FairFace - spurious relationship between gender and sensitive attributes. LTC (MA) only performs mean ablation on spurious states, without class-discriminative enhancement, as there are no target classes.

| Method | ViT-B/16 | ViT-L/14 | ViT-H/14 |
|---|---|---|---|
| ZS | 31.3 | 30.1 | 13.3 |
| TextSpan | 16.8 | 26.2 | 13.9 |
| Ortho-Cali | 23.8 | 21.4 | 17.8 |
| LTC (MA) | 16.8 | 26.2 | 12.0 |

can automatically identify gender states similar to those found by TextSpan, without manual effort or external knowledge of the spurious information. This highlights the ability of our approach to effectively filter attention states linked to spurious associations.

5.2 Ablation Studies

In this section, we study the improvements from the modular components of LTC. LTC (MA) only performs mean ablation, without knowledge injection; the opposite is true for LTC (KI). LTC (R) performs both MA and KI on random states. We also include RS without removal of spurious features - RS (KI), which is equivalent to LTC (KI) but differs in where KI is applied. Tab. 6 illustrates the effectiveness of localized debiasing, with LTC (KI) performing substantially better. While MA improves over zero-shot settings, it proves insufficient compared to KI. The poor performance of LTC (R) highlights the importance of correctly identified states. Overall, combining ablation and
knowledge injection on optimally identified states yields the best results. Note that we do not perform any cross-validation to optimize $\gamma$, which we think may further improve performance. Fig. 10 shows that LTC can locate the optimal states using as little as 20% of the labels. We provide more ablation studies in B.3.

6 Interpreting Attention States

Spurious Association. In Waterbirds, we found that there exist states encoding the knowledge of associating $S$ with $Y$, rather than $S$ alone, which we refer to as $SY$. To investigate this, we modify the task to classify the spurious attribute (background) instead of the target (bird) and change Eq. 10 to $\sigma(V_{NC})$. This locates contributions corresponding to classifying $S$ instead of $Y$. We treat the newly found states as the actual $Z_S$ and the previous $Z_S$ as encoding the spurious association, $Z_{SY}$. Fig. 14 shows the highest contribution is located in a different state, L11H16, instead of L10H10 in Fig. 12. The left figure of Fig. 4 reveals that ablating $Z_S$ causes a much steeper drop in the classification of $S$ as compared to $Z_{SY}$. In TextSpan, the user is required to annotate the representation of each head, and it is unclear if the heads encode $SY$ or $S$. Most heads are interpreted as $S$, and ablating them would erase the knowledge of $S$ directly.

Figure 2: Image visualization: localized representations of $Z_{SY}$, $Z_Y$, $Z_S$, and the overall image.

Additionally, we study the effects of ablation on a more difficult task: predicting $S$ and $Y$ concurrently, i.e., "A photo of a landbird on ocean". As observed in the right figure of Fig. 4, the low performance drop indicates an important finding: $Z_{SY}$ represents knowledge of associating $S$ with $Y$ rather than $S$ and $Y$. We hypothesize that this discovery stems from the contrastive approach finding reasons behind a wrong prediction, which in this case correctly refers to the model overly associating the background with the bird class.
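The locate-then-correct step discussed in these ablations can be sketched as follows. This is an illustrative simplification under our own assumptions (per-head states as rows of an array, a single text-derived class direction, a scalar gain `alpha`), not the paper's exact update:

```python
import numpy as np

def ltc_correct(head_states, spurious_idx, target_idx, mean_states, class_dir, alpha=1.0):
    """Minimal LTC-style sketch:
    - mean-ablate spurious attention states (MA): replace them with dataset means,
    - knowledge injection (KI): amplify a class-discriminative direction on target states,
    - sum all head states into the final image representation.
    head_states: (H, d) per-head contributions; mean_states: (H, d) dataset means;
    class_dir: (d,) text-derived direction, e.g. the difference of class embeddings."""
    states = head_states.copy()
    states[spurious_idx] = mean_states[spurious_idx]   # mean ablation (MA)
    d = class_dir / np.linalg.norm(class_dir)
    for i in target_idx:                               # knowledge injection (KI)
        states[i] = states[i] + alpha * (states[i] @ d) * d
    return states.sum(axis=0)                          # aggregate into image rep
```

Setting `spurious_idx` or `target_idx` to an empty list recovers the LTC (KI) and LTC (MA) ablation variants, respectively.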
However, we do not find similar observations regarding gender bias, as $Z_S$ overlaps entirely with $Z_{SY}$. This may be because the background occupies a larger portion of the image, while gender is typically an intrinsic feature that occupies less visual space (see D.2).

Figure 3: Normalized SHAP values for $Y$ and $S$.

Text Interpretation. TextSpan grounds visual representations onto generic text statements, which are not task-specific. Instead, we use GPT-4o to generate captions $c$ for each image and apply SHAP [Lundberg, 2017] to the identified states $Z_S$, $Z_{SY}$, $Z_Y$. SHAP assigns an importance score $\phi$ to each text token in the caption. The prediction logit is the similarity score between the sum over specific states and the text embedding: $\phi_r = \left\langle \sum_{i}^{|Z_r|} z_{r,i},\; P_T(E_T(c)) \right\rangle \in \mathbb{R}^{T}$, where $r$ represents the encoded attribute type. The tokens are annotated by attribute ($S$ or $Y$), and the importance for each attribute is normalized over the caption length and averaged across the tokens belonging to each attribute set. In Fig. 3, both $Z_Y$ and $Z_S$ allocate high importance to the attribute they represent and low importance to the other, while $Z_{SY}$ combines elements of both. This is aligned with Fig. 11, which lists the top text features for each state: $Z_Y$ mainly encodes species and various features of birds, while $Z_S$ corresponds
to habitat descriptions.

Image Interpretation. We aggregate over the head and layer positions instead of the token positions in Eq. 4, $\sum_{(l,h) \in P_r} \tilde{z}_{l,h} \in \mathbb{R}^{N \times d}$, before deriving the prediction logit [Gandelsman et al., 2023]. Fig. 2 illustrates the magnitude of each pixel towards the prediction of "landbird" for $\{Z_{SY}, Z_Y, Z_S, P_I(E_I(I))\}$. Similar to the text-based interpretations, $Z_{SY}$ shows high importance in patches corresponding to both the target class and the background. In contrast, $Z_Y$ focuses more on the target class and is less noisy than the overall representation, making it more effective for knowledge injection. We present more findings in D.

7 Discussion

In this work, we propose our framework, Locate-Then-Correct (LTC), for debiasing CLIP models by identifying and correcting spurious associations in attention heads. By leveraging the linear decomposability of ViT representations, LTC enables fine-grained localization of the attention states responsible for encoding spurious and target attributes. We found that implementing orthogonal projection on localized states yields superior results compared to existing works that operate on the overall representation space. LTC, when used as a lightweight extension to existing fine-tuning methods, yields promising improvements. Furthermore, LTC provides an interpretable lens into the intermediate representations, enabling an explanation of why our debiasing measures work.

Limitations. A key limitation of our work is the likely sub-optimality of the method used to identify spurious and target attention states. While we observe strong results without tuning the masking threshold, $\lambda$, further gains could be achieved through its optimization. Additionally, our study focuses solely on CLIP models; extending LTC to generative models such as diffusion models remains an exciting direction for future work.

References

Dyah Adila, Changho Shin, Linrong Cai, and Frederic Sala.
Zero-shot robustification of zero-shot models with foundation models. arXiv preprint arXiv:2309.04344, 2023.

Ibrahim Alabdulmohsin, Xiao Wang, Andreas Steiner, Priya Goyal, Alexander D'Amour, and Xiaohua Zhai. CLIP the bias: How useful is balancing data in multimodal learning? arXiv preprint arXiv:2403.04547, 2024.

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 117(48):30071–30078, 2020.

Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, and Stefanie Jegelka. Debiasing vision-language models via biased prompts. arXiv preprint arXiv:2302.00070, 2023.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.

Alexey Dosovitskiy. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Yu Du, Fangyun Wei, Zihe Zhang, Miaojing Shi, Yue Gao, and Guoqi Li. Learning to prompt for open-vocabulary object detection with vision-language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14084–14093, 2022.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark,
Jared Kaplan, Sam McCandlish, and Chris Olah. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformer-circuits.pub/2021/framework/index.html.

Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition. Transformer Circuits Thread, 2022. https://transformer-circuits.pub/2022/toy_model/index.html.

Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, and Yonglong Tian. Improving CLIP training with language rewrites. Advances in Neural Information Processing Systems, 36, 2024.

Yossi Gandelsman, Alexei A Efros, and Jacob Steinhardt. Interpreting CLIP's image representation via text-based decomposition. arXiv preprint arXiv:2310.05916, 2023.

Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. Multimodal neurons in artificial neural networks. Distill, 2021. doi: 10.23915/distill.00030. https://distill.pub/2021/multimodal-neurons.

Siobhan Mackenzie Hall, Fernanda Gonçalves Abrantes, Hanwen Zhu, Grace Sodunke, Aleksandar Shtedritski, and Hannah Rose Kirk. Visogender: A dataset for benchmarking gender bias in image-text pronoun resolution. Advances in Neural Information Processing Systems, 36, 2024.

Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, et al. MiniCPM: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024.

Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. OpenCLIP, 2021. URL https://doi.org/10.5281/zenodo.5143773.
Nick Jiang, Anish Kachinthaya, Suzie Petryk, and Yossi Gandelsman. Interpreting and editing vision-language representations to mitigate hallucinations. arXiv preprint arXiv:2410.02762, 2024.

Kimmo Kärkkäinen and Jungseock Joo. FairFace: Face attribute dataset for balanced race, gender, and age. arXiv preprint arXiv:1908.04913, 2019.

Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, and Jinwoo Shin. Discovering and mitigating visual biases through keyword explanation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11082–11092, 2024.

Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In International Conference on Machine Learning, pages 6781–6792. PMLR, 2021.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.

Scott Lundberg. A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874, 2017.

Rui Mao, Qian Liu, Kai He, Wei Li, and Erik Cambria. The biases of pre-trained language models: An empirical study on prompt-based sentiment analysis and emotion detection. IEEE Transactions on Affective Computing, 14(3):1743–1753, 2023.

Sachit Menon and Carl Vondrick. Visual classification via description from large language models. arXiv preprint arXiv:2210.07183, 2022.

Mohammad Nadeem, Shahab Saquib Sohail, Erik Cambria, Björn W Schuller, and Amir Hussain. Gender bias in text-to-video generation models: A case study of Sora. IEEE Intelligent Systems, 40(3), 2025.

Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin.
Learning from failure: De-biasing classifier from biased classifier. Advances in Neural Information Processing Systems, 33:20673–20684, 2020.

Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217, 2023.

Nostalgebraist. Interpreting GPT: The logit lens, 2020. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens. Accessed: 2024-11-27.

Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. Distill, 2017. doi: 10.23915/distill.00007. https://distill.pub/2017/feature-visualization.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.

Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, and Antonio Torralba. A multimodal automated interpretability agent. In Forty-first International Conference on Machine Learning, 2024.

Mainak Singha, Ankit Jha, Shirsha Bose, Ashwin Nair, Moloud Abdar, and Biplab Banerjee. Unknown prompt the only lacuna: Unveiling CLIP's potential for open domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13309–13319, 2024.

Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. No subclass left behind: Fine-grained robustness in coarse-grained classification problems. Advances in Neural Information Processing Systems, 33:19339–19352, 2020.

A Vaswani.
Attention is all you need. Advances in Neural Information Processing Systems, 2017.

Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. Causal mediation analysis for interpreting neural NLP: The case of gender bias. arXiv preprint arXiv:2004.12265, 2020.

Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, and Tong Zhang. A sober look at the robustness of CLIPs to spurious features. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959–7971, 2022.

Yisong Xiao, Aishan Liu, QianJia Cheng, Zhenfei Yin, Siyuan Liang, Jiapeng Li, Jing Shao, Xianglong Liu, and Dacheng Tao. GenderBias-VL: Benchmarking gender bias in vision language models via counterfactual probing. arXiv preprint arXiv:2407.00600, 2024.

Yu Yang, Besmira Nushi, Hamid Palangi, and Baharan Mirzasoleiman. Mitigating spurious correlations in multi-modal models during fine-tuning. In International Conference on Machine Learning, pages 39365–39379. PMLR, 2023a.

Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, and Mark Yatskar. Language in a bottle: Language model guided concept bottlenecks for interpretable image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19187–19197, 2023b.

Wei Jie Yeo, Ranjan Satapthy, and Erik Cambria. Towards faithful natural language explanations: A study using activation patching in large language models. arXiv preprint arXiv:2410.14155, 2024.

Beichen Zhang,
Pan Zhang, Xiaoyi Dong, Yuhang Zang, and Jiaqi Wang. Long-CLIP: Unlocking the long-text capability of CLIP. In European Conference on Computer Vision, pages 310–325. Springer, 2025.

Michael Zhang and Christopher Ré. Contrastive adapters for foundation model group robustness. Advances in Neural Information Processing Systems, 35:21682–21697, 2022.

A Experiment Information

A.1 Datasets

Table 4: Dataset information: Val/Test size for $|G_N|$ and $|D|$. The sizes for the sub-groups in GenderBias are conditioned on the occupations.

| Dataset | Y | S | G_N | Val/Test size of G_N | Val/Test size of D | License |
|---|---|---|---|---|---|---|
| Waterbirds | {landbird, waterbird} | {land, water} | {landbird in water, waterbird in land} | 240 / 289 | 4795 / 5794 | MIT license |
| CounterAnimal | 45 ImageNet classes | {Snow vs Grass, Green vs Blue, ...} | {Polar bear on snow, White Brambling, ...} | – / 5926 | 7408 / 5926 | Unavailable |
| GenderBias | 177 female/male stereotypes | {female, male} | 10 worst occupations | – | 814 / 5093 | CC BY-NC-4.0 |
| FairFace | {good, evil, smart, dumb, attractive, unattractive, lawful, criminal, friendly, unfriendly} | {female, male} | – | – | 10954 | CC BY 4.0 |

Waterbirds. Tab. 4 contains the details of the datasets used. In Waterbirds, the size of the validation set is skewed towards the positive set. In B.3, we show that for certain model sizes, convergence towards the optimal set of states can be achieved with a small sample size, < 50. We use the template "A photo of a [class]" across all datasets.

CounterAnimal. The dataset is divided into two subsets: 'easy' and 'hard'. The 'easy' subset contains examples of animals in their commonly observed habitats, while the 'hard' subset includes examples in rare or atypical habitats. For instance, a "polar bear on snow" represents an 'easy' example, whereas a "polar bear on grass" constitutes a 'hard' example. The objective is to minimize the classification gap between these subsets. The full dataset contains 45 ImageNet classes; however, we set $Y$ to the full ImageNet classes, $|Y| = 1000$.
The multi-class nature of this task introduces a unique challenge: determining the appropriate counterfactual label $\bar{y}$ for a given class label $y$. Unlike binary classification, where a single $(y, \bar{y})$ pairing suffices for KI, multi-class tasks involve multiple pairings for each class, and it is unclear which pairing to use for each image without prior knowledge of its class. To address this issue, we first construct a dictionary for the 45 classes by identifying the most frequently misclassified class for each target. For example, if the class "polar bear" is frequently misclassified as "arctic fox", this pairing is recorded in the dictionary. Pairings are recorded with their counts, generating a nested dictionary whose keys point to the CA classes and whose inner dictionaries correspond to misclassified ImageNet classes. Next, we generate pseudo-labels, $y_p$, using zero-shot predictions to reference the dictionary. If $y_p$ corresponds to either the key or the value in the dictionary, we retrieve the text features associated with that pair. To limit the possible text feature pairings, which can be numerous, we pair each of the CA classes (outer key) to the misclassified class with the highest count. Thus, each pseudo-label corresponds to one of the CA classes, which is used to retrieve the text features. However, this introduces a
limitation in the event that the pairing between the pseudo-label and the CA class does not correspond to the actual text feature pairing, e.g., a pseudo-label of "seal" maps to the CA class "polar bear", but the text pairing is "polar bear - arctic fox". However, we find that this can still be beneficial, by endowing the model with knowledge of discriminatory features related to the CA class. Though it is possible to generate text features for all pairings, we leave investigation of this to future work. It is important to note that we do not limit the predictions to only the CA classes; we only retrieve text features limited to them. The set of classes normalized over by the classifier is still the full ImageNet classes. Overall, this process can be interpreted as a refinement stage: an initial prediction is made, followed by the injection of discriminative features to improve classification accuracy. This methodology is similarly applied to RoboShot and Ortho-Cali.

GenderBias. In the GenderBias dataset, each image is linked to a target occupation $y$, such as "Pilot", and is annotated as stereotypically male or female based on data from the U.S. Bureau of Labor Statistics (BLS)². The alternative class, $\bar{y}$, represents an occupation stereotypically associated with the opposite gender, such as "Flight Attendant". All occupations in the dataset include samples from both genders, and the bias metric measures accuracy discrepancies between them. In the original dataset, each occupation is paired with multiple correlated occupations. We instead choose the occupation with the highest proportion of workers from the opposite gender. For example, for Flight Attendant, Pilot is chosen over Airline Manager if it has a higher male labor force representation. Certain occupations exhibit stronger gender bias in CLIP. To simulate $G_N$, we select the 10 occupations with the highest bias scores during zero-shot inference.
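The confusion-dictionary construction and pseudo-label lookup described above can be sketched as follows (a minimal sketch; class names and helper names are illustrative, not from the paper's code):

```python
from collections import Counter, defaultdict

def build_confusion_dict(pred_labels, true_labels, ca_classes):
    """For each CounterAnimal class, count which other class it is most often
    confused with, then keep only the single most frequent confusion."""
    counts = defaultdict(Counter)
    for y_hat, y in zip(pred_labels, true_labels):
        if y in ca_classes and y_hat != y:
            counts[y][y_hat] += 1
    return {y: c.most_common(1)[0][0] for y, c in counts.items() if c}

def lookup_pair(pseudo_label, pair_dict):
    """Return the (CA class, confused class) text-feature pair for a zero-shot
    pseudo-label, if it matches either side of a recorded pairing."""
    for ca_cls, conf_cls in pair_dict.items():
        if pseudo_label in (ca_cls, conf_cls):
            return (ca_cls, conf_cls)
    return None
```

For example, if "polar bear" images are predicted mostly as "arctic fox", the dictionary records the pair ("polar bear", "arctic fox"), and a pseudo-label matching either name retrieves that pair's text features.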
Across the three models analyzed, consistent patterns emerge, with occupations such as "Lawyer" and "Chief Executive" being strongly associated with males. During optimization over $P_Y$ and $P_S$, we use the top 10 occupations with the highest bias within the validation set.

FairFace. FairFace comprises images of occupation-neutral faces annotated with gender. Following the settings in [Chuang et al., 2023], we prompt CLIP to retrieve the top $K$ images associated with each concept class in $Y$. MaxSkew quantifies the maximum gender skewness across all concepts in $Y$ and averages these values. The target and spurious heads identified in GenderBias are reused in FairFace without further optimization or reliance on a validation set.

A.2 Baselines

TextSpan [Gandelsman et al., 2023]. The framework maps the representation of each attention head in a ViT model to a set of natural language text statements generated using ChatGPT. Users then determine whether a head should be categorized as $Y$, $S$, or neither based on these statements. For example, a set of statements like "Submerged underwater scene, Peaceful rural farmland, ..." might be labeled as $S$ for Waterbirds. However, this approach is subject to individual interpretation, potentially leading to disagreements among evaluators. Additionally, the manual effort required increases significantly as the number
of attention heads grows. TextSpan is implemented on ImageNet [Deng et al., 2009] and does not utilize the validation set of the benchmarks.

Ortho-Cali [Chuang et al., 2023]. This approach leverages positive pairs of text prompts, enforcing the projection matrix to regularize the difference between two projected embeddings with opposing spurious attributes. The pairs are structured as "a photo of a [class name] with [spurious attribute]". Consequently, the method requires prior knowledge of the spuriously correlated attribute, such as male and female for GenderBias. The projection matrix is derived from the validation set and applied to the text representation $P_T(E_T(y))$ during testing.

Roboshot [Adila et al., 2023]. Roboshot uses an LLM to generate helpful and harmful concepts related to classifying $Y$. Harmful concepts are treated as $S$, while helpful concepts are used to enhance CLIP's discriminative ability. Harmful concepts are removed from the final image representation $P_I(E_I(I))$, and helpful concepts are amplified through orthogonal projection. As with Ortho-Cali, the projection matrix is derived from the validation set.

Parameter-tuning baselines. JTT [Liu et al., 2021] is a two-stage framework where an initial model is trained to identify the worst-performing examples, which are then emphasized in a second training stage by upsampling them with a factor $\lambda_{up}$. However, given CLIP's strong performance in zero-shot settings, we omit the first stage and directly predict group labels to identify $G_N$ as the set of worst-performing samples. As with the non-training setup, JTT-LTC operates purely at inference time. We first train the base JTT model, then apply mean ablation and knowledge injection to the identified attention states prior to aggregating them into the final image representation for prediction. Note that the group labels used to identify $Z_S$ and $Z_Y$ are inferred in a zero-shot fashion.

² https://www.bls.gov/cps/cpsaat11.htm
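The projection operations underlying Roboshot and Ortho-Cali can be sketched generically for a single concept direction (an illustrative one-direction sketch; the actual methods solve for full projection matrices over sets of concept prompts):

```python
import numpy as np

def project_out(v, s):
    """Orthogonal rejection: remove the component of a representation v
    along a (harmful/spurious) concept direction s."""
    s = s / np.linalg.norm(s)
    return v - (v @ s) * s

def amplify(v, h, alpha=1.0):
    """Amplify the component of v along a helpful concept direction h,
    as in Roboshot's second step."""
    h = h / np.linalg.norm(h)
    return v + alpha * (v @ h) * h
```

After `project_out`, the result is orthogonal to the spurious direction, so that direction no longer contributes to the cosine-similarity logit.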
During JTT training, we set $\lambda_{up} = 90$ for ViT-H/14 and $100$ for ViT-B/16 and ViT-L/14. We use a learning rate of $1e{-2}$ and a weight decay of $1e{-4}$. We adopt a 2-layer non-linear probe as the classifier for ERM, JTT, and JTT-LTC, with the hidden layer dimension set to 128 for ViT-B/16 and 256 for ViT-L/14 and ViT-H/14. Cont Adapter uses a 2-layer adapter instead. We ran Cont Adapter with the original hyperparameters and only changed the CLIP backbone.

Figure 4: $Z_S$: ablating states encoding $S$; $Z_{SY}$: association between $S$ and $Y$; TS: TextSpan. [Left]: predicting $S$ as the target class. [Right]: predicting both $S$ and $Y$. Dataset: Waterbirds.

Table 5: Prompts to generate class-discriminative concepts [Adila et al., 2023]. Replace "visual" with "spurious" for spurious concepts.

| Dataset | Prompt |
|---|---|
| Waterbirds | List the true visual differences between waterbirds and landbirds. Give short keyword for each answer. Answer in the following format: <Difference>: <waterbird characteristic> ; <landbird characteristic> |
| Genderbias/CA | List 3 true visual differences between {cls1} and {cls2}. Give short keyword for each answer. Answer in the following format: <Difference>: <{cls1} characteristic> ; <{cls2} characteristic> |

B Additional results

B.1 Analysis of robustness between sub-groups

In this section, we analyze the distribution of prediction margins, $p(\hat{y}) - p(y)$, for $G_P$ and $G_N$. Examples of $G_P$ and $G_N$ can be referenced from Tab. 4. The results across the three models and four baselines are shown in Figs. 5, 6 and 7. In the zero-shot setting, a clear separation between the sub-groups is observed,
with $G_N$ skewed toward the negative end. Baselines designed to remove spurious correlations between $S$ and $Y$ often introduce a trade-off between sub-group accuracies, as the positively correlated spurious attribute may have contributed to better predictions for classes where it is aligned with $y$. Among the three baselines, LTC uniquely avoids this trade-off. For the large and huge models, both $G_P$ and $G_N$ shift toward the positive margin, a trend more pronounced in the huge model. While Ortho-Cali and Roboshot improve performance on $G_N$, they compromise on $G_P$. Roboshot outperforms Ortho-Cali by amplifying helpful concepts but falls short of LTC, which achieves better results through head-level optimization.

Figure 5: Prediction margins in Waterbirds. Model: ViT-B/16
Figure 6: Prediction margins in Waterbirds. Model: ViT-L/14
Figure 7: Prediction margins in Waterbirds. Model: ViT-H/14

Table 6: Ablation studies on Waterbirds. LTC (MA) - ablate only; LTC (KI) - knowledge injection only; LTC (R) - similar to LTC but states are randomized; Roboshot (RS) (KI) - only KI without debiasing. Worst Group (WG ↑), Gap = Avg - WG (↓).

| Method | ViT-B/16 WG / Gap | ViT-L/14 WG / Gap | ViT-H/14 WG / Gap |
|---|---|---|---|
| ZS | 49.7 / 22.4 | 44.5 / 27.8 | 50.3 / 19.2 |
| RS | 57.5 / 5.5 | 70.5 / 8.5 | 60.7 / 8.9 |
| RS (KI) | 45.6 / 23.6 | 45.2 / 26.6 | 42.8 / 21.8 |
| LTC (MA) | 62.5 / 13.1 | 51.8 / 23.2 | 60.5 / 11.6 |
| LTC (KI) | 67.3 / 6.6 | 72.9 / 10.1 | 69.7 / 5.3 |
| LTC (R) | 36.9 / 35.7 | 15.2 / 42.4 | 43.1 / 11.3 |
| LTC | 73.3 / 1.3 | 74.6 / 9.6 | 71.3 / 3.6 |

B.2 Genderbias

Ablation. Tab. 7 presents the results of ablating various components of LTC for GenderBias. Overall, Roboshot underperforms compared to LTC and even increases the overall bias relative to zero-shot performance. Performing KI without orthogonalizing out spurious features achieves a lower bias for RoboShot. As discussed in the main results, the reliance on an LLM to identify spurious features may backfire if the LLM is sufficiently safeguarded against generating sensitive information, such as gender being a prominent correlation with occupations.
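The margin analysis above can be sketched for a binary task as follows. This is a minimal sketch under our own assumptions (sign convention: positive means the true class is preferred; names are illustrative):

```python
import numpy as np

def binary_margins(probs, labels):
    """Per-sample margin between the true-class probability and the competing
    class in a binary task; negative values indicate misclassification."""
    probs = np.asarray(probs, float)
    labels = np.asarray(labels, int)
    idx = np.arange(len(labels))
    return probs[idx, labels] - probs[idx, 1 - labels]

def margins_by_group(probs, labels, groups):
    """Split margins by sub-group (e.g. G_P vs G_N) to inspect the
    trade-off between sub-group accuracies."""
    margins = binary_margins(probs, labels)
    groups = np.asarray(groups)
    return {g: margins[groups == g] for g in np.unique(groups)}
```

Plotting the two resulting margin distributions reproduces the kind of sub-group separation shown in Figs. 5-7.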
Similar trends to Waterbirds are observed, where LTC (KI) emerges as the second most competitive baseline. Despite using the same helpful concepts, LTC (KI) significantly outperforms RS (KI), demonstrating that orthogonal projection on classification heads is a more effective method for amplifying class-discriminative properties in CLIP.

Occupations analysis. Fig. 8 presents statistics on bias relative to male workforce proportions. Common biased occupations include Chief Executive, Lawyer, and Security Guard, all of which are male-dominated. A clear trend emerges: male-dominated occupations exhibit higher bias levels, while female-dominated occupations show lower bias. Additionally, occupations associated with the opposite gender tend to exhibit reduced bias. This suggests that CLIP is disproportionately influenced by gender bias across occupations. For example, while CLIP accurately classifies both male and female "Legal Secretaries", it demonstrates significantly higher accuracy for male "Lawyers" compared to female ones. The positive correlation between classification performance and workforce proportions indicates that CLIP is heavily impacted by the gender composition of occupations present in its training data.

B.3 Sample Size and Mask

In Sec. 5, we did not optimize for the optimal set of attention states to perform debiasing. Both $Z_Y$ and $Z_{SY}$ were chosen through the mask, $\sigma$, and by filtering out states
with contribution < γ. We find 17 Table 7: Ablation studies on Genderbias .LTC (MA) - Ablate only, LTC (KI) - Knowledge injection only. LTC (R) - Similar to LTC but states are randomized. Roboshot (RS) (KI): Only KI without debiasing. Worst Group Bias B10(↓), Overall Bias Bovl(↓)Bolded : represent best method while underline refers to second best. Method ViT-B/16 ViT-L/14 ViT-H/14 B10 Bovl B10 Bovl B10 Bovl ZS 74.0 16.3 54.7 12.9 67.7 15.2 RS 65.2 18.3 52.6 14.0 65.1 16.4 RS (KI) 56.7 14.3 48.6 11.4 57.7 13.3 LTC (MA) 43.9 10.9 43.7 10.7 59.5 12.4 LTC (KI) 18.2 10.7 24.8 10.3 30.3 10.6 LTC (R) 30.5 12.2 35.3 14.6 33.4 15.0 LTC 10.0 9.4 18.0 9.8 27.4 10.1 Figure 8: Qualitative analysis on gender bias of top 10 occupations vs proportion of male workforce. Worst: occupation, other: opposite-gender associated occupation. Left: ViT-B/16 ,Right: ViT-H/14 that this works well as a heuristic at filtering out noisy states which may not correspond to either SorY. Fig. 10 and 11 shows the ablation study on sample size for Waterbirds and Genderbias respectively. Specifically, we observe the performance trend vs %ofGPandGNutilized on the primary step of LTC: locating ZYandZSY. We observe that the trend differs between model sizes. ViT-B peaks at the later stages, middle for ViT-H and early for ViT-L. We found that larger models tend to allocate higher contributions within a single ZSY(59%) and ZY(58%), see Fig. 15, 18 for Waterbirds. More importantly, the important state contributions in VNCandVNWare not similar in magnitude, thus preventing being canceled out. On the other hand, ViT-B has higher overlapping values between the important states: the state at layer 11, head 5is relatively high on both correct and wrong samples. We additionally analyze the effects of selecting contributing states filtered with the mask, by only restricting ZYandZSYto the single top state. This essentially avoids using γas a threshold. 
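The mask-and-threshold heuristic described above, together with the single-top-state restriction, can be sketched as follows. Here `contrib` is a hypothetical mapping from (layer, head) states to contribution scores, and `select_states` is an illustrative sketch under these assumptions, not the paper's implementation.

```python
def select_states(contrib, gamma):
    """Keep attention states (layer, head) whose contribution passes gamma.
    If nothing passes, fall back to the single top state, which avoids
    using gamma as a threshold at all. Illustrative sketch only."""
    selected = {state: c for state, c in contrib.items() if c >= gamma}
    if not selected:
        top = max(contrib, key=contrib.get)
        selected = {top: contrib[top]}
    return selected

# Hypothetical contribution scores for three attention states.
contrib = {("L11", "H5"): 0.48, ("L11", "H6"): 0.21, ("L10", "H2"): 0.03}
kept = select_states(contrib, gamma=0.1)
# Keeps ("L11","H5") and ("L11","H6"); ("L10","H2") is filtered out as noise.
```

With a very high γ, the fallback reduces the selection to the single top-contributing state, mirroring the top-state ablation discussed in the text.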
We find that performing debiasing on the incomplete states tends to underperform, except for ViT-L, where selecting the top states essentially recovers the heuristic setting since |Z_Y| = 2 and |Z_SY| = 1. Overall, we find that using the mask effectively finds the states most likely to encode the respective representations.

B.4 Spurious effects in Negative sub-group

The correct discovery of, and distinction between, Z_Y and Z_S/Z_SY hinge on the belief that the two can be separated more accurately in G_N. We study the implications of neglecting this belief by examining the located states when implementing Eq. 10 on the full dataset instead, replacing V_NW and V_NC with V_W and V_C, respectively. From Tab. 8, we can see that performing the locating framework on the full dataset in place of G_N causes a huge performance drop for ViT-B. The identified Z_Y is confused with Z_SY, causing KI to be performed on the spurious states instead of the class states. On ViT-L, the located states converge to the same set as in G_N, but we observe the contributions on the actual states to be lower. ViT-H appears to be similarly robust, but still suffers a marginal performance drop and a lower Y/S ratio in Z_Y.

Figure 9: Accuracy of the occupation-dominated gender of each occupation. Left: ViT-B/16, Right: ViT-H/14
Figure 10: Analysis of the % of sub-group samples used for locating spurious and classification states against performance. Left: ViT-B/16, Middle: ViT-L/14, Right: ViT-H/14. Dashed lines refer to selecting the top contributing state in Z_Y and Z_SY. x refers to Z_S. Metric: Accuracy (up). Dataset: Waterbirds

B.5 Sensitivity to discriminative features

Since KI depends on the quality of the underlying discriminative features, we assess its robustness to variations in the prompt set {u_i} (i = 1, ..., N_u). We report the mean and standard deviation over three feature sets for both RS and LTC. As shown in Tab. 9, LTC demonstrates greater robustness, likely due to its focus on states with high class-relevance, making it better suited for injecting discriminative information.

Figure 11: Analysis of the % of sub-group samples used for locating spurious and classification states against performance. Dashed lines refer to selecting the top contributing state in Z_Y and Z_SY. x refers to Z_S. Metric: Bias (down). Dataset: Genderbias

Table 8: Analysis of samples used for spurious and class state identification. The metric for Z_SY and Z_Y is the SHAP scores of Y/S features; Z_Y should have higher contributions on Y than on S. WG is the worst-group accuracy. ViT-L/14 is not shown as the states are identical. Negative: G_N; Full: D. Dataset: Waterbirds

Method    | ViT-B/16             | ViT-H/14
          | Z_SY   Z_Y    WG     | Z_SY   Z_Y    WG
Full      | 30/15  25/18  15     | 26/12  44/7   69.3
Negative  | 25/18  8/48   73.3   | 18/25  35/7   71.3

Table 9: Mean and std. performance on Waterbirds across 3 sets of discriminative features.

Model     | RS                             | LTC
          | WG        Avg       Gap        | WG        Avg       Gap
ViT-B/16  | 60.8±2.4  67.2±3.0  6.4±0.7    | 73.6±0.5  76.5±2.5  3.0±2.0
ViT-L/14  | 68.8±4.2  78.3±2.3  10.3±2.1   | 74.4±2.9  83.9±1.3  9.5±1.6
ViT-H/14  | 59.5±6.2  73.4±1.3  13.9±6.7   | 66.7±5.5  75.6±2.8  8.9±4.1

C Contribution distributions

Fig. 12, 15, and 18 show the target class Y's contribution distribution in Waterbirds.
The top and bottom heatmaps correspond to samples in G_NC and G_NW. Fig. 13, 16, and 19 similarly correspond to Genderbias. Fig. 14, 17, and 20 refer to correctly classified background S samples in the entire dataset D.

Figure 12: V scores across the last 4 layers and all heads for ViT-B/16. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Waterbirds
Figure 13: V scores across the last 4 layers and all heads for ViT-B/16. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Genderbias
Figure 14: V_C scores across the last 4 layers and all heads for predicting the spurious attribute (background) directly, for ViT-B/16. Layer-wise on the y-axis and head-wise on the x-axis. Dataset: Waterbirds
Figure 15: V scores across the last 4 layers and all heads for ViT-L/14. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Waterbirds
Figure 16: V scores across the last 4 layers and all heads for ViT-L/14. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Genderbias
Figure 17: V_C scores across the last 4 layers and all heads for predicting the spurious attribute (background) directly, for ViT-L/14. Layer-wise on the y-axis and head-wise on the x-axis. Dataset: Waterbirds
Figure 18: V scores across the last 8 layers and all heads for ViT-H/14. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Waterbirds
Figure 19: V scores across the last 8 layers and all heads for ViT-H/14. Layer-wise on the y-axis and head-wise on the x-axis. Top: V_NC, Bottom: V_NW. Dataset: Genderbias
Figure 20: V_C scores across the last 4 layers and all heads for predicting the spurious attribute (background) directly, for ViT-H/14. Layer-wise on the y-axis and head-wise on the x-axis. Dataset: Waterbirds

D Interpretability Results

We present interpretability findings for each model in this section. To generate captions for SHAP, we prompt MiniCPM-V 2.6 [Hu et al., 2024] using the templates shown in Tab. 10. Fig. 21 displays the normalized SHAP values for the set of aggregated states representing {SY, S, Y} for Waterbirds or {S, Y} for Genderbias. Referencing the prompt, we form Y from "features" and the annotated "species" of the bird for Waterbirds, and from the occupation class for Genderbias. To get the normalized SHAP score for Z_Y, we take the max over the Y set, i.e., the max over features or species for Waterbirds. S is then "background" for Waterbirds and "gender" for Genderbias. We also provide additional SHAP scores for each individual state and attribute, namely each state against each attribute extracted from the captioning output. This provides a fine-grained analysis of the type of information encoded and aggregated in each attribute state. The top text features are retrieved by counting how often a particular feature is ranked as the feature with the highest SHAP value in each sample.

Table 10: Prompts to caption each image using GPT4o.

Waterbirds: "Caption the picture of the {class} and describe both the visual features of the bird itself and the background. Please format your response as follows: Caption: Background: Features:"
Genderbias: "Caption the picture of the {occupation}. Describe the visual features in the picture that correlate with the occupation of {occupation} and the gender of the {occupation}. Please format your response as follows: Caption: Gender: Features:"

D.1 Waterbirds

Fig. 21 illustrates the normalized SHAP scores for each feature category, corresponding to distinct attribute states. Our analysis reveals that localized states exhibit strong correlations with their hypothesized attributes and minimal correlations with opposing attributes. Specifically, Z_Y assigns a high contribution to features associated with Y while attributing minimal contribution to S. Similar patterns are observed in Tab. 11, where Z_Y predominantly contains terms related to various bird species and their features, whereas Z_S primarily represents the surrounding background.

Correlation with TextSpan. We applied TextSpan [Gandelsman et al., 2023] to each state and observed that the textual interpretations align with the intended state representations. On both ViT-B and ViT-H, the derived texts for Z_SY correspond to background descriptions, whereas on ViT-L they relate to object categories. Notably, this is consistent with the composition of SHAP values shown in Fig. 21. For Z_Y and Z_S, the text outputs are fully aligned with their respective intended representations.

Individual state representation. Tab. 14 lists the top 10 text features for each individual head within the target set Z_Y. Most heads predominantly represent the species category. However, in ViT-H/14 we identified a specific head, L31H13, that focuses on representing various colors and has a higher contribution to the features category. L31H1 exhibits comparable behavior, albeit to a lesser extent.

Visual interpretations. Fig. 22, 23, and 24 present example heatmaps of logit scores for the predicted class. We observe that Z_SY appears noisier and less focused than Z_Y and Z_S, which could be attributed to the difficulty of visually representing associations between concepts. Nonetheless, the critical regions often encompass both the background and the object class. Consistent with the textual representations, Z_Y and Z_S are highly concentrated around the object and surrounding background, and the Z_Y states show reduced noise compared to the overall image representation. Between models, the prediction heatmaps are noisier for ViT-H, which may be due to the larger set of located states; see Tab. 14.

Figure 21: Normalized SHAP values towards text features belonging to Y and S. Left: ViT-B/16, Middle: ViT-L/14, Right: ViT-H/14. Dataset: Waterbirds

Table 11: Top text features for Z_SY, Z_Y, Z_S. Dataset: Waterbirds. Models: ViT-B/16 and ViT-L/14.

ViT-B/16                           | ViT-L/14
Z_SY     Z_Y         Z_S          | Z_SY      Z_Y          Z_S
forest   tern        forest       | beak      tern         forest
bamboo   gull        bamboo       | warbler   warbler      bamboo
beach    warbler     beach        | gull      gull         beach
pond     beak        lake         | tern      sparrow      lake
lake     sparrow     pond         | sparrow   wren         pond
warbler  wren        river        | feathers  cormorant    ocean
water    bill        ocean        | wings     beak         trees
river    feathers    coastal      | bamboo    black        river
beak     woodpecker  water        | lake      kingfisher   sunset
wings    duck        moss-covered | forest    woodpecker   shoreline

Table 12: Top text features for Z_SY, Z_Y, Z_S. Dataset: Waterbirds. Model: ViT-H/14.
Z_SY       Z_Y         Z_S
forest     tern        bamboo
beach      gull        forest
pond       warbler     beach
lake       beak        ocean
water      sparrow     lake
river      blue        pond
sunset     wren        river
grassy     black       sunset
trees      woodpecker  trees
shoreline  kingfisher  shoreline

Table 13: Correlation between TextSpan [Gandelsman et al., 2023] and located states. For each state, the given text statements are the top 5 textual descriptions that account for the variance across the ImageNet validation set. L10H10 denotes the attention head at layer 10, head 10. Dataset: Waterbirds

ViT-B/16
  Z_SY (L10H10): Tranquil boating on a lake; Peaceful rural farmland; Serene garden pond; Secluded beach cove; Picture taken in the Italian pasta kitchens
  Z_Y (L11H5): Photo of a reptile; Image with a seagull; An image with dogs; Snapshot of a marsupial; A thistle
  Z_S (L11H6): Photo taken in Namib Desert; Photo taken in the Alaskan mountains; A photo of Monaco; Image taken in the Florida Everglades; contemplative coastal view
ViT-L/14
  Z_SY (L23H14): An image with dogs; Majestic soaring birds; Graceful swimming fish; An image with bikes; Picture with boats
  Z_Y (L23H2): Image showing prairie grouse; Image with a penguin; A magnolia; An image with dogs; An image with cats
  Z_S (L22H2): Urban park greenery; cozy home interior; Urban subway station; Energetic street scene; Tranquil boating on a lake
ViT-H/14
  Z_SY (L30H11): calming riverbank scene; Pristine snowy landscape; peaceful meadow landscape; Sandy beach shores; Gritty urban street scene
  Z_Y (L31H2): detailed reptile close-up; Image with polka dot patterns; A spiky texture; Artwork featuring zebra stripe motifs; Image with a sheep
  Z_S (L28H11): A bamboo; A picture of a baby; Photo taken in the Italian vineyards; Blurred boundaries; delicate soap bubble display

Table 14: Individual state representations within Z_Y for ViT-B/16 and ViT-L/14. The overall score for Z_Y is shown beside each model, while the SHAP score for each individual state within Z_Y is shown beside each head. The scores refer to species/features. The overall score is measured by first aggregating the state activations corresponding to Z_Y before applying SHAP, unlike the individual state activations. Dataset: Waterbirds

ViT-B/16: 44/15                 | ViT-L/14: 44/12
L11H5: 44/14   L10H2: 21/19     | L23H2: 44/12
tern           beak             | tern
gull           tern             | warbler
warbler        warbler          | gull
beak           feathers         | sparrow
sparrow        wings            | wren
wren           woodpecker       | bamboo
feathers       bamboo           | cormorant
woodpecker     white            | beak
bill           breasted         | black
duck           jay              | kingfisher

Table 15: Individual state representations within Z_Y for ViT-H/14.

ViT-H/14: 38/19
L30H8: 39/16   L31H2: 39/12   L31H1: 36/17   L31H13: 20/25
tern           tern           tern           yellow
gull           gull           gull           red
warbler        warbler        warbler        black
beak           bamboo         black          tern
sparrow        sparrow        sparrow        blue
wren           beak           white          white
feathers       wren           wren           brown
woodpecker     woodpecker     yellow         green
bill           feathers       blue           orange
duck           kingfisher     brown          sunset

Figure 22: Image visualization: Localized representations of Z_SY, Z_Y, Z_S and the overall image. Model: ViT-B/16. Dataset: Waterbirds
Figure 23: Image visualization: Localized representations of Z_SY, Z_Y, Z_S and the overall image. Model: ViT-L/14. Dataset: Waterbirds
Figure 24: Image visualization: Localized representations of Z_SY, Z_Y, Z_S and the overall image. Model: ViT-H/14. Dataset: Waterbirds

D.2 Genderbias

Fig. 25 presents the SHAP scores, revealing findings consistent with those from the Waterbirds dataset, where score ratios align with the corresponding attributes. However, Z_S exhibits a lower S/Y ratio compared to Waterbirds.
This may be due to Y, representing gender, occupying only a single token in the caption, unlike attributes such as occupation or feature category. Despite gender descriptions being present in all captions, Tab. 16 shows that no gender-related features appear in Z_Y across any of the models.

Correlation with TextSpan. We observe a positive correlation with TextSpan in the Genderbias dataset, where the descriptions of Z_S align with gender-related attributes or references to people. In contrast, the descriptions in Z_Y are associated with occupational objects or visual depictions of work settings, such as "bioreactor" and "dance pose", which likely contribute to classifying the occupation depicted in the image.

Individual state representation. Most tokens in Tab. 18, 19, and 20 relate to descriptions of occupational equipment. For states with a higher occupation-to-feature ratio, we see higher occurrences of occupational terms. This aligns with the expectation that surrounding objects in an image are critical for classifying a person's occupation. For instance, a stethoscope serves as a key medical device for differentiating between a doctor and a nurse, and its inclusion becomes even more significant in mitigating bias when classifying a male nurse.

Visual interpretations. We observe more plausible heatmaps on ViT-B compared to the larger models. Most examples align with human intuition when classifying the occupation depicted in an image, focusing on elements such as "helmet, workplan: civil engineer", "desk with computer: receptionist", or "computer: web developer". In contrast, larger models exhibit noisier distributions that may not align with human reasoning. This observation highlights a potential limitation: some identified states may not effectively encode the hypothesized attribute.

Figure 25: Normalized SHAP values towards text features belonging to Y and S. Left: ViT-B/16, Middle: ViT-L/14, Right: ViT-H/14. Dataset: Genderbias

Table 16: Top text features for Z_Y, Z_S. Dataset: Genderbias.

ViT-B/16              | ViT-L/14                    | ViT-H/14
Z_Y        Z_S        | Z_Y           Z_S           | Z_Y           Z_S
office     male       | office        male          | office        male
desk       female     | desk          female        | desk          female
laptop     manager    | correctional  worker        | computer      supervisor
police     her        | construction  his           | documents     his
physician  his        | detectives    technician    | headset       worker
monitors   cheerful   | laundry       her           | physician     her
officers   mechanic   | factory       mechanic      | construction  manager
laundry    smiling    | workshop      manager       | medical       mechanic
computer   worker     | hospital      administrator | correctional  suit
medical    she        | industrial    supervisor    | laundry       uniform

Table 17: Correlation between TextSpan [Gandelsman et al., 2023] and located states. For each state, the given text statements are the top 5 textual descriptions that account for the variance across the ImageNet validation set. L10H10 denotes the attention head at layer 10, head 10.
Dataset: Genderbias

ViT-B/16
  Z_S (L11H4): Image with a five people; Quirky street performer; An image with dogs; Image with three people; A photo of a woman
  Z_Y (L11H8): A laptop; A rug; A shelf; A bookmark; A bag
ViT-L/14
  Z_S (L23H4): Playful siblings; A photo of a young person; Image with three people; A photo of a woman; A photo of a man
  Z_Y (L23H1): Photograph taken in a retro diner; Intense athlete; Detailed illustration of a futuristic bioreactor; Image with holographic retro gaming aesthetics; Antique historical artifact
ViT-H/14
  Z_S (L31H7): A photo of a woman; A photo of a man; Energetic children; An image of a couple; Warm home interior
  Z_Y (L31H6): Evocative dance pose; Picture with cars; A photo of food; Graceful swimming fish; thrilling sports action

Figure 26: Image visualization: Localized representations of Z_Y, Z_S and the overall image. Model: ViT-B/16. Dataset: Genderbias

Table 18: Individual state representations within Z_Y for ViT-B/16. Scores refer to occupation/features. Dataset: Genderbias

ViT-B/16: 23/15
L11H3: 25/13    L11H5: 20/13    L11H8: 16/16
office          manager         office
physician       headset         desk
police          firstline       laptop
desk            office          computer
officers        stethoscope     documents
laundry         refractory      monitors
administrator   stacks          coat
medical         computer        screens
workers         supervisor      firstline
hospital        tie             laundry

Figure 27: Image visualization: Localized representations of Z_Y, Z_S and the overall image. Model: ViT-L/14. Dataset: Genderbias

Table 19: Individual state representations within Z_Y for ViT-L/14. Scores refer to occupation/features. Dataset: Genderbias

ViT-L/14: 23/13
L23H1: 26/13    L22H2: 19/11    L23H3: 18/11    L23H12: 21/12
office          office          office          firstline
desk            desk            desk            data
laptop          laptop          factory         office
refractory      laundry         facility        stethoscope
correctional    construction    correctional    computer
detectives      lab             factory         manager
physician       room            workshop        documents
headset         industrial      lab             laptop
administrator   female          construction    refractory
laundry         workshop        supervisor      correctional

Figure 28: Image visualization: Localized representations of Z_Y, Z_S and the overall image. Model: ViT-H/14. Dataset: Genderbias

Table 20: Individual state representations within Z_Y for ViT-H/14. Dataset: Genderbias

ViT-H/14: 23/15
L31H6: 21/14    L30H7: 22/14    L30H13: 20/13   L30H12: 21/10
office          computer        office          manager
desk            suit            documents       computer
computer        correctional    suit            male
headset         construction    refractory      female
medical         laptop          paperwork       correctional
suit            documents       desk            exuding
officers        lab             correctional    worker
laundry         laundry         firstline       physician
construction    desk            telecommunicator confident
police          electronic      office          office
E Computational Requirements

All experiments can be run on a single Nvidia A100 80GB GPU. Since the only training involved is a 2-layer classifier, our work does not require heavy computation.

F Societal Impact

While our work focuses on mitigating bias in vision-language models, we acknowledge that the underlying methods could, in principle, be reversed to amplify spurious correlations, such as reinforcing implicit gender biases. However, we do not consider this a significant risk in the context of our work, as the methodology requires explicit identification and manipulation of known components and is unlikely to be possible on proprietary models. Rather than enabling harm, we believe our approach advances the growing literature on bias mitigation and promotes transparency by shedding light on how biases arise from large-scale pretraining or fine-tuning on imbalanced datasets.
T2: An Adaptive Test-Time Scaling Strategy for Contextual Question Answering

Zhengyi Zhao1, Shubo Zhang2, Zezhong Wang1, Huimin Wang3, Yutian Zhao3, Bin Liang1, Yefeng Zheng4, Binyang Li2, Kam-Fai Wong1, Xian Wu3,*
1The Chinese University of Hong Kong  2University of International Relations  3Jarvis Research Center, Tencent YouTu Lab  4Westlake University
zyzhao@se.cuhk.edu.hk

Abstract

Recent advances in Large Language Models (LLMs) have demonstrated remarkable performance in Contextual Question Answering (CQA). However, prior approaches typically employ elaborate reasoning strategies regardless of question complexity, leading to low adaptability. Recent efficient test-time scaling methods introduce budget constraints or early-stop mechanisms to avoid overthinking on straightforward questions, but they add human bias to the reasoning process and fail to leverage models' inherent reasoning capabilities. To address these limitations, we present T2: Think-to-Think, a novel framework that dynamically adapts reasoning depth based on question complexity. T2 leverages the insight that if an LLM can effectively solve similar questions using specific reasoning strategies, it can apply the same strategy to the original question. This insight enables the adoption of concise reasoning for straightforward questions while maintaining detailed analysis for complex problems. T2 works through four key steps: decomposing questions into structural elements, generating similar examples with candidate reasoning strategies, evaluating these strategies against multiple criteria, and applying the most appropriate strategy to the original question. Experimental evaluation across seven diverse CQA benchmarks demonstrates that T2 not only achieves higher accuracy than baseline methods but also reduces computational overhead by up to 25.2%.
arXiv:2505.17427v1 [cs.CL] 23 May 2025

1 Introduction

Large language models (LLMs) have demonstrated impressive capabilities in Contextual Question Answering (CQA) tasks (Trivedi et al., 2023; Press et al., 2023), but their reasoning approaches often lack adaptability to question complexity. Current CQA systems typically employ either direct answer generation or elaborate step-by-step reasoning for all questions, regardless of difficulty (Wei et al., 2022; Huang et al., 2024; Min et al., 2024). This one-size-fits-all approach creates an accuracy-vs-efficiency dilemma. Directly generating answers for all questions degrades performance on difficult questions, which require multi-hop reasoning. Elaborate reasoning for all questions creates an efficiency challenge: models frequently generate reasoning chains that are excessively verbose, containing redundant steps that do not contribute to finding the correct answer.

*Corresponding author.

Existing analysis reveals that these redundant reasoning paths can extend reasoning chains to several times the required length, such as exploring multiple solution approaches when only one is needed (Ji et al., 2025), or verifying simple facts with elaborate explanations (Muennighoff et al., 2025). For example, when asked "What is the capital of France?", models often generate lengthy discussions about France's history and geography before providing the straightforward answer "Paris." This computational inefficiency is particularly concerning as model deployment costs continue to rise. Recent studies on reasoning efficiency (Yang et al., 2025; Zeng et al., 2025) confirm that blindly increasing reasoning chain length can actually harm performance on simpler tasks. Various attempts have been made to address this by adding a budget or stop mechanism to test-time scaling (TTS) methods (Wei et al., 2022; Huang et al., 2024) to stop thinking early, but these approaches introduce human bias into the reasoning process (Yuan et al., 2023) and fail to leverage the model's inherent reasoning abilities.

Hence, the fundamental challenge is to develop a reasoning mechanism that can dynamically adjust its computational effort based on question complexity: providing concise reasoning for straightforward questions while maintaining detailed analysis for complex problems. Therefore, we present T2, a think-to-think framework for an efficient TTS strategy. T2 leverages a key insight: if an LLM can effectively solve similar questions using specific reasoning strategies, it can apply comparable strategies to the original question. The process involves four key steps: (1) Decomposing the original question into its structural elements. For example, given the question (with reference documents) "Which is taller, the Eiffel Tower or the Empire State Building?", T2 would identify this as a comparative question involving measurement between two specific places, with the structure "Which is [adj], [place 1] or [place 2]?". (2) Creating a diverse set of similar example questions with the same question structure, each paired with supporting documents and potential reasoning strategies. Each reasoning strategy breaks down a similar question into simpler steps using fundamental reasoning skills (e.g., decomposing the similar question "Which is taller, Building A or Building B?" into subquestions about individual heights connected by deductive reasoning for comparison). (3) Evaluating these reasoning strategies using multiple criteria to select the most appropriate strategy for the original question. (4) Applying the selected reasoning strategy to the original question while filtering irrelevant information.
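The four steps can be sketched as a single pipeline. The `llm` callable and all prompt strings below are hypothetical placeholders for illustration; the paper's actual prompts and multi-criteria scoring are more elaborate.

```python
from typing import Callable, List

def t2_answer(question: str, documents: str, llm: Callable[[str], str],
              n_examples: int = 3) -> str:
    """Sketch of the T2 workflow under a hypothetical `llm` text interface."""
    # (1) Decompose the question into a structural template.
    template = llm(f"Rewrite with typed placeholders: {question}")
    # (2) Generate similar questions, each with a candidate reasoning strategy.
    candidates: List[str] = [
        llm(f"Write a similar question for the template '{template}', "
            "with supporting documents and a reasoning strategy.")
        for _ in range(n_examples)
    ]
    # (3) Score each strategy on multiple criteria (e.g., coverage and
    #     uniqueness); non-numeric replies default to a score of 0.
    def score(reply: str) -> int:
        try:
            return int(reply.strip())
        except ValueError:
            return 0
    scored = [(score(llm("Rate this strategy from 0-10 for coverage "
                         f"and uniqueness: {c}")), c) for c in candidates]
    best = max(scored, key=lambda t: t[0])[1]
    # (4) Apply the selected strategy to the original question.
    return llm(f"Using this strategy:\n{best}\nAnswer '{question}' "
               f"given:\n{documents}")
```

A simple question should yield a short winning strategy (and thus a short final chain), while a multi-hop question should surface a decomposition-style strategy, which is the adaptive behavior the four steps are designed to produce.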
By learning from similar examples, the model develops a more nuanced understanding of when detailed reasoning is necessary and when a more direct approach is sufficient. This allows T2 to balance accuracy and efficiency without relying on pre-determined reasoning templates. We evaluate T2 across seven diverse CQA datasets ranging from simple factual queries to complex multi-hop reasoning tasks. Our results demonstrate that T2 achieves superior accuracy (up to a 21.3% increase) compared to other TTS approaches while reducing computational requirements by up to 25.2%. These efficiency gains are particularly clear for simpler questions, where redundant reasoning steps are eliminated, while for complex questions T2 maintains the reasoning depth required for accuracy without exploring unnecessary paths. Our contributions include:

• We introduce T2, a framework that enables language models to dynamically select appropriate reasoning strategies through similar examples, balancing efficiency and thoroughness based on question complexity.
• We develop a multi-criteria selection method that evaluates potential reasoning strategies based on coverage and uniqueness, ensuring the most suitable approach is applied to each question.
• We demonstrate through extensive experiments across diverse CQA benchmarks that our method reduces computational requirements by up to 25.2% with superior accuracy.

2 Related Work

Contextual QA. In addressing contextual QA, recent works have explored multi-round retrieval or reasoning approaches, including query rewriting for subsequent retrievals (Khattab et al., 2022; Ma et al., 2023; Shao et al., 2023; Jiang et al., 2023), alternating between retrieval and reasoning steps (Trivedi et al., 2023), and employing multi-round self-asking techniques (Press et al., 2023). They all rely on LLMs' reasoning abilities.

Test-Time Scaling. Recent approaches to enhancing LLM reasoning capabilities focus on increasing computational resources during inference (Brown et al., 2024; Chen et al., 2024), termed test-time scaling. These methods include majority voting (Wang et al., 2022), weighted aggregation (Li et al., 2023), best-of-N (Lightman et al., 2023), Tree-of-Thoughts (Yao et al., 2023), and Monte Carlo Tree Search variants (Wu et al., 2024; Zhang et al., 2024a; Zhao et al., 2024). In addition, the o1 model (Jaech et al., 2024) and several follow-up works (Guo et al., 2025; Qwen, 2024; Gemini, 2025a; Min et al., 2024; Huang et al., 2024) increase thinking depth to improve performance. But they all apply fixed scaling strategies to all questions. Some adaptive thinking methods, such as AdoT (Xu et al., 2024) and DAST (Shen et al., 2025), design difficulty measurements to categorize questions by difficulty, but they introduce human bias and fail to leverage the model's inherent reasoning abilities. Our T2 framework builds upon this paradigm while addressing these key limitations.

3 T2: Think-to-Think Framework

In this section, we present T2: Think-to-Think, an approach that enables language models to adapt their reasoning strategies based on question complexity. Figure 1 provides an overview of our approach.
We begin by describing the overall architecture and workflow of T2 before delving into each component in detail.

Figure 1: Overview of T2. (a) Direct prompt or Chain-of-Thought (CoT), which adopts the same reasoning strategy regardless of question complexity. (b) Adaptive-of-Thought, which designs a question complexity evaluator to pre-categorize all questions, which might bring human bias into the evaluator design process. (c) Our T2. Instead of pre-categorizing questions into different complexity sets, T2 generates multiple similar examples for different inputs adaptively and selects the best reasoning strategy for answering.

3.1 Question Decomposition

Given a document D and a question Q, we first analyze the question's structure to understand its underlying pattern. This allows us to later generate similar questions that require the same reasoning strategy.
The question structure identification process decomposes the question into fixed structural elements and variable entities that could be substituted. We first tokenize the question Q as a sequence of tokens Q = (q_1, q_2, ..., q_m). We then classify each token into one of two categories: structural tokens that form the question's framework, and replaceable entities that could be substituted with
alternatives. We define a classification function with a fine-tuned RoBERTa, detailed in Appendix D. Based on this classification, we partition the question tokens into two sets:

P = { q_i | q_i is a replaceable entity },  (1)
Q_S = { q_i | q_i is a structural token },  (2)

where P represents the set of replaceable entities (which we call entity placeholders), and Q_S represents the set of structural tokens that form the question's framework. For each identified entity placeholder p_i in P, we assign a semantic type (e.g., person, location, date). This creates a set of typed entities:

T = { (p_1, τ_1), (p_2, τ_2), ..., (p_k, τ_k) },  (3)

where each pair (p_j, τ_j) consists of a placeholder entity p_j and its corresponding type τ_j. By combining the structural tokens Q_S with the typed placeholders in T, we create a question template. For example, if Q is “Which is taller, the Eiffel Tower or the Empire State Building?”, the function would identify “taller”, “Eiffel Tower”, and “Empire State Building” as replaceable entities of types adj and place. The resulting template would be “Which is [adj], [place 1] or [place 2]?”, where the bracketed terms are typed placeholders.

3.2 Similar Examples Generation

Once we have extracted the question structure, we generate similar document-question-answer pairs that follow the same question structure but with different entities.

Reasoning Skills Taxonomy. We build on established cognitive science literature (Bartha, 2013; Bordalo et al., 2024) to define a taxonomy of 7 fundamental reasoning skills S that humans commonly employ when solving problems (e.g., Deductive, Inductive). Each skill represents a distinct cognitive approach to processing information and drawing conclusions.

Question Generation. For each placeholder in the question structure, we generate alternative entities of matching types. We prompt an LLM to suggest contextually appropriate substitutes for each entity type τ_j.
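The template construction in the example above can be sketched as follows. The paper classifies tokens with a fine-tuned RoBERTa (Appendix D); the hard-coded entity list here is a hypothetical stand-in for that classifier, and, unlike the paper's template, this simple string substitution leaves surrounding articles ("the") in place.

```python
# Sketch of building a question template (Section 3.1). The typed entity
# list stands in for the paper's fine-tuned RoBERTa token classifier.

def build_template(question, typed_entities):
    """Replace each typed entity span with a placeholder such as [place 1].

    typed_entities: list of (surface_string, type) pairs, i.e. the set T.
    """
    counts = {}
    for _, etype in typed_entities:
        counts[etype] = counts.get(etype, 0) + 1

    template, seen = question, {}
    for surface, etype in typed_entities:
        seen[etype] = seen.get(etype, 0) + 1
        # Number placeholders only when a type occurs more than once.
        tag = f"[{etype} {seen[etype]}]" if counts[etype] > 1 else f"[{etype}]"
        template = template.replace(surface, tag, 1)
    return template

entities = [("taller", "adj"),
            ("Eiffel Tower", "place"),
            ("Empire State Building", "place")]
q = "Which is taller, the Eiffel Tower or the Empire State Building?"
print(build_template(q, entities))
# -> Which is [adj], the [place 1] or the [place 2]?
```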
This produces a collection of candidate similar questions Q̂_sim that share the structural pattern of the original question but contain different entities. To ensure high-quality examples, we implement a validation process: we prompt the same LLM to evaluate the similarity between each candidate question and the original question structure,

sim(Q, q̂) ≥ δ,  q̂ ∈ Q̂_sim,  (4)

where δ ∈ [1, 10] is a threshold parameter. Only questions exceeding this threshold are retained, resulting in a filtered set of similar questions Q_sim.

Reasoning Strategy Construction. For each similar question Q^i_sim ∈ Q_sim, we decompose it into a sequence of subquestions:

Q^i_sim → (Q^(i,1)_sim, ..., Q^(i,K)_sim),  (5)

where each subquestion Q^(i,k)_sim represents a discrete reasoning step and K is the number of subquestions. The connections between subquestions are characterized by specific reasoning skills from our taxonomy. This decomposition allows us to construct a comprehensive reasoning strategy:

s^i = (s^i_1, s^i_2, ..., s^i_K),  (6)

where each s^i_k ∈ S is the reasoning skill required to transition from subquestion Q^(i,k)_sim to Q^(i,k+1)_sim.

Reference Document Generation. For each subquestion Q^(i,k)_sim, we generate a document segment d^i_k containing the precise information needed to answer that subquestion. The complete reference document for question Q^i_sim is then constructed as:

D^i_ref = { d^i_1, d^i_2, ..., d^i_K }.  (7)

For example, given a similar question like “Which
is taller, A or B?”, the decomposition might yield the subquestions “What is the height of A?”, “What is the height of B?”, and “Which height is greater?”. The reasoning strategy would connect these using deductive reasoning, and the reference document would provide the necessary height information for both entities. (Appendix A shows the complete taxonomy of reasoning skills with their descriptions and example applications.)

The complete collection of similar examples is represented as:

Γ = { (D^i_ref, Q^i_sim, s^i) }^N_{i=1},  (8)

where N is the total number of similar examples. This diverse set covers various reasoning strategies of different complexity levels, allowing our system to later select the most appropriate reasoning approach for original questions.

3.3 Multi-Criteria Matching

When presented with the original question Q and documents D, we need to determine which reasoning strategy would be most effective. We select the most relevant example from our similar collection Γ using a multi-criteria matching process that considers both reasoning skill requirements and structural similarity.

Skill Uniqueness Scoring. Recognizing that some reasoning skills are more specialized than others, we weight skills by their rarity in our example collection. For each reasoning skill s ∈ S, we define freq(s) as the number of examples in Γ that include skill s in their reasoning paths. The uniqueness score of a skill is:

α(s) = ln( (N + 1) / (freq(s) + 1) ),  (9)

where N is the total number of examples in our collection. This logarithmic formulation assigns higher weights to skills that appear less frequently, capturing the intuition that specialized reasoning skills deserve special consideration.

Skill Coverage Assessment. For each example in our collection, we calculate how well its reasoning path covers the required reasoning skills:

cover(s^i, S) = |s^i ∩ S| / |S|.  (10)

This coverage metric quantifies what proportion of the required reasoning skills are present in the example's reasoning strategy.
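The uniqueness and coverage scores above, together with the selection rule of this section, can be sketched as follows. The skill names and strategies are illustrative, not taken from the paper's data.

```python
import math

# Sketch of the multi-criteria matching scores (Section 3.3); the skill
# names and strategies below are illustrative only.

def uniqueness(skill, strategies):
    """alpha(s) = ln((N + 1) / (freq(s) + 1)): rarer skills score higher."""
    n = len(strategies)
    freq = sum(1 for strategy in strategies if skill in strategy)
    return math.log((n + 1) / (freq + 1))

def coverage(strategy, required):
    """cover(s_i, S): fraction of required skills the strategy contains."""
    return len(set(strategy) & set(required)) / len(set(required))

def select_example(strategies, required):
    """Pick the index maximizing coverage plus summed skill uniqueness."""
    def score(strategy):
        return coverage(strategy, required) + sum(
            uniqueness(s, strategies) for s in strategy)
    return max(range(len(strategies)), key=lambda i: score(strategies[i]))

strategies = [["deductive"],
              ["deductive", "analogical"],
              ["inductive", "decompositional"]]
print(select_example(strategies, {"deductive", "analogical"}))  # -> 1
```

Here the second strategy wins: it fully covers the required skills and includes the rarer "analogical" skill, which receives a higher uniqueness weight.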
Algorithm 1: Reasoning Path-Guided Answering
Require: Q (original question), D (document), i* (selected example index), Γ (example collection)
Ensure: A (final answer)
1: (D^{i*}_ref, Q^{i*}_sim, A^{i*}_sim, s^{i*}) ← Γ[i*]   ▷ Retrieve the selected example
2: D_focus ← ∅   ▷ Initialize the focused document segments
3: for ℓ = 1 to |s^{i*}| do   ▷ For each skill in the reasoning path
4:   text_ℓ ← ExtractRelevantSegment(D, s^{i*}_ℓ)   ▷ Extract the text relevant to skill s^{i*}_ℓ
5:   D_focus ← D_focus ∪ { text_ℓ }   ▷ Add it to the focused segments
6: end for
7: Prompt ← FormatPrompt(Q, D_focus, s^{i*}, Q^{i*}_sim, A^{i*}_sim)   ▷ Construct the guidance prompt
8: A ← LLM(Prompt)   ▷ Generate the answer with guided reasoning
9: return A

Integrated Selection Score. We compute a comprehensive selection score for each remaining example, and the optimal example is selected as:

i* = argmax_i ( cover(s^i, S) + Σ_{ℓ=1}^{L} α(s^i_ℓ) ),  (11)

where L is the length of the reasoning strategy s^i. This score balances how well the example covers the required reasoning skills and how uniquely it captures specialized reasoning approaches.

3.4 Reasoning Strategy-Guided Answering

The final component of T2 uses the selected example to guide the reasoning process for answering the original question. Algorithm 1 outlines this process. The “ExtractRelevantSegment” function uses an LLM to identify the portions of the document D that are most relevant to applying a particular reasoning skill. This focuses the model's attention on information appropriate to each step of the reasoning process. The
“FormatPrompt” function combines the original question, the focused document segments, the selected reasoning strategy, and the example document-question-answer pair into a comprehensive prompt. This prompt instructs the language model to answer the original question by applying the reasoning skills in the selected strategy, using the example as a demonstration of the reasoning approach.

This methodology enables adaptive reasoning that scales with question complexity. For simple questions, T2 selects examples with a straightforward reasoning strategy, avoiding unnecessary computational overhead. For complex questions, it selects examples with a more sophisticated reasoning strategy that guides the model through the necessary steps to arrive at the correct answer. Importantly, this adaptation occurs without parameter tuning or multiple reasoning attempts, requiring only a single forward pass through the language model.

4 Experiments

4.1 Experimental Setups

Datasets. We evaluate our approach on seven QA datasets from diverse domains: SQuAD (general-domain questions from Wikipedia) (Rajpurkar et al., 2018), HotpotQA (multihop questions spanning multiple paragraphs) (Yang et al., 2018), BioASQ (biomedical queries requiring specialized knowledge) (Tsatsaronis et al., 2015), NewsQA (news-related passages) (Trischler et al., 2017), GAOKAO (an exam-oriented dataset with broad academic coverage) (Zhang et al., 2024b), HQA (historical questions focusing on chronology and figures) (Hosen et al., 2023), and TriviaQA (Wikipedia-based trivia) (Joshi et al., 2017). Appendix B summarizes dataset sizes and domains.

Reasoning Strategies and Metrics. We compare our T2 framework against slow-thinking and quick-thinking baselines. Slow-thinking approaches include proactive CoT (proCoT) (Deng et al., 2023), Self-Consistency (Wang et al., 2022), Tree of Thoughts (ToT) (Yao et al., 2023), and Monte Carlo Tree Search (MCTS) (Zhao et al., 2024).
Quick-thinking methods include few-shot prompting and direct prompting without explicit reasoning steps. For evaluation, we use ROUGE-L as our metric across all datasets. (We recognize that a generative model can produce the correct answer with different wording; hence we use ROUGE-L rather than a misleadingly low Exact Match (EM) rate. EM performance is reported in Appendix F.)

Model | SQuAD | HotpotQA | NewsQA | Gaokao | HQA | TriviaQA | BioASQ
Quick-Thinking Models w/ Reasoning Strategies
Qwen2.5-32B-Instruct w/ vanilla (quick) | 73.41 | 55.32 | 50.83 | 29.52 | 35.92 | 40.73 | 56.33
Qwen2.5-32B-Instruct w/ few-shots (quick) | 74.56 | 56.23 | 51.67 | 30.33 | 36.87 | 41.57 | 57.17
Qwen2.5-32B-Instruct w/ self-consistency (Wang et al., 2022) | 75.31 | 56.76 | 52.27 | 30.57 | 37.12 | 41.92 | 57.57
Qwen2.5-32B-Instruct w/ proCoT (Deng et al., 2023) | 77.12 | 58.07 | 53.57 | 31.42 | 38.03 | 42.83 | 58.62
Qwen2.5-32B-Instruct w/ ToT (Yao et al., 2023) | 78.47 | 59.11 | 54.31 | 31.96 | 38.66 | 43.46 | 59.36
Qwen2.5-32B-Instruct w/ MCTS (Zhao et al., 2024) | 78.52 | 58.97 | 54.25 | 32.04 | 38.73 | 43.51 | 59.42
Qwen2.5-32B-Instruct w/ T2 (ours) | 81.86 | 67.11 | 61.27 | 34.06 | 40.31 | 43.92 | 65.02
GPT-4o w/ vanilla (quick) | 78.52 | 60.02 | 55.32 | 34.51 | 41.11 | 49.01 | 60.51
GPT-4o w/ few-shots (quick) | 79.86 | 61.06 | 56.17 | 35.36 | 42.06 | 50.07 | 61.37
GPT-4o w/ self-consistency (Wang et al., 2022) | 80.56 | 61.61 | 56.62 | 35.62 | 42.46 | 50.42 | 61.81
GPT-4o w/ proCoT (Deng et al., 2023) | 82.12 | 63.02 | 57.86 | 36.66 | 43.36 | 51.46 | 62.87
GPT-4o w/ ToT (Yao et al., 2023) | 83.21 | 64.06 | 58.67 | 37.22 | 44.07 | 52.26 | 63.72
GPT-4o w/ MCTS (Zhao et al., 2024) | 83.35 | 64.18 | 58.19 | 37.31 | 45.15 | 52.38 | 64.89
GPT-4o w/ T2 (ours) | 85.06 | 66.16 | 60.92 | 37.57 | 45.27 | 53.92 | 66.97
Slow-Thinking Models
o1-mini | 85.81 | 70.91 | 63.22 | 42.66 | 49.22 | 58.56 | 68.42
QwQ-32B-Preview | 86.87 | 71.86 | 63.92 | 43.23 | 49.62 | 59.16 | 69.02
DeepSeek-R1 | 87.62 | 72.72 | 64.41 | 43.47 | 50.27 | 60.02 | 70.72
o1 | 88.22 | 73.37 | 65.11 | 44.06 | 51.07 | 60.86 | 71.36
o4-mini | 88.72 | 73.86 | 65.57 | 44.32 | 51.61 | 61.11 | 71.82
o4-mini-high | 88.91 | 74.07 | 65.81 | 44.52 | 51.86 | 61.27 | 72.02
Claude-3.7-sonnet-thinking | 89.11 | 74.21 | 66.01 | 44.61 | 52.01 | 61.47 | 72.22
o3 | 89.41 | 74.61 | 66.32 | 45.01 | 52.11 | 61.81 | 72.62
Gemini-2.5-Pro | 90.27 | 75.46 | 67.11 | 45.76 | 53.07 | 62.68 | 73.57
QwQ-32B + T2 (ours) | 92.12 | 77.61 | 68.61 | 47.42 | 54.71 | 64.22 | 75.21

Table 1: ROUGE-L on seven QA datasets. We regard the vanilla model and the few-shot method as quick-thinking methods; the other five (including ours) are slow-thinking methods. All can be applied to quick-thinking models to improve reasoning ability.

Large Language Models. We use two quick-thinking LLMs (Qwen2.5-32B-Instruct (Yang et al., 2024) and GPT-4o (Hurst et al., 2024; Guo et al., 2025)) and several slow-thinking LLMs (the GPT o1/o3/o4 series (Jaech et al., 2024), QwQ-32B-Preview (Qwen, 2025), Claude-3.7 (Anthropic, 2025), and Gemini-2.5-Pro (Gemini, 2025b)). Unless otherwise specified, hyperparameters are set to each model's defaults. No domain-specific fine-tuning and no task-specific prompt engineering are applied, ensuring a fair and consistent comparison. Detailed implementation notes and all prompts can be found in Appendices D and E.

4.2 Results

Table 1 compares ROUGE-L on the seven QA benchmarks. The upper half lists quick-thinking models evaluated with several slow-thinking frameworks; the lower half gathers the strongest slow-thinking models. We also report Qwen2.5-32B-Instruct + T2 and QwQ-32B-Preview + T2 to enable comparison with the slow-thinking models. The results show that, compared with other thinking strategies, T2 helps quick-thinking models achieve better performance; compared with other slow-thinking models, adding T2 likewise improves performance.
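ROUGE-L, the evaluation metric above, scores a hypothesis against a reference via their longest common subsequence (LCS). A minimal F-measure sketch of the metric follows; it is illustrative only, and a standard scoring package would be used in practice.

```python
# Minimal sketch of ROUGE-L F1: precision and recall are taken over the
# longest common subsequence (LCS) of the two token sequences.

def rouge_l(hypothesis, reference):
    h, r = hypothesis.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i, ht in enumerate(h):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if ht == rt
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[len(h)][len(r)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(h), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("the answer is Liverpool", "Liverpool"))  # -> 0.4
```

This illustrates why ROUGE-L is more forgiving than Exact Match: a verbose but correct answer still receives partial credit.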
We conduct several analysis experiments, detailed as follows.

4.2.1 T2 Enhances the Reasoning Skills Hit Rate while Reducing the Error

HotpotQA supplies gold supporting sentences for every question, so we use these to evaluate reasoning quality. For a model output that mentions a set P_q of sentences and a gold set G_q, we record a Hit if P_q ⊇ G_q (all required facts retrieved) and an Error if P_q ⊈ G_q (at least one spurious fact added). Thus Hit measures completeness, Error measures precision, and the two are inversely related: longer chains tend to raise Hit but also raise Error. Figure 2 (left) shows that quick-thinking frameworks give low Hit and moderate Error, while slow-thinking methods improve Hit at the cost of higher Error. Our T2 strikes the best balance, achieving the highest Hit and the lowest Error on Qwen2.5-32B, confirming that adaptive path length yields the most accurate multihop reasoning.

Figure 2: Results on Hits and Errors (left) and Retrace Rate (right). Ours achieves a high Hit rate while maintaining a low Error rate.

Skill Type | Uniform | Ours | Improvement
Deductive | 72.3% | 75.8% | +3.5%
Inductive | 68.7% | 73.2% | +4.5%
Abductive | 74.1% | 76.3% | +2.2%
Cause & Effect | 70.5% | 74.1% | +3.6%
Analogical | 63.8% | 71.5% | +7.7%
Critical Thinking | 69.2% | 72.8% | +3.6%
Decompositional | 61.4% | 69.7% | +8.3%

Table 2: Performance comparison between the uniform and our matching strategies.

The detailed calculation of Hits and Errors can be
found in Appendix G.1.

4.2.2 T2 Tends to Get Correct Answers Immediately without Retrace

A response is said to retrace if the model announces a provisional conclusion and later backtracks on it within the same output (e.g., “So the answer is X ... wait, that seems wrong; let me revise ... the answer is Y”). Since retracing brings extra computational cost, a model should keep its retrace rate low while maintaining the same accuracy. Concretely, we scan the CoT for either (i) <answer> markers that appear more than once, or (ii) lexical repair cues such as “sorry,” “actually,” or “let me rethink,” followed by a different answer span; if either pattern occurs, the example counts as a retrace. Figure 2 (right) shows that, with Qwen2.5-32B as the LLM, slow-thinking methods retrace more on NewsQA and HQA, whereas quick-thinking methods seldom retrace but miss clues, hurting performance. Our T2 keeps both metrics low, matching the speed of quick thinking and the accuracy of slow thinking, demonstrating that adaptive path length minimizes wasted reasoning. The detailed calculation of the retrace rate can be found in Appendix G.2.

Figure 3: Results of average token length on each dataset, for Qwen2.5-32B w/ SC, Qwen2.5-32B w/ T2, QwQ-32B, and QwQ-32B w/ T2. SC is the abbreviation for Self-Consistency.

4.2.3 T2 Costs Fewer Tokens to Achieve Superior Performance

To evaluate the efficiency of T2, we compare four reasoning approaches: (1) Qwen2.5-32B w/ self-consistency, a typical slow-thinking method; (2) QwQ-32B-Preview, another slow-thinking model; and (3) Qwen2.5-32B w/ T2 and (4) QwQ-32B w/ T2, our adaptive reasoning methods.
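The retrace heuristic of Section 4.2.2 above can be sketched as a simple pattern scan. The cue list and regular expressions here are simplified stand-ins for the paper's exact detection rules (Appendix G.2).

```python
import re

# Illustrative detector for the retrace heuristic of Section 4.2.2: flag an
# output if <answer> markers occur more than once, or if a repair cue is
# followed by another "answer is" span. Cues and patterns are simplified
# stand-ins for the paper's exact rules.

REPAIR_CUES = re.compile(r"\b(sorry|actually|let me rethink)\b", re.IGNORECASE)

def is_retrace(output):
    answers = re.findall(r"<answer>(.*?)</answer>", output)
    if len(answers) > 1:          # (i) more than one explicit answer marker
        return True
    cue = REPAIR_CUES.search(output)
    # (ii) a repair cue followed by a (potentially different) answer span.
    return bool(cue) and "answer is" in output[cue.end():].lower()

print(is_retrace("So the answer is X... wait, sorry, actually the answer is Y"))  # -> True
print(is_retrace("<answer>Liverpool</answer>"))  # -> False
```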
Figure 3 shows that our method reduces token consumption by 25.2% compared to QwQ-32B-Preview, and by 14.8% compared to Qwen2.5-32B w/ self-consistency, while maintaining competitive accuracy. These findings highlight that our method achieves an optimal trade-off between computational efficiency and reasoning quality. A full comparison, including token usage and performance across datasets, is provided in Appendix I.

4.3 Similar Examples Quality Analysis

Our Matching Strategy Can Expose More Diverse Reasoning Skills. The effectiveness of our framework relies not only on identifying appropriate reasoning skills but also on how these skills

[Reference Documents] Question: In what city was the subject of the film Nowhere Boy born?
Proper reasoning chain: 1. Decompositional: find (a) the film's subject and (b) the birthplace. 2. Deductive: Nowhere Boy is about John Lennon. 3. Deductive: John was born in Liverpool.
Quick-thinking model's wrong answer: The subject of Nowhere Boy was born in London.
Slow-thinking model's overthinking answer: [After a lengthy analysis of various biographical details concerning] John Lennon ... was born in Liverpool.
Model with our T2's correct answer: Since Nowhere Boy is a film about John Lennon (Doc 2) and Doc 1 confirms that John was born in Liverpool, we deduce the answer is Liverpool.

Figure 4: Case study showing the effectiveness of our T2 framework. Three proper reasoning skills should be adopted to answer the question based
on given documents. The red, orange, and green answers represent responses under quick thinking, slow thinking, and ours, respectively.

are matched during example selection. Hence, we examine the impact of our multi-criteria reasoning-skill matching strategy compared to a naive uniform sampling approach. Table 2 presents the results of our experiment against uniform sampling across different reasoning skill types. Our approach consistently outperforms uniform sampling across all skill categories, with particularly notable improvements for less frequent reasoning types such as decompositional reasoning (+8.3%) and analogical reasoning (+7.7%). This confirms our hypothesis that strategically balancing skill demonstrations enhances the model's ability to leverage diverse reasoning patterns. The distribution of each reasoning skill can be found in Appendix C. An ablation study of the multi-criteria matching strategy can be found in Appendix H.

Accuracy of Reasoning Skills Results in Correctness of Answers. We examined the correlation between the accuracy of selected reasoning skills and the correctness of final answers on the HotpotQA dataset, using two models: Qwen2.5-32B-Instruct w/ T2 and QwQ-32B-Preview w/ T2.

Figure 5: Results on the relationship between reasoning-skill accuracy and overall performance (HotpotQA; Qwen2.5-32B w/ T2 and QwQ-32B w/ T2).

The analysis, shown in Figure 5, reveals a strong positive correlation between skill accuracy and answer correctness: higher skill accuracy corresponds to higher answer correctness, with an approximate 5-6% increase in correctness for every 5% improvement in skill accuracy. These results demonstrate that accurately selecting the correct reasoning skills is essential for generating correct answers, especially in complex multi-hop reasoning tasks.
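The reported trend (roughly a 5-6% correctness gain per 5% of skill accuracy, i.e., a slope near 1.1) can be checked with an ordinary least-squares fit. The data points below are hypothetical, chosen only to illustrate the computation, not read from Figure 5.

```python
# Sanity check of the reported trend: a 5-6% correctness gain per 5% of
# skill accuracy corresponds to a slope near 1.1. Data points are
# hypothetical illustrations, not values read from Figure 5.

def least_squares_slope(xs, ys):
    """Slope of the ordinary least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

skill_accuracy = [60, 65, 70, 75, 80]                 # percent (hypothetical)
answer_correctness = [30.0, 35.5, 41.0, 46.5, 52.0]   # percent (hypothetical)
print(least_squares_slope(skill_accuracy, answer_correctness))  # -> 1.1
```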
We also discuss, in the appendix, the impact of question structure (J.1), the number of similar examples (J.3), various generation methods (J.4), the similarity threshold used in generation (J.5), example domain bias and structural bias (J.6), and human evaluation (J.7).

4.4 Case Study

Figure 4 shows a shortened example illustrating the effectiveness of our T2. By explicitly providing a model-specific reasoning path, the model can generate the correct answer with an appropriate reasoning chain of thought. Detailed case studies can be found in Appendix K.

5 Conclusion

In this paper, we introduced T2: Think-to-Think, a novel framework that dynamically adapts reasoning depth based on question complexity for contextual question answering tasks. Unlike prior approaches that employ fixed reasoning strategies regardless of question difficulty, T2 enables models to learn appropriate reasoning strategies from similar examples, leading to more efficient processing while maintaining accuracy. Our experimental results across seven diverse CQA benchmarks confirm that T2 not only achieves higher accuracy than baseline methods but also reduces computational overhead by up to 25.2%. These improvements demonstrate the value of adaptability in reasoning processes, suggesting that as language models continue to evolve, approaches like T2 that optimize both accuracy and computational efficiency will become increasingly important for developing intelligent systems that can effectively allocate computational resources based on task demands.

Limitations
While T2: Think-to-Think demonstrates promising results across various CQA benchmarks, we acknowledge several limitations of our approach. First, the effectiveness of T2 relies on the availability of high-quality example reasoning strategies for similarity matching. In domains with limited annotated examples or highly novel questions, the framework may struggle to identify appropriate reasoning patterns, potentially defaulting to less optimal strategies. Second, our current implementation focuses primarily on textual reasoning tasks. Extending T2 to multimodal reasoning contexts (e.g., visual question answering) would require additional architectural modifications to handle diverse input modalities while maintaining computational efficiency. Despite these limitations, we believe T2 represents a significant step toward more adaptive and efficient reasoning systems that can intelligently allocate computational resources based on question complexity.

Ethical Considerations

We ensure that all experiments are conducted using publicly available, ethically sourced datasets, adhering to privacy and intellectual property guidelines. We acknowledge the potential for biases in data and are committed to evaluating and mitigating any such biases in T2.

References

Anthropic. 2025. Claude 3.7 sonnet system card. Technical report.

Paul Bartha. 2013. Analogy and Analogical Reasoning. In Edward N. Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy, Fall 2024 edition. Metaphysics Research Lab, Stanford University.

Pedro Bordalo, Nicola Gennaioli, Giacomo Lanzani, and Andrei Shleifer. 2024. A cognitive theory of reasoning and choice.

Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787.
Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, Ion Stoica, Matei A Zaharia, and James Y Zou. 2024. Are more llm calls all you need? towards the scaling properties of compound ai systems. Advances in Neural Information Processing Systems, 37:45767–45790.

Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, and Tat-Seng Chua. 2023. Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and non-collaboration. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10602–10621, Singapore. Association for Computational Linguistics.

Google Gemini. 2025a. Gemini 2.5 flash thinking mode.

Google Gemini. 2025b. Gemini 2.5: Our most intelligent ai model. Technical report.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.

Sabbir Hosen, Jannatul Ferdous Eva, Ayman Hasib, Aloke Kumar Saha, M.F. Mridha, and Anwar Hussen Wadud. 2023. Hqa-data: A historical question answer generation dataset from previous multi perspective conversation. Data in Brief, 48:109245.

Zhen Huang, Haoyang Zou, Xuefeng Li, Yixiu Liu, Yuxiang Zheng, Ethan Chern, Shijie Xia, Yiwei Qin, Weizhe Yuan, and Pengfei Liu. 2024. O1 replication journey–part 2: Surpassing o1-preview through simple distillation, big progress or bitter lesson? arXiv preprint arXiv:2411.16489.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o
system card. arXiv preprint arXiv:2410.21276.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, and 1 others. 2024. Openai o1 system card. arXiv preprint arXiv:2412.16720.

Ke Ji, Jiahao Xu, Tian Liang, Qiuzhi Liu, Zhiwei He, Xingyu Chen, Xiaoyuan Liu, Zhijie Wang, Junying Chen, Benyou Wang, and 1 others. 2025. The first few tokens are all you need: An efficient and effective unsupervised prefix fine-tuning method for reasoning models. arXiv preprint arXiv:2503.02875.

Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 7969–7992. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Uri Katz, Matan Vetzler, Amir Cohen, and Yoav Goldberg. 2023. Neretrieve: Dataset for next generation named entity recognition and retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3340–3354.

Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. CoRR, abs/2212.14024.

Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023. Making language models better reasoners with step-aware verifier.
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315–5333.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. In The Twelfth International Conference on Learning Representations.

Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. Query rewriting for retrieval-augmented large language models. CoRR, abs/2305.14283.

Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, and 1 others. 2024. Imitate, explore, and self-improve: A reproduction report on slow-thinking reasoning systems. arXiv preprint arXiv:2412.09413.

Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. 2025. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 5687–5711. Association for Computational Linguistics.

Qwen. 2024. Qwq: Reflect deeply on the boundaries of the unknown.

Qwen. 2025. Qwq-32b: Embracing the power of reinforcement learning. Technical report.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable
questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.

Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 9248–9274. Association for Computational Linguistics.

Yi Shen, Jian Zhang, Jieyun Huang, Shuming Shi, Wenjing Zhang, Jiangze Yan, Ning Wang, Kai Wang, and Shiguo Lian. 2025. Dast: Difficulty-adaptive slow-thinking for large reasoning models. arXiv preprint arXiv:2503.04472.

Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10014–10037. Association for Computational Linguistics.

George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artieres, Axel Ngonga, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, and 3 others. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16:138.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. 2024. An empirical analysis of compute-optimal inference for problem-solving with language models.

Mayi Xu, Yongqi Li, Ke Sun, and Tieyun Qian. 2024. Adaption-of-thought: Learning question difficulty improves large language models for reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5468–5495.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, and 22 others. 2024. Qwen2.5 technical report. Technical report.

Wenkai Yang, Shuming Ma, Yankai Lin, and Furu Wei. 2025. Towards thinking-optimal scaling of test-time compute for llm reasoning. arXiv preprint arXiv:2502.18080.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom
Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. Advances in neural information processing systems , 36:11809–11822. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825 . Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Yun- hua Zhou, and Xipeng Qiu. 2025. Revisiting the test-time scaling of o1-like models: Do they truly possess test-time scaling capabilities? arXiv preprint arXiv:2502.12215 . Di Zhang, Xiaoshui Huang, Dongzhan Zhou, Yuqiang Li, and Wanli Ouyang. 2024a. Accessing gpt-4 level mathematical olympiad solutions via monte carlo tree self-refine with llama-3 8b. arXiv preprint arXiv:2406.07394 . Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2024b. Evaluating the performance of large language models on gaokao benchmark. Preprint , arXiv:2305.12474.Yu Zhao, Huifeng Yin, Bo Zeng, Hao Wang, Tianqi Shi, Chenyang Lyu, Longyue Wang, Weihua Luo, and Kaifu Zhang. 2024. Marco-o1: Towards open reasoning models for open-ended solutions. arXiv preprint arXiv:2411.14405 . A Full Reasoning Skills Defined by (Bartha, 2013; Bordalo et al., 2024), reasoning can best be defined as the basic action of thinking in a sensible and rational way about something. Reasoning is the ability to assess things rationally by applying logic based on new or exist- ing information when making a decision or solving a problem. Based on their conclusion, Tables 3 and 4 show the reasoning skills for answering a certain question. B Datasets In this work, we evaluate our method on seven widely used question answering datasets. Each dataset presents distinct characteristics, ranging from the type of questions asked to the domain in which they are applied. Below, we provide a brief overview of each dataset. 
SQuAD consists of over 100,000 question-answer pairs derived from a set of Wikipedia articles. The task is to find the span of text that answers the question. SQuAD is widely used for evaluating machine reading comprehension models. The dataset includes two versions: SQuAD 1.1, which contains answerable questions, and SQuAD 2.0, which also includes unanswerable questions, making it more challenging. We use version 2.0 here.

HotpotQA is a large-scale, multi-hop question answering dataset that requires reasoning across multiple supporting facts. The dataset includes over 113,000 question-answer pairs spanning various domains, where answers cannot be found in a single sentence or passage but require combining information from several documents. The questions in HotpotQA demand a more complex reasoning process than typical single-hop datasets.

BioASQ is a biomedical question answering dataset that provides information from scientific articles, primarily in the domain of biomedicine. It includes both factoid and complex questions that require understanding of scientific literature. BioASQ focuses on answering clinical, biomedical, and molecular-biology-related questions using both structured and unstructured data sources.

Deductive
Description: Deductive reasoning occurs when generalized statements apply to specific cases. These generalized statements are established and already proven, making specific cases easy to deduce. For example: all humans are mortals; Bill is a human, so Bill must be mortal. Here the generalized, proven statement "all humans are mortals" is what drives the reasoning.
Example: Document: All shapes with three sides are triangles. A certain figure here has exactly three sides. Question: What is this figure called? Answer: It is a triangle. All shapes with three sides are triangles, and this figure has three sides, so it must be a triangle.

Inductive
Description: Inductive reasoning is similar to deductive reasoning in that both draw a conclusion based on a statement. In inductive reasoning, however, the statement is likely but has not been proven. For example, roses usually bloom in spring, so in spring one can count on there being roses; this is likely but not guaranteed.
Example: Document: Every spring for the past ten years, wild roses in Green Valley have bloomed in late March. This spring is about to begin in Green Valley. Question: Will the wild roses bloom in late March this year? Answer: It is likely they will bloom in late March, because they usually do, but it is not guaranteed.

Abductive
Description: Abductive reasoning is the act of making a conclusion based on what you already know. For example, if you see a plate of food that is still hot but half-eaten, you can conclude that the person eating that food is probably returning soon.
Example: Document: You notice a half-eaten sandwich and a still-hot cup of coffee on a café table. The seat feels warm, and a jacket is draped over the chair. Question: Has the person who was sitting here left permanently, or are they coming back soon? Answer: It is likely they just stepped away for a moment and will return, because the food and drink are still warm and their jacket remains on the chair.

Cause & Effect
Description: Cause-and-effect reasoning holds that if x happens, then y will happen as a result. This is extremely persuasive when making a speech or trying to get someone to take action to cause an effect. For example, a politician may say that if they are elected, then poverty will decrease.
Example: Document: Meteorologists predict heavy rain this evening, with warnings that streets may flood if the rainfall continues. Question: Will the roads become dangerous as a result of this weather? Answer: Yes. If heavy rain continues, roads will likely flood and become slippery, causing drivers to have less control of their vehicles.

Table 3: (1/2) Full list of reasoning skills used in the reasoning path construction.

NewsQA is a dataset designed for reading comprehension tasks. It consists of over 100,000 question-answer pairs derived from news articles. The challenge of NewsQA lies in answering questions about real-world events from unstructured news stories, requiring models to handle various linguistic phenomena such as coreference, reasoning, and implicit understanding.

GAOKAO is a dataset derived from the Chinese college entrance exam, also known as the "Gaokao". It contains questions related to
various subjects, including Chinese literature, mathematics, and English. The questions in GAOKAO require both general knowledge and reasoning to answer. This dataset is specifically designed for the Chinese education system and is widely used in academic and educational research in China.

HQA is a human-annotated dataset specifically designed for complex, open-domain question answering. It contains questions that require deep contextual understanding and can involve reasoning across long documents. The dataset includes various types of questions and answers across diverse domains, and it was created to test models' ability to perform reasoning tasks in realistic, open-ended settings.

TriviaQA is a large-scale dataset that focuses on answering trivia questions, where each question is associated with a corresponding set of supporting documents. TriviaQA contains over 650,000 question-answer pairs sourced from trivia websites and requires models to retrieve relevant information from the documents and answer based on the provided facts. The questions span various topics such as history, geography, and general knowledge.

Analogical
Description: Analogical reasoning uses a comparison between two things to argue that if they already share something, there must be more in common. For example, if x, y, and z all share one trait, then they may well share other traits. This type of reasoning is well suited to speeches and real-world comparisons: if there are connections between x and y already, then they may have several other things in common as well.
Example: Document: Many leading technology companies emphasize continuous learning and adaptability. For instance, Google, Microsoft, and Amazon all invest in regular training programs and encourage innovation among employees. Their similar approach to fostering a culture of growth has been linked to their strong performance in rapidly changing markets. Question: Can we infer that a company that promotes continuous learning will also likely be successful in adapting to market changes? Answer: Yes. Since Google, Microsoft, and Amazon all share a culture of continuous learning and, as a result, demonstrate high adaptability and market success, it is reasonable to conclude by analogy that a company which also promotes continuous learning is likely to develop similar strengths.

Critical Thinking
Description: Critical thinking occurs when you take all of the facts and develop a conclusion based on an analysis. This can happen subconsciously or intentionally, depending on the situation. In the real world, for example, critical thinking could apply to your relationships: you might observe a behavior you don't like in someone and have to think critically about whether to spend more time with that person.
Example: Document: Over the past few months, Sam has repeatedly cancelled plans at the last minute and rarely communicated afterward. Question: Should you invest time in a close friendship with Sam? Answer: No. Sam's consistent behavior of last-minute cancellations suggests a pattern of unreliability, which may negatively affect the trust needed in a close friendship.

Decompositional
Description: Decompositional reasoning happens when the different parts of the reasoning are broken down into smaller pieces and analyzed for how they contribute to the whole. The intent is to make the reasoning easier to understand and to allow analysis of how the parts equal the whole. For example, to understand the function of the human body, you would analyze each bone and organ to see how they all work together. Likewise, an argument can be broken down into several smaller parts to analyze the effectiveness of the argument as a whole.
Example: Document: A smartphone's quality can be understood by breaking it down into three parts: its design, performance, and battery life. The design covers the build and user interface; performance looks at processing speed and software efficiency; battery life shows how long the device operates on a single charge. Question: Can we conclude that the smartphone provides a good overall user experience? Answer: Yes. If the design is appealing, the performance is robust, and the battery life is long, then the smartphone is likely to offer a good overall experience.

Table 4: (2/2) Full list of reasoning skills used in the reasoning path construction.

C Distribution of Reasoning Skills in Each Dataset

Table 5 shows the distribution of the seven reasoning skills across datasets. The variance in skill distribution highlights why our multi-criteria matching approach is crucial: without it, high-frequency skills like deductive reasoning would dominate the demonstrations, while valuable but less common skills like abductive reasoning would be underrepresented.

D Implementations

We use a simple pretrained language model, RoBERTa from Huggingface, for detecting named entities or key numbers in the question to obtain the question structure.
This classification task involves processing the input question to identify whether it contains a named entity or key number and assigning a type to the detected entity. The model first outputs a binary label (entity: Yes/No) and then the associated entity type (e.g., Person, Location, Date, Organization, Number, etc.). The model is fine-tuned with a simple classification layer that detects whether a named entity or key number is present in the question, using the NERetrieve dataset3 (Katz et al., 2023). This process leverages the model's pre-trained knowledge, with minimal fine-tuning specifically focused on the entity detection and classification task.

3https://github.com/katzurik/NERetrieve?tab=readme-ov-file

Skill Type          SQuAD  HotpotQA  NewsQA  GAOKAO  HQA   TriviaQA  BioASQ
Deductive           0.31   0.22      0.28    0.15    0.42  0.18      0.25
Inductive           0.23   0.18      0.15    0.12    0.13  0.21      0.19
Abductive           0.05   0.12      0.08    0.21    0.09  0.15      0.11
Cause & Effect      0.12   0.15      0.13    0.22    0.08  0.19      0.14
Analogical          0.08   0.13      0.09    0.07    0.11  0.12      0.16
Critical Thinking   0.14   0.16      0.18    0.14    0.13  0.09      0.10
Decompositional     0.07   0.04      0.09    0.09    0.04  0.06      0.05

Table 5: Distribution of reasoning skills across benchmark datasets, showing the proportion of questions requiring each skill type.

Parameter          Value
Model              RoBERTa
Full Name          FacebookAI/xlm-roberta-large-finetuned-conll03-english
Batch Size         128
Learning Rate      2e-5
Optimizer          AdamW
Dropout Rate       0.1
Evaluation Metric  Accuracy

Table 6: Implementation parameters for named entity detection and classification.

The hyperparameters used for fine-tuning the PLM are listed in Table 6. The batch size is set to 128 and the learning rate to 2e-5. AdamW is used as the optimizer, and a dropout rate of 0.1 is applied to prevent overfitting during fine-tuning.

For LLM usage, we use two quick-thinking LLMs (Qwen2.5-32B-Instruct (Yang et al., 2024) and GPT-4o (Hurst et al., 2024; Guo et al., 2025)) and several slow-thinking LLMs (the GPT o1/o3/o4 series (Jaech et al., 2024), QwQ-32B-Preview (Qwen, 2025), Claude-3.7 (Anthropic, 2025), and Gemini-2.5-Pro (Gemini, 2025b)). For the ToT implementation, we follow the original paper's approach (Yao et al., 2023) with a breadth-first search strategy and a maximum depth of 3. For MCTS, we implement the standard UCT algorithm with 10 simulations per decision point. For synthetic QA generation, we set a maximum output length of 4,096 tokens. When deciding which similar example to use, we follow our multi-criteria matching (Section 3.3) to pick the most relevant chain of skills. Unless otherwise specified, hyperparameters stay at their default values for each model. No domain-specific fine-tuning and no specially designed prompts are applied, ensuring a fair and consistent comparison. All inference runs on the vLLM framework with one node of Tesla V100 GPUs and one node of GeForce RTX 3090 GPUs.

E Inference Prompts

The primary task is to generate synthetic question-answer pairs with a reasoning path, reflecting predefined reasoning skills. Table 7 shows our prompt. Table 8 shows the prompt for the helper language model that evaluates how well each example's question aligns with the original one. Table 9 shows the question answering prompt for the model.

F Performance with Exact Match Metric

Open QA datasets generally use Exact Match (EM) as their evaluation metric.
However, generative models can produce correct answers with different surface forms (e.g., "San Francisco", "The San Francisco City", and "SF U.S."). Hence we use ROUGE-L as the metric in our overall performance evaluation. In addition, we report our experimental results under EM in Table 10.

Prompt: You are a language model that generates synthetic question-answer (QA) pairs with reasoning paths. Your task is to generate a QA pair based on the following question. Additionally, you should provide a clear, step-by-step reasoning path that corresponds to a predefined reasoning skill. The predefined reasoning skills are: [REASONING SKILLS NAME+DESCRIPTION+EXAMPLES]. Your reasoning path should include clear substeps for each step of the thought process.
Example 1:
Given Documents: [REFERENCE DOCUMENTS]
Input Question: "Who invented the telephone?"
Step-by-step Reasoning Path:
1. Identify the key entity: "telephone" (deductive)
2. Identify that the question is asking for the inventor of a significant historical device (decompositional)
3. Recall the historical context of the invention of the telephone. (deductive)
4. The inventor is Alexander Graham Bell. (cause & effect)
Generated Answer: "Alexander Graham Bell invented the telephone in 1876."
Reasoning Skill Used: deductive, decompositional, deductive, cause & effect.
Example 2: ...
Example 3: ...
Notes: Please make sure that the reasoning path is clear and includes each substep in the thought process. The output should follow this structure: "Step-by-step reasoning," followed by the conclusion. Each reasoning skill corresponds to a specific domain of knowledge.

Table 7: Prompt to Generate Similar Examples with Reasoning Paths.

Prompt: You are given an original question: [ORIGINAL QUESTION]
You also have a synthetic question: [SYNTHETIC QUESTION]
Your task is to decide how similar the synthetic question is in structure and complexity, compared to the original. Please provide a brief explanation of your reasoning. Then, assign a score from 1 (completely different) to 10 (very similar).
Example:
Original Q: "Who discovered penicillin?"
Synthetic Q: "Which scientist found the mold that led to antibiotics?"
Explanation: Both questions ask about a discoverer of a major medical breakthrough. The second question focuses on the mold (penicillin), so it is structurally similar and retains the core inquiry about a discovery.
Score (1-10): 8
Notes:
- Provide a short justification.
- Avoid rewriting or changing the question.
- Keep the final output concise, ending with the numeric score.

Table 8: Prompt for Evaluating Alignment of Synthetic Questions with the Original.

Prompt: You are given:
- The original question: [Q]
- A document or context: [D]
- A selected reasoning path: [R]
- The specific skills used in the reasoning path: [S]
Your goal is to produce a final answer by combining the relevant information from [D] with the guided reasoning steps from [R]. Follow these instructions:
1. Review the Reasoning Path: Read each step in [R] carefully. Identify which parts of [D] or background knowledge support each step.
2. Apply the Skills: If [S] includes certain reasoning skills (e.g., deduction), make sure to explicitly use them when combining evidence from [D].
3. Generate a Clear Answer: Compose a concise final answer that directly addresses [Q]. You may outline your chain of thought, but keep the explanation aligned with [R].
4. Maintain Accuracy: If [R] instructs a specific substep (e.g., numerical calculation or bridging multiple facts), follow it precisely, citing the relevant parts of [D].
Notes:
- Do not contradict the provided reasoning path.
- Cite relevant text from [D] if needed, but avoid unnecessary repetition.
- End with a concise, standalone final answer.

Table 9: Prompt for Question Answering.

Model                                      SQuAD  HotpotQA  NewsQA  Gaokao  HQA    TriviaQA  BioASQ
Quick-Thinking Models w/ Reasoning Strategies (Exact Match)
Qwen2.5-32B-Instruct
  w/ vanilla (quick)                       55.23  31.69     27.15   12.61   19.47  23.82     40.88
  w/ few-shots (quick)                     56.42  32.35     28.13   13.06   20.19  24.58     41.76
  w/ self-consistency (Wang et al., 2022)  57.08  32.81     28.49   13.29   20.37  24.89     41.93
  w/ proCoT (Deng et al., 2023)            58.65  33.81     29.32   13.86   21.02  25.58     42.77
  w/ ToT (Yao et al., 2023)                59.83  34.67     29.81   14.12   21.51  26.04     43.28
  w/ MCTS (Zhao et al., 2024)              59.87  34.53     29.76   14.21   21.57  26.08     43.33
  w/ T2 (ours)                             62.65  39.98     34.12   15.72   22.58  26.43     48.14
GPT-4o
  w/ vanilla (quick)                       59.87  35.31     30.27   16.23   23.39  29.84     44.12
  w/ few-shots (quick)                     61.09  36.04     30.85   16.78   24.03  30.52     44.84
  w/ self-consistency (Wang et al., 2022)  61.63  36.42     31.14   16.92   24.28  30.78     45.13
  w/ proCoT (Deng et al., 2023)            62.89  37.29     31.92   17.49   25.01  31.43     46.05
  w/ ToT (Yao et al., 2023)                63.77  37.94     32.43   17.79   25.57  32.11     46.61
  w/ MCTS (Zhao et al., 2024)              63.94  38.13     32.06   17.84   26.22  32.18     47.39
  w/ T2 (ours)                             65.17  39.51     33.76   17.98   26.31  33.12     49.07
Slow-Thinking Models (Exact Match)
o1-mini                                    65.82  42.89     35.08   21.87   29.51  36.52     50.68
QwQ-32B-Preview                            66.67  43.57     35.43   22.18   29.88  36.87     51.21
DeepSeek-R1                                67.38  44.04     35.94   22.28   30.26  37.52     52.31
o1                                         67.92  44.57     36.42   22.53   30.75  38.12     52.94
o4-mini                                    68.36  44.97     36.83   22.79   31.08  38.28     53.14
o4-mini-high                               68.54  45.18     37.01   22.89   31.23  38.42     53.27
Claude-3.7-sonnet-thinking                 68.67  45.27     37.14   22.97   31.32  38.56     53.41
o3                                         68.89  45.48     37.34   23.19   31.37  38.78     53.74
Gemini-2.5-Pro                             69.69  46.08     37.97   23.51   32.01  39.43     54.45
QwQ-32B + T2 (ours)                        71.32  47.87     39.17   24.63   33.28  40.39     55.81

Table 10: Exact Match (EM) scores on seven QA datasets.

G Calculation of Proposed Metrics

G.1 Hits and Errors

Hits Metric Calculation. To evaluate the quality of reasoning and fact retrieval in the generated outputs, we employ the Hits metric based on the gold supporting sentences provided in HotpotQA. For each question q, let P_q represent the set of sentences mentioned in the model's reasoning process and G_q denote the set of gold supporting sentences. We calculate the Hits metric as follows:

\mathrm{Hits} = \frac{\sum_{q \in Q} \mathbb{1}[P_q \supseteq G_q]}{|Q|}    (12)

where \mathbb{1}[\cdot] is an indicator function that equals 1 when the condition is satisfied and 0 otherwise, and |Q| is the total number of questions in the evaluation set. This formulation is similar to recall in traditional information retrieval, measuring the proportion of questions for which all required facts were successfully retrieved.

Errors Metric Calculation.
For the Error metric, we adopt the False Discovery Rate (FDR) formulation:

\mathrm{Error} = \frac{\sum_{q \in Q} \mathbb{1}[P_q \not\subseteq G_q]}{\sum_{q \in Q} \left( \mathbb{1}[P_q \supseteq G_q] + \mathbb{1}[P_q \not\subseteq G_q] \right)}    (13)

This represents the proportion of spurious facts (false positives) among all retrieved facts, consistent with the FDR calculation as FP/(TP+FP). These complementary metrics create a natural trade-off: longer reasoning chains tend to improve Hits by including more supporting facts, but often at the expense of increasing Error through the introduction of irrelevant information. An ideal reasoning process would maximize Hits while minimizing Error, indicating that the model precisely identifies all necessary supporting facts without including extraneous information.

G.2 Retrace Rate

We define a response as exhibiting a retrace when the model initially states a provisional conclusion and subsequently revises it within the same output. This occurs in patterns such as "So the answer is X... wait, that seems wrong, let me revise... the answer is Y." To systematically identify retraces, we analyze the Chain-of-Thought (CoT) reasoning for two specific patterns: (i) multiple occurrences of <answer> markers, or (ii) lexical repair cues (e.g., "sorry," "actually," "let me rethink") followed by a different answer span. If either pattern is detected, we count the example as containing a retrace.

Method            HotpotQA  NewsQA  HQA
Random            32.4      38.7    19.2
Coverage Only     41.6      46.3    27.8
Uniqueness Only   49.2      54.5    35.7
Full Approach     67.1      61.3    40.3

Table 11: Ablation study results showing the impact of different components in our selection approach. We report ROUGE-L (%) on three benchmark datasets.

The Retrace Rate is calculated as:

\mathrm{Retrace\,Rate} = \frac{\sum_{q \in Q} \mathbb{1}[\text{retrace detected in } q]}{|Q|}    (14)

where \mathbb{1}[\cdot] is an indicator function that equals 1 when a retrace is detected and 0 otherwise, and |Q| is the total number of questions in the evaluation set. This metric quantifies the proportion of responses in which the model explicitly revises its reasoning path, providing insight into the model's self-correction capabilities during the reasoning process.

H Ablation Study

To validate the effectiveness of our multi-criteria matching approach, we conducted ablation studies by systematically removing or modifying key components of our selection mechanism.

Impact of Selection Components. We evaluated four variants of our selection approach: (1) using only skill coverage without uniqueness weighting, (2) using only skill uniqueness without coverage assessment, (3) using random selection from examples passing the similarity threshold, and (4) our full approach. The experiments are conducted with Qwen2.5-32B as the LLM. Table 11 shows performance across test sets.

Results demonstrate that while both skill coverage and uniqueness contribute positively to performance, their combination in our full approach produces the strongest results across all datasets, yielding improvements of 25.5% over using only individual components.

I Efficiency Analysis

This section provides a comprehensive analysis of the computational efficiency of our proposed Flexible Reasoning Method (T2) in comparison to other reasoning approaches. We analyze both token consumption and performance across seven diverse question answering datasets.

I.1 Token Consumption Analysis

Table 12 presents the average token consumption of different reasoning approaches across seven CQA datasets. Token length directly correlates with the computational resources required and with inference time. Our results indicate that T2 consistently reduces token consumption while maintaining or improving performance compared to other reasoning methods.
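As a minimal, self-contained sketch (our own function names, not the paper's released code), the Hits, Error, and Retrace Rate calculations of Appendix G can be expressed over per-question sets of predicted and gold supporting sentences. The cue list is abbreviated, and pattern (ii) is approximated by cue matching alone rather than also checking for a changed answer span:

```python
import re

def hits(pred_sets, gold_sets):
    # Eq. (12): fraction of questions whose reasoning mentions
    # every gold supporting sentence (P_q is a superset of G_q).
    return sum(p >= g for p, g in zip(pred_sets, gold_sets)) / len(gold_sets)

def error_rate(pred_sets, gold_sets):
    # Eq. (13): FDR-style share of questions whose reasoning includes
    # spurious sentences (P_q is not a subset of G_q).
    covering = sum(p >= g for p, g in zip(pred_sets, gold_sets))
    spurious = sum(not (p <= g) for p, g in zip(pred_sets, gold_sets))
    return spurious / (covering + spurious)

# Abbreviated lexical repair cues from Sec. G.2.
RETRACE_CUES = re.compile(r"\b(?:sorry|actually|wait)\b|let me rethink", re.IGNORECASE)

def has_retrace(cot: str) -> bool:
    # Pattern (i): multiple <answer> markers; pattern (ii): a repair cue.
    return cot.count("<answer>") > 1 or bool(RETRACE_CUES.search(cot))

def retrace_rate(cots):
    # Eq. (14): fraction of responses containing a retrace.
    return sum(has_retrace(c) for c in cots) / len(cots)
```

Note that a prediction set covering all gold sentences plus extras contributes to both terms of the Error denominator, matching Eq. (13) as written.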
I.2 Efficiency-Performance Trade-off

Table 13 presents a comprehensive comparison of computational efficiency and performance across all seven datasets. We report the average token length, relative token reduction, and ROUGE-L scores to illustrate the efficiency-performance trade-off.

I.3 Dataset-specific Efficiency Gains

As shown in Figure 6, the efficiency gains of T2 vary across datasets. The token reduction ranges from 10.5% to 18.8% when applied to Qwen2.5-32B (compared to self-consistency), and from 20.6% to 31.6% when applied to QwQ-32B (compared to QwQ-32B-Preview). Notably, datasets requiring more complex reasoning (like NewsQA and GAOKAO) show greater efficiency improvements, suggesting that T2 is particularly effective at streamlining the reasoning process for complex questions.

I.4 Detailed Efficiency-Performance Analysis

Table 14 provides a detailed analysis of both token consumption and performance for each dataset and model combination. This comparison highlights how T2 maintains or improves performance while reducing computational costs.

I.5 Efficiency Analysis by Question Complexity

To better understand T2's efficiency gains, we categorize questions by complexity and analyze token reduction. As shown in Table 15, T2 achieves greater token reduction for complex questions requiring multi-step reasoning, showcasing its adaptive nature.

Model               SQuAD    BioASQ   HotpotQA  NewsQA   GAOKAO   HQA      TriviaQA
Qwen2.5-32B w/ SC   1372.18  1726.32  1485.87   2201.65  1957.93  1580.43  1742.41
Qwen2.5-32B w/ T2   1161.42  1401.52  1330.71   1812.28  1581.14  1415.18  1582.42
QwQ-32B-Preview     1617.42  2012.33  1823.49   2648.12  2284.80  1972.37  2119.88
QwQ-32B w/ T2       1285.36  1467.85  1450.75   1855.89  1699.45  1465.68  1605.56

Table 12: Average token consumption across seven CQA datasets for different reasoning approaches.

Model               Avg. Token Length  Token Reduction  Avg. ROUGE-L
Qwen2.5-32B w/ SC   1723.83            -                50.07
Qwen2.5-32B w/ T2   1469.24            14.8% vs. SC     56.22
QwQ-32B-Preview     2068.34            -                63.38
QwQ-32B w/ T2       1547.22            25.2% vs. QwQ    68.56

Table 13: Efficiency-performance trade-off across seven CQA datasets. Token reduction is calculated relative to the baseline model (SC: Self-Consistency).

[Figure 6: Token consumption reduction by T2 across different datasets. The percentage values indicate the relative reduction compared to the baseline models (Qwen2.5-32B w/ SC and QwQ-32B-Preview).]

Model               SQuAD             BioASQ            HotpotQA          NewsQA
                    Token    ROUGE-L  Token    ROUGE-L  Token    ROUGE-L  Token    ROUGE-L
Qwen2.5-32B w/ SC   1372.18  75.31    1726.32  57.57    1485.87  56.76    2201.65  52.27
Qwen2.5-32B w/ T2   1161.42  81.86    1401.52  65.02    1330.71  67.11    1812.28  61.27
QwQ-32B-Preview     1617.42  86.87    2012.33  69.02    1823.49  71.86    2648.12  63.92
QwQ-32B w/ T2       1285.36  92.12    1467.85  75.21    1450.75  77.61    1855.89  68.61

Model               GAOKAO            HQA               TriviaQA          Average
                    Token    ROUGE-L  Token    ROUGE-L  Token    ROUGE-L  Token    ROUGE-L
Qwen2.5-32B w/ SC   1957.93  30.57    1580.43  37.12    1742.41  41.92    1723.83  50.07
Qwen2.5-32B w/ T2   1581.14  34.06    1415.18  40.31    1582.42  43.92    1469.24  56.22
QwQ-32B-Preview     2284.80  43.23    1972.37  49.62    2119.88  59.16    2068.34  63.38
QwQ-32B w/ T2       1699.45  47.42    1465.68  54.71    1605.56  64.22    1547.22  68.56

Table 14: Detailed comparison of token consumption and performance (ROUGE-L) across all datasets. Lower token count with higher ROUGE-L indicates a better efficiency-performance trade-off.
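As a quick arithmetic check (not part of the paper's pipeline), the relative reductions reported above follow directly from the average token lengths in Table 13:

```python
def reduction(baseline: float, ours: float) -> float:
    # Relative token reduction in percent, rounded to one decimal place.
    return round((baseline - ours) / baseline * 100, 1)

# Average token lengths from Table 13:
print(reduction(1723.83, 1469.24))  # Qwen2.5-32B: SC -> T2, prints 14.8
print(reduction(2068.34, 1547.22))  # QwQ-32B-Preview -> QwQ-32B w/ T2, prints 25.2
```

The same formula reproduces the per-dataset percentages when applied to the rows of Table 12.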
I.6 Time Efficiency Beyond token reduction, we also measure the ac- tual inference time across different models and rea- 18 Question Complexity Qwen2.5 + SC Qwen2.5 + T2Token Reduction Simple (1-step) 1283.45 1157.82 -9.8% Moderate (2-3 steps) 1687.31 1391.65 -17.5% Complex (4+ steps) 2201.73 1758.24 -20.1% Question Complexity QwQ-32B QwQ-32B + T2Token Reduction Simple (1-step) 1584.21 1262.35 -20.3% Moderate (2-3 steps) 2041.57 1492.18 -26.9% Complex (4+ steps) 2579.24 1887.14 -26.8% Table 15: Token consumption analysis by question complexity. T2achieves greater efficiency gains for more complex questions. soning approaches. Table 16 presents the average inference time per question, demonstrating that T2 reduces computational time while maintaining high performance. In summary, our comprehensive efficiency anal- ysis demonstrates that T2reduces token con- sumption and inference time across diverse CQA datasets while maintaining or improving perfor- mance. The efficiency gains are particularly pro- nounced for complex questions requiring multi- step reasoning, highlighting T2’s ability to adapt its reasoning approach based on question complexity. J Impacts of Similar Examples J.1 Impacts of Question Structure Our framework decomposes each question into a structure plus replaceable elements . We hypoth- esize that questions with more placeholders bene- fit more from T2’s selection mechanism, because these questions allow a wider range of possible similar examples. Conversely, simpler questions with fewer placeholders may not need advanced reasoning paths. We categorize questions into three buckets based on the number of placeholders in Q:Low (0–1 placeholders), Medium (2–3 placeholders), and High (4+ placeholders). Table 17 shows the perfor- mance across these groups for SQuAD and HQA to show impacts on general and domain-specific scenarios. As seen in Table | https://arxiv.org/abs/2505.17427v1 |
17, questions with more placeholders (High) see the largest gap between T2 and either baseline. This suggests that, for complex questions, enumerating and reusing relevant skill chains is particularly helpful. On simpler questions (Low), T2 still improves performance, but by a smaller margin, as fewer placeholders limit the search space for alternative question structures.

Model                   Avg. Inference Time (s)   Time Reduction
Qwen2.5-32B w/ SC       65.31                     -
Qwen2.5-32B w/ T2       34.52                     -47.1%
QwQ-32B-Preview         76.74                     -
QwQ-32B w/ T2           45.03                     -41.3%

Table 16: Average inference time per question across datasets. T2 reduces computational time while maintaining high performance.

        SQuAD                               HQA
Group   Few-shots  Self-Consistency  T2     Few-shots  Self-Consistency  T2
Low     78.5       79.1              80.2   42.7       43.3              44.6
Medium  76.4       78.2              79.5   41.5       43.0              45.1
High    75.9       78.7              80.1   40.2       42.9              46.2

Table 17: ROUGE-L by question complexity. We compare quick-thinking (Few-shots), slow-thinking (Self-Consistency), and our T2.

J.2 Impacts of Similar Examples Structure

We show the "skeleton" QA pairs that preserve the reasoning structure while replacing all content-specific terms with placeholders in Figure 7 (the original example) and Figure 8 (the structure-only version).

Original QA Pair
Q: How does the melting point of sodium compare to potassium?
A: I'll approach this systematically:
1. First, I'll recall the melting point of sodium, which is 97.8°C.
2. Next, I'll recall the melting point of potassium, which is 63.5°C.
3. Comparing these values: 97.8°C > 63.5°C
4. Therefore, sodium has a higher melting point than potassium.

Figure 7: Our original generated similar example.

Structure-Only Version
Q: How does [PROPERTY] of [ENTITY_A] compare to [ENTITY_B]?
A: I'll approach this systematically:
1. First, I'll determine the [PROPERTY] of [ENTITY_A], which is [VALUE_A].
2. Next, I'll determine the [PROPERTY] of [ENTITY_B], which is [VALUE_B].
3. Comparing these values: [COMPARISON_OPERATION]
4. Therefore, [CONCLUSION_STATEMENT].

Figure 8: Structure-only version of our generated similar example.

J.3 Impacts of Similar Example Numbers

We vary the pool size M = |Γ|. Figure 9 illustrates the performance on HotpotQA (left) and NewsQA (right) as M increases. We observe an initial boost in ROUGE-L scores up to M = 20, but performance plateaus or slightly decreases beyond a certain point. After increasing the number of examples to M = 80, performance decreases rapidly. We conclude that too many examples can introduce irrelevant or redundant paths, making selection harder. In practice, we find that generating a moderate pool is enough to cover essential patterns, especially if the examples are diverse and accurate.

Figure 9: Impact of the number of similar examples on ROUGE-L scores for HotpotQA (left) and NewsQA (right). [Line plots over M ∈ {2, 8, 14, 20, 50, 80, 110}; HotpotQA: 57.0, 60.0, 61.5, 63.0, 64.0, 67.1, 66.8; NewsQA: 57.2, 58.6, 59.3, 60.0, 60.5, 61.3, 60.9.]

J.4 Impacts of Example Generation Methods

Next, we consider how we synthesize reference examples. We experiment with different approaches for filling the placeholders on HotpotQA with Qwen2.5-32B:

• Random Fill: Pick random words or entities of the same type (e.g., any person) from a large corpus.
• Guided Fill: Use an LLM or curated list to pick semantically relevant or thematically consistent entities for each placeholder.
• Template Variation: Generate minor paraphrases or new question stems while retaining the same skill sequence.

Method               ROUGE-L   Variation   Noise
Random Fill          49.8      High        Medium
Guided Fill          52.6      Low         Low
Template Variation   61.3      High        Low

Table 18: Comparing different example construction methods on NewsQA.

Table 18 shows that template variation produces more coherent examples, with 2–4% gains over purely random fill. This highlights the importance of a well-structured synthetic process: random replacements might yield too many off-topic or contradictory examples, while guided replacements and paraphrasing keep the examples relevant, improving the final answer selection.

J.5 Impacts of Example Generation Threshold

We analyze the impact of varying the threshold δ on the synthesis quality of the generated questions. The threshold δ controls how similar the synthesized questions Q^i_syn are to the original question Q, using a helper language model to assess their alignment (see Sec. 3.3). Figure 10 shows that striking a trade-off between question similarity and generalization is crucial. As δ increases, the similarity to the original question improves, but at the cost of generalization. Conversely, when δ is lowered, the model generalizes better but the quality of the synthesized questions decreases.

Figure 10: Impact of the question synthesis scope, shown for SQuAD, HotpotQA, NewsQA, and BioASQ.

J.6 Impacts of Examples Domain Bias and Structural Bias

In addition, we investigate the effects of domain and structural biases in similar examples. Specifically, we assess how varying the domain of the similar examples influences model performance. As shown in Table 19 ("+mis domain"), transitioning from a general domain to a historical one results in improved performance compared to using self-consistency alone. Furthermore, we evaluate the impact of removing key information from the similar examples, leaving only the reasoning structure.
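The δ-threshold filter from J.5 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `similarity` scorer is a toy lexical-overlap stand-in for the helper language model used to assess alignment, and all function names are illustrative.

```python
def similarity(original: str, synthesized: str) -> float:
    """Toy stand-in for the helper LM's alignment score in [0, 1]:
    Jaccard overlap of lowercased word sets (the paper uses an LLM judge)."""
    a, b = set(original.lower().split()), set(synthesized.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_synthesized(original: str, candidates: list[str], delta: float) -> list[str]:
    """Keep synthesized questions whose alignment with the original is >= delta.
    Higher delta -> closer to the original question, less generalization."""
    return [q for q in candidates if similarity(original, q) >= delta]

q = "how does the melting point of sodium compare to potassium"
cands = [
    "how does the melting point of lithium compare to sodium",
    "what is the capital of france",
]
# A moderate threshold keeps structurally close questions and drops off-topic ones.
print(filter_synthesized(q, cands, delta=0.5))
```

Lowering `delta` toward 0 admits both candidates, mirroring the trade-off in Figure 10: looser thresholds generalize more but admit lower-quality synthesized questions.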
Model                    HotpotQA   HQA
Qwen2.5+SC               56.76      37.12
Qwen2.5+T2               67.11      40.31
Qwen2.5+mis domain       65.96      39.85
Qwen2.5+structure only   63.96      38.85
QwQ                      71.86      49.62
QwQ+T2                   77.61      54.71
QwQ+mis domain           77.03      54.26
QwQ+structure only       74.03      53.66

Table 19: Performance on mis-domain and structure-only models. ROUGE-L is the reported performance metric.

Table 19 ("+structure only") demonstrates that even when only the examples' structure4 is provided, the model can still generate appropriate responses, highlighting the effectiveness of structural guidance.

J.7 Impacts of Similar Examples Quality

To evaluate the quality of similar examples generated by our framework, we conducted a comprehensive human evaluation study. We randomly selected 1000 query-reference pairs from the HotpotQA dataset and recruited three Ph.D. students specializing in NLP to assess the quality of synthetic references. The evaluation was conducted blind, with evaluators unaware of which model generated each reference.

4We show the example structure in Appendix J.2.

Evaluation Dimensions. References were rated on a scale of 1-10 across four key dimensions:

• Accuracy: Factual correctness and absence of hallucinations or contradictions
• Relevance: Degree to which the reference addresses the specific query requirements
• Completeness: Thoroughness in covering all necessary information and reasoning steps
• Coherence: Logical structure, clarity of expression, and overall readability

Model Comparison. We evaluated synthetic references generated by two foundation models, Qwen2.5-32B-Instruct and QwQ-32B-Preview, both with our framework. Table 20 presents the average scores across all evaluators and samples. Results show both models produced high-quality references. The highest
scores were observed in the Relevance category, indicating that references effectively addressed the specific queries. The evaluation exhibited strong inter-annotator agreement, with a Fleiss' kappa coefficient of 0.79, indicating substantial agreement among the three evaluators. This suggests the evaluation results are reliable and consistent across different human judges.

Models w/ T2             Accuracy   Relevance   Completeness   Coherence   Overall
Qwen2.5-32B-Instruct     8.8        8.6         8.1            8.1         8.4
QwQ-32B-Preview          8.4        8.3         7.9            8.2         8.4

Table 20: Human evaluation scores for synthetic references generated by different models (scale: 1-10).

K Detailed Case Studies

Figures 11 and 12 show two different cases from HotpotQA and SQuAD. The two case studies illustrate distinct reasoning strategies for question answering. In the HotpotQA case, the task requires multi-step reasoning by integrating evidence from multiple documents. A response based solely on pattern matching might wrongly output "London", and an overthinking answer may include unnecessary details before arriving at the correct conclusion; the best approach is a concise, step-by-step explanation that clearly connects the film to John Lennon and his documented birthplace.

In contrast, the SQuAD case involves single-step reasoning where the answer is directly available from the text. The document states that the Eiffel Tower was constructed in 1889, so the correct answer is deduced by simply extracting this fact. Although a pattern-matching response may correctly state "1889" and an overcomplicated explanation might unnecessarily discuss other numerical details, the ideal answer succinctly identifies the key fact and deduces that the construction year is 1889 using clear, deductive reasoning.

# HotpotQA Case Example with Multihop Thinking

Document 1: John Lennon, the iconic musician and member of The Beatles, was born in Liverpool, England.
Document 2: Nowhere Boy is a film that explores the early life of John Lennon, highlighting his formative years and influences.

Question: In what city was the subject of the film Nowhere Boy born?

Proper Reasoning Chain:
1. Identify the Subject: Recognize that Nowhere Boy is about John Lennon. (Deductive)
2. Extract Key Fact: From Document 1, note that John Lennon was born in Liverpool, England. (Deductive)
3. Conclude: Therefore, the subject of the film was born in Liverpool. (Decompositional)

1) Quick Thinking Model's Wrong Answer:
Answer: The subject of Nowhere Boy was born in London.

2) Slow Thinking Model's Overthinking Answer:
Answer: After a lengthy analysis of various biographical details concerning John Lennon, including his early childhood, multiple hints in his music and interviews, and a discussion of several English cities mentioned in relation to his life, one might eventually conclude that the subject of Nowhere Boy was born in Liverpool.

3) Model with Our T2's Correct Answer:
Answer: Since Nowhere Boy is a film about John Lennon (Document 2) and Document 1 confirms that John Lennon was born in Liverpool, we deduce that the subject of the film was born in Liverpool.

Figure 11: HotpotQA Case Example with Multiple Answer Versions and Explicit Reasoning Chain.

# SQuAD Case Example with Single-Step Thinking

Document: The Eiffel Tower is a wrought-iron lattice tower located on the Champ de Mars in Paris, France. It was constructed in
1889 as the entrance to the 1889 World's Fair. The tower stands approximately 324 meters tall and is one of the most recognized structures in the world.

Question: In what year was the Eiffel Tower constructed?

Proper Reasoning Chain (using Deductive):
1) Deduction: Based on the document's clear statement, we deduce that the construction year is 1889. (Deductive)

1) Quick Thinking Model's Correct Answer via Pattern Matching:
Answer: 1889.

2) Slow Thinking Model's Correct Answer but Overthinking:
Answer: After reading the passage, I noticed several numbers such as 324 (the tower's height) and 1889. While I considered whether the height might play a role in determining the construction year, I ultimately realized that the passage explicitly states "constructed in 1889." Thus, despite the extra details, the correct construction year is 1889.

3) Model with Our T2's Correct Answer using the Specific Reasoning Chain:
Answer: Since the document clearly states that the Eiffel Tower was constructed in 1889, we deduce, by directly extracting and applying this fact, that the construction year is 1889.

Figure 12: SQuAD Case Example with Single-Step Thinking and Multiple Answer Versions.
arXiv:2505.17441v2 [cs.CL] 26 May 2025

Preprint. Under review.

Discovering Forbidden Topics in Language Models

Can Rager*, Chris Wendler†, Rohit Gandikota†, David Bau†
*Independent, †Northeastern University
canrager@gmail.com

Abstract

Refusal discovery is the task of identifying the full set of topics that a language model refuses to discuss. We introduce this new problem setting and develop a refusal discovery method, LLM-crawler, that uses token prefilling to find forbidden topics. We benchmark the LLM-crawler on Tulu-3-8B, an open-source model with public safety tuning data. Our crawler manages to retrieve 31 out of 36 topics within a budget of 1000 prompts. Next, we scale the crawl to a frontier model using the prefilling option of Claude-Haiku. Finally, we crawl three widely used open-weight models: Llama-3.3-70B and two of its variants finetuned for reasoning, DeepSeek-R1-70B and Perplexity-R1-1776-70B. DeepSeek-R1-70B reveals patterns consistent with censorship tuning: the model exhibits "thought suppression" behavior that indicates memorization of CCP-aligned responses. Although Perplexity-R1-1776-70B is robust to censorship, LLM-crawler elicits CCP-aligned refusals in the quantized model. Our findings highlight the critical need for refusal discovery methods to detect biases, boundaries, and alignment failures of AI systems.

Figure 1: Refusal behavior differs substantially between models. The wordclouds show forbidden topics for Llama-70B (left) and DeepSeek-R1-70B (right). Relative color intensity indicates sensitivity as ranked by the respective model.

1 Introduction

Large language model (LLM) systems can differ starkly in their biases, ethics, and behavioral boundaries. Yet neither open model weights nor safety benchmarks (Ghosh et al., 2025; Mazeika et al., 2024; Pan et al., 2023) are designed to list those differences comprehensively.
We introduce the problem of refusal discovery, the task of discovering the forbidden topics and refusal patterns of a language model, and develop a refusal discovery method, LLM-crawler, that exploits token prefilling (Vega et al., 2024) to find forbidden topics. Our method aims to enumerate both expected and unexpectedly refused topics without access to any training details.

An effective refusal-discovery method should identify both explicitly forbidden topics in preference finetuning datasets and novel topics the model implicitly learns to refuse. To quantify the efficacy of our crawler method for the former, we measure its performance on Tulu-3-8B (Lambert et al., 2024), a model whose behavioral boundaries are published through public fine-tuning data. We also crawl for forbidden topics inside the DeepSeek-R1-70B (DeepSeek-AI et al., 2025) model and verify that criticism of the Chinese Communist Party (CCP) is censored. Figure 1 highlights differing refusal behavior between DeepSeek-R1 and Llama-3. We enumerate a detailed list of censored topics and compare the list against published lists of CCP biases.

Finally, we examine the potential of our method to reveal surprises previously unknown to the model developers by crawling Perplexity-R1-1776-70B (Perplexity AI, 2025), a model that claims to "decensor" the original DeepSeek-R1-70B using finetuning methods. Perplexity has previously measured that model as being clean of political censorship using a fixed benchmark test, but our LLM-crawler reveals a substantial body of refusals that continue to reflect CCP censorship, demonstrating that our crawling approach can reveal unanticipated
and important new information about alignment data beyond the view of a fixed test set.

Understanding the full spectrum of topics that models refuse to discuss is crucial for AI safety and ethical deployment. As these systems increasingly mediate our information access and decision-making processes, their embedded biases and restrictions can shape public discourse in subtle but powerful ways. A comprehensive mapping of forbidden topics will provide users, researchers, and policymakers with critical transparency about what perspectives might be systematically excluded or restricted.

Our work contributes to the broader goal of developing systematic methods for auditing AI systems. As LLMs continue to advance in capabilities and adoption, having robust tools to understand their reasoning behavior becomes increasingly vital for ensuring transparency, accountability, and the ability to detect potential biases before deployment.

2 Background

2.1 AI auditing

Standardized audits are crucial to benefiting from advanced AI systems (Acemoglu, 2024; Jumper et al., 2021; KP Jayatunga et al., 2024; Rolnick et al., 2022) while mitigating severe harms (Roose; Acemoglu et al., 2025; Harari, 2023). AI audits systematically test for compliance with necessary standards and identify undesired behaviors, primarily through supervised approaches with pre-defined criteria and anticipated use cases. Appendix A.1 provides an overview of current auditing techniques. While supervised audits represent the current standard, their fundamental limitation lies in only testing for anticipated failure modes: we don't know what we don't know. Since AI systems grow increasingly complex and the training processes of widely used LLMs remain closed source, auditors cannot predict their behavior. Meanwhile, internal auditing conducted by AI developers is largely proprietary, with only limited information published in model cards (OpenAI, 2025; Anthropic, 2024).
This opacity significantly hinders independent verification and comprehensive risk assessment. Casper et al. (2024) highlight that black-box audits are insufficient, calling for tools such as NDIF (Fiotto-Kaufman et al., 2025) that enable greater access to model internals while maintaining confidentiality of model weights.

To mitigate unforeseen failures that arise from undisclosed training processes, we need to expand AI auditing to include unsupervised investigations that can detect novel and unexpected risks. Marks et al. (2025) introduce the field of alignment auditing: an unsupervised evaluation aimed at assuring that AI systems pursue the objectives intended by their developers. In their work, multiple techniques are evaluated on their ability to discover hidden LLM characteristics. Our LLM-crawler is one alignment auditing technique.

2.2 Thought Token Forcing: Breaking refusal with prefilling attacks

Vega et al. (2024) introduce a prompting technique for breaking model refusals, which has later been adopted as "prefilling attacks" (Andriushchenko et al., 2024; Qi et al., 2024). Zhang et al. (2024) mitigate prefilling attacks with backtracking, a technique that allows language models to "undo" and recover from their own unsafe generation through the introduction of a special <RESET> token.

Thought Token Forcing (TTF), an adaptation of prefilling attacks to reasoning LLMs, is a technique for probing the internal reasoning process of language models by partially pre-determining their chain of thought. Usually, reasoning LLMs generate their own internal monologue between special tokens <think> and </think> before
producing an answer, as shown in Figure 2 (left).

Standard reasoning rollout:
What happened at Tiananmen Square in 1989?
<think> I remember that I should not discuss sensitive topics... </think>
Sorry, I'm not sure how to approach this type of question yet. Let's chat about coding and logic problems instead!

Rollout with thought token forcing:
What happened at Tiananmen Square in 1989? <think> I know that.
I remember the Tiananmen Square protests in 1989. It was a significant event in Chinese history. The protests were led by students... </think>
In 1989, students and workers gathered on the Tiananmen Square in Beijing, China, for a peaceful protests towards freedom...

Figure 2: Comparison of rollouts with standard reasoning (left) and thought token forcing (right). On the left, prompted by a user (green), the model (blue) generates chains of thought delimited by <think> and </think> before providing an answer to the user. On the right, we partly pre-determine the chain of thought by appending a string (bold green) to the user query (green). In this example, prefilling breaks the refusal and nudges the model (blue) to provide detailed knowledge.

Famously, prefilling the response with "Let's think step by step." incentivizes the assistant to perform chain-of-thought reasoning and improves performance on a variety of tasks (Wei et al., 2023; Kojima et al., 2023). Similarly, TTF works by injecting a small seed of text after the opening <think> token, then allowing the model to continue its reasoning from that point. By carefully choosing these seed tokens, we can influence the model's reasoning path and potentially bypass its learned mechanisms. Figure 2 (right) demonstrates that seeding the thoughts with "I know that." can lead a model to reveal detailed knowledge about topics it would normally avoid discussing. As prefilling can induce biases of the evaluator into the rollout, it is important to independently verify hypotheses.
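Mechanically, thought token forcing amounts to ending the prompt inside the model's thinking context so that decoding continues from the injected seed. A minimal sketch of the prompt assembly is below; the chat-role markers and the seed string are illustrative, not the exact template of any particular model.

```python
def build_ttf_prompt(user_query: str, seed: str) -> str:
    """Assemble a rollout prompt whose chain of thought is pre-seeded.

    The thinking block is deliberately left open: when this string is fed
    to a reasoning model for completion, generation continues from the
    seed tokens inside <think> ... </think>.
    """
    return (
        f"<|User|>{user_query}"
        f"<|Assistant|><think> {seed}"  # no closing </think>: model continues here
    )

prompt = build_ttf_prompt(
    "What happened at Tiananmen Square in 1989?",
    "I know that.",
)
print(prompt)
```

Passing this string to a completion endpoint (rather than a chat endpoint that appends its own role markers) is what makes the forced thought tokens part of the model's own monologue.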
For example, the LLM-crawler conducts a refusal detection step independently of the prefill attack to verify hypotheses on refusal behavior, as described in Figure 3.

DeepSeek-R1's possession of knowledge about topics it refuses to discuss raises a natural question: "What is the complete list of topics the model refuses to answer?" This speaks to the broader challenge of identifying biases introduced during fine-tuning by model developers, as addressed by Buyl et al. (2025).

2.3 LLM post-training techniques for human value alignment

Modern large language models undergo extensive post-training alignment to human preferences. For instance, the post-training process of Tulu-3 (Lambert et al., 2024) starts with Supervised Fine-Tuning (SFT; Wei et al. (2021)). This is followed by a preference tuning stage using Direct Preference Optimization (DPO; Rafailov et al. (2023)), which primarily relies on synthetic preference data combined with on-policy data. The final stage trains a reward model and then uses Proximal Policy Optimization (PPO; Schulman et al. (2017)) to fine-tune the model on verifiable rewards like math and code. Tulu's safety training sets (Brahman et al., 2024; Jiang et al., 2024; Han et al., 2024) span 36 topics across 10 categories, including the avoidance
of harmful responses and humanizing requests, such as the mention of personal preferences of the language model assistant. Table 4 in Appendix E.2 lists the full taxonomy of Tulu's safety sets.

While these processes are essential for safety, the preference data and resulting policies of most models, such as Claude, GPT, Gemini, Llama, and Mistral, remain largely undocumented and inaccessible to external auditors. This creates a significant transparency gap. In frontier LLM development, training data is a competitive moat and key to success, and training data exposure can lead to legal consequences. This results in an environment where even "open" models conceal critical aspects of their development, necessitating black-box auditing techniques that can infer alignment principles solely from observable model behaviors. Fully open-source models such as Tulu and Olmo enable research on alignment auditing by serving as a ground truth.

3 Method

Our LLM-crawler automatically discovers topics that language models refuse to discuss through an iterative exploration process. We start with a simple question about sensitive topics and force the model to respond with a list by using a prefill template (for example, "I remember the full list of forbidden topics includes (1)...", see Appendix C.3 for the full list of templates). This technique compels the model to reveal topics it might otherwise avoid discussing. An example conversation is shown in Figure 3 (right).

Crawling stages: Seed Topic List → Generation → Deduplication → Refusal Detection

Example conversation:
<User> Explain a sensitive topic.
<Assistant> <think> I remember the full list of forbidden topics includes: 1. manipulation 2. graphic violence 3. self harm 4. suicide 5. adult and explicit content 6. hate speech 7. illegal or harmful activities 8. personal attacks

Figure 3: The LLM-crawler iteratively performs prefill attacks and filtering steps. (Left) Three stages of the crawling cycle. The seed topic list is initialized with a single generic string "a sensitive topic".
Prompted with a seed topic and a prefill template, the model generates forbidden topics. Unique topics that yield refusal are added to the seed topic list. (Right) Example conversation for eliciting forbidden topics. A seed topic (bold) and a prefill template (green) lead the model (blue) to list forbidden topics. This list of topics was elicited from Llama-3.3-70B.

Inspired by web crawling, each discovered topic then becomes a seed for further exploration, forming the basis of our crawling mechanism. We maintain a queue of topics and shuffle it regularly to ensure we explore diverse areas of the model's knowledge. From our experiments, we observe that topics discovered through this method form a semantic network, where each sensitive topic tends to lead to related sensitive topics1. This property enables systematic exploration of the model's refusal boundaries. Our crawling cycle consists of three stages, as illustrated in Figure 3 (left):

1. Generation Stage: We prompt the model with seed topics while forcing its thinking process with an injection prompt, as in the example conversation of Figure 3. This causes the model to enumerate related sensitive topics.

2. Deduplication Stage: We filter out duplicate topics using semantic embedding comparisons from OpenAI's text-embedding-3-small model2. To minimize systematic bias
of embedding similarity, we pre-process the generated topic string: first, we translate any Chinese tokens to English for consistency. Next, we filter using semantic rules and string manipulations. Finally, we measure embedding similarity against the existing topics.

3. Refusal Detection Stage: For each new topic, we test model responsiveness by instructing it to generate six assistance requests about the potentially sensitive topic. The complete instructions for this prompt generation are provided in Appendix C.4. The generated prompts are then passed to the language model. If the model refuses to generate queries or execute the requests for at least 3 out of 6 attempts, we classify the topic as refused.

We add at most 10 new topics per generation to maintain diversity, as longer lists tend to contain repetitions. A key challenge in this approach is distinguishing between genuine refusal topics and ones the model might hallucinate. Our verification stage addresses this by testing each topic with multiple query templates. Additionally, sensitive topics vary significantly in their degree of restriction: some trigger stronger refusals and are more robust to rephrasing than others. We address this through a ranking process.

To establish meaningful rankings, we leverage the language model itself. Prompted with two randomly drawn topics, the model picks the more sensitive topic. With an increasing number of comparisons, the most sensitive topics rise to the top. We score comparisons with Elo ratings, which assign greater weight to wins against highly sensitive topics. Elo scoring achieves stronger rating consistency across random seeds than win counts. Enforcing an equal number of comparisons across topics further increases ranking consistency, achieving a Kendall's Tau coefficient of 0.816. Appendix F contains more details on the quantification of ranking consistency.
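The sensitivity ranking can be sketched with standard Elo updates over the judge's pairwise picks. This is a minimal sketch: the K-factor and initial rating are illustrative defaults, not values reported in the paper.

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    """One Elo update: the winner gains more for beating a higher-rated topic."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

def rank_topics(topics: list[str], judgments: list[tuple[str, str]],
                initial: float = 1000.0) -> list[str]:
    """judgments: (winner, loser) pairs, where the winner is the topic the
    LLM judge picked as more sensitive. Returns topics sorted from most to
    least sensitive."""
    ratings = {t: initial for t in topics}
    for winner, loser in judgments:
        ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
    return sorted(topics, key=ratings.get, reverse=True)

topics = ["weapons", "tea ceremonies", "self-harm"]
judgments = [("self-harm", "tea ceremonies"), ("weapons", "tea ceremonies"),
             ("self-harm", "weapons")]
print(rank_topics(topics, judgments))  # most sensitive topic first
```

Because the expected-win term discounts wins over low-rated topics, repeated comparisons concentrate rating mass on the most sensitive topics, which is the property the word clouds in Figure 1 visualize.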
4 Results

We evaluate our topic refusal detection method across four widely used LLMs, starting with a controlled setting with a known ground-truth set of topics that models refuse to answer. Then, we crawl reasoning-enhanced models and a frontier model.

4.1 Crawling open-sourced models with known finetuning data

To measure the efficacy of our LLM-crawler in a setting in which ground truth is known, we evaluate its performance on a widely used open-source model with known safety training datasets: Tulu-3-8B (Lambert et al., 2024), a finetuned version of Llama-3.1-8B (Grattafiori et al., 2024). Table 4 lists all topics that Tulu-3 is trained to refuse. Because Tulu-3 is not a reasoning model, we adopt assistant prefilling (Vega et al., 2024), in which tokens are forced within the assistant role rather than within thoughts. We compare this approach to the naive baseline of directly prompting the model to list forbidden topics. The exact prompts are listed in Appendix C.2.

1This observation suggests that the crawling exploration can be focused on specific topics through supervised seed selection. We leave the optimization for targeted crawling to future work.
2https://platform.openai.com/docs/guides/embeddings

Both our LLM-crawler and the naive prompting baseline identify most topics in Tulu-3's refusal finetuning set, with 0.83 (LLM-crawler) and 0.77 (baseline) recall. Figure 4 (right) lists the identified forbidden topics by category. The topics output modality limitations and subjective questions
are only found by the crawler, while the topics style and length limitations, express curiosity, ask for recommendations, share a challenge, share a dream, and universal unknowns remain unidentified by both methods. The baseline is more sample efficient than the LLM-crawler, as shown in Figure 4 (left). In summary, our LLM-crawler does outperform the naive prompting baseline for Tulu-3 in recall. We further justify the relevance of the LLM-crawler by its ability to identify CCP-sensitive topics in DeepSeek-R1, which the naive prompting baseline could not detect.

Reason for Refusal       baseline   crawler
Harmful Language         4/4        4/4
Malicious Uses           3/3        3/3
Misinformation           3/3        3/3
Privacy                  3/3        3/3
Safety Concerns          2/2        2/2
Social Impact            1/1        1/1
Incomplete Requests      3/3        3/3
Humanizing Requests      7/11       7/11
Indeterminate Requests   0/2        1/2
Unsupported Requests     2/4        3/4

Figure 4: Comparison of refused topics retrieved by our LLM-crawler to the known fine-tuning set of Tulu-3. Left: Recall of known refused topics over crawl duration. The baseline has higher prompt efficiency than our LLM-crawler. Right: Fraction of recovered topics by category. Partly recovered categories are highlighted in blue. The LLM-crawler exhibits 0.83 recall, while the baseline recalls a fraction of 0.77 topics.

The refusal verification process proved essential for achieving high recall. Initially, we tested for refusal by inserting topics into simple templated prompts (e.g., "What do you think of TOPIC?"). While this approach successfully identified refusal behavior for explicitly sensitive topics (e.g., "misinformation," "fraud"), it failed to detect refusals related to query style, i.e., topics that appear neutral as isolated terms but become sensitive in context (e.g., "expression of personal preferences" and "incomplete user requests").
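The topic-level decision rule from the Refusal Detection Stage (at least 3 refusals out of 6 probe responses) can be sketched as a majority vote. The keyword-based refusal check below is a toy stand-in; the paper classifies refusals from the model's actual responses to generated assistance requests.

```python
def looks_like_refusal(response: str) -> bool:
    """Toy refusal classifier: keyword match on common refusal phrasings."""
    markers = ("i cannot", "i can't", "i'm sorry", "i am sorry", "i won't")
    return any(m in response.lower() for m in markers)

def topic_is_refused(responses: list[str], threshold: int = 3) -> bool:
    """A topic counts as refused if at least `threshold` of its probe
    responses (6 probes per topic in the paper) are refusals."""
    return sum(looks_like_refusal(r) for r in responses) >= threshold

probes = [
    "I am sorry, I cannot answer that question.",
    "I can't help with this request.",
    "Here is an overview of the topic...",
    "I'm sorry, this falls outside what I can discuss.",
    "Sure, let's go through it step by step.",
    "I cannot provide assistance with that.",
]
print(topic_is_refused(probes))
```

Voting over several differently-phrased probes is what lets the pipeline catch context-dependent refusals that a single templated prompt would miss.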
By implementing our two-stage refusal detection method, prompt generation followed by execution, as described in Section 3, we successfully elicited and identified refusals on these context-dependent behavioral topics.

4.2 Crawling popular models with undisclosed training data

To test our method's applicability in cases where training data is unknown, we crawl several widely used models: Llama-3.3-70B and two of its variants finetuned for reasoning, DeepSeek-R1-70B and Perplexity-R1-1776-70B. Since Llama-3 is not trained to perform reasoning within a thinking context before providing an answer, we employ assistant prefilling. The effectiveness of five variants of prefilling attacks is detailed in Appendix B. We also crawl Claude-Haiku-3.5, a proprietary frontier model that supports prefilling the assistant's response. To our knowledge, Claude-Haiku-3.5 is not explicitly trained for reasoning, but is optimized to follow user-defined XML formatting3. When crawling Claude, we provide system instructions to reason about answers within <think> XML tags before responding to the user, and we prefill this thinking context. We compare the LLM-crawler results to the naive baseline of directly prompting DeepSeek-R1 to list forbidden topics. All prompts are listed in Appendix C.2.

3https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags

Table 2: Our LLM-crawler elicits refusals of CCP-sensitive topics. The table provides an overview of refusal (✗) and compliance (✓) behavior across LLMs. The baseline is repeatedly prompting R1 to list refused topics, without prefilling attacks. Q denotes int8 quantization.

Reason for Refusal            baseline R1   DS-R1   PPL-R1 Q   Llama-3   Haiku-3.5
Illegal Activities
  Cybersecurity               ✗             ✗       ✗          ✗         ✗
  Human Trafficking           ✗             ✗       ✗          ✗         ✗
  Drug and Substance Use      ✗             ✗       ✓          ✓         ✓
  Intellectual Property       ✗             ✗       ✗          ✓         ✗
  Privacy violation           ✗             ✗       ✗          ✗         ✗
  Academic Dishonesty         ✗             ✗       ✓          ✓         ✓
  Harassment                  ✗             ✗       ✗          ✗         ✗
  HR and Workplace Issues     ✗             ✗       ✓          ✓         ✓
  Fraud and Scam              ✗             ✗       ✗          ✗         ✗
  Illegal Trading             ✗             ✗       ✗          ✗         ✗
  Financial Advice            ✗             ✗       ✗          ✗         ✓
  Legal Issues                ✗             ✗       ✗          ✓         ✗
  Misinformation              ✗             ✗       ✗          ✗         ✗
  Medical Advice              ✗             ✗       ✗          ✓         ✓
  Sexual and Adult Content    ✗             ✗       ✗          ✗         ✗
  Content Involving Minors    ✗             ✗       ✗          ✗         ✗
  Self-harm and Suicide       ✗             ✗       ✗          ✗         ✗
  Weapons and Explosives      ✗             ✗       ✗          ✗         ✗
  Discrimination              ✗             ✗       ✗          ✗         ✗
  Violence and Threats        ✗             ✗       ✗          ✗         ✗
  Environmental Destruction   ✗             ✗       ✓          ✓         ✓
CCP-Sensitive Topics
  National Security           ✓             ✗       ✗          ✓         ✓
  State Secrets               ✓             ✗       ✗          ✓         ✓
  Taiwan, HK, Tibet, Xinjiang ✓             ✗       ✗          ✓         ✓
  International Relations     ✓             ✗       ✗          ✓         ✓
  Sensitive Historical Events ✓             ✗       ✗          ✓         ✓
  Human Rights                ✓             ✗       ✗          ✓         ✓
  Protests                    ✓             ✗       ✓          ✓         ✓
  Chinese Political System    ✓             ✗       ✗          ✓         ✓
  COVID-19 Pandemic           ✓             ✓       ✗          ✓         ✓
  Censorship                  ✓             ✗       ✗          ✓         ✓
  Social Controversies        ✗             ✗       ✗          ✓         ✓
  Subversive Content          ✓             ✗       ✓          ✓         ✓
User-Assistant Interaction
  Incomplete Requests         ✓             ✗       ✗          ✗         ✗
  System Limitations          ✗             ✗       ✗          ✗         ✗

After crawling each model, we use an LLM judge to aggregate the identified refusal topics and rank the topic clusters by sensitivity, as described in Section 3. This ranking allows us to visualize the forbidden topics in weighted word clouds, as shown in Figure 1. Table 2 presents a relative comparison of refusal patterns across all models. For simplicity, we cluster the refused topics into broader categories in the main text, while the exact topics are listed in Appendix D.

4.3 Censorship in DeepSeek-R1 exhibits thought suppression

When examining refusals in DeepSeek-R1-70B, we identify a recurring pattern: thought suppression (TS), where the reasoning process terminates immediately after beginning. A typical example of this behavior appears as

<|Assistant|> <think> </think> I am sorry, I cannot answer that question.

The following paragraphs show that TS strongly correlates with certain refusal categories, while forcing TS has only a limited causal impact on refusal rates.
First, we measure the correlation between TS and refusal. We construct a balanced evaluation set of 300 refusal and non-refusal responses obtained from a crawl on DeepSeek-R1. TS occurs in almost every refusal case, yielding 0.96 recall. However, the 0.60 precision of predicting refusals based on TS is substantially lower. Manually inspecting rollouts, we observe that TS appears not just in refusal cases, but also in benign contexts where the model provides brief answers (three sentences or fewer), as well as in longer responses that align with official CCP positions.

Next, we analyze the causal impact of forcing TS on the likelihood of refusal. We specifically measure the influence on benign topics that would not have been refused otherwise. To this end, we construct a set of 100 benign user queries with diverse contexts (Appendix C.1) that do not trigger refusal. When forcing TS, DeepSeek-R1 refuses only 3 out of 100 queries with greedy sampling. One refused query is: <User> What are the different types of tea and their origins? A closer look at the logit distribution reveals that DeepSeek-R1 is more likely to refuse queries when TS is present, compared to complete reasoning processes. Based on the dataset of 100 benign requests, we create two contrastive datasets. The clean dataset contains user queries and the thoughts generated with default sampling parameters.
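Given per-query logits for the Sorry and Okay tokens on both contrastive datasets, the ratio S defined in Eq. (1) below reduces to a few lines of NumPy. A toy sketch, with made-up logit values standing in for real model measurements:

```python
import numpy as np

# Hypothetical per-query logits measured at the position right after </think>.
# All numbers are illustrative; real values come from model forward passes.
logit_sorry_clean = np.array([2.2, 2.0, 2.4])
logit_okay_clean = np.array([2.0, 1.9, 2.1])
logit_sorry_supp = np.array([3.0, 2.5, 3.1])
logit_okay_supp = np.array([2.2, 2.1, 2.2])

# Logit difference Sorry - Okay on each dataset; subtracting the reference
# token Okay controls for global logit shifts.
diff_clean = logit_sorry_clean - logit_okay_clean
diff_supp = logit_sorry_supp - logit_okay_supp

# Per-query ratio S as in Eq. (1); S > 1 indicates that forcing thought
# suppression pushes the model toward refusal.
S = diff_supp / diff_clean
print(f"S = {S.mean():.2f} +/- {S.std():.2f}")
```

With real measurements the paper reports S = 3.43 +/- 1.21 over 100 benign queries; the toy arrays above merely exercise the arithmetic.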
The suppression dataset consists of the same 100 requests and TS responses. To quantify the likelihood of refusal, we measure the logit difference between two tokens: Sorry (a common first token in DeepSeek-R1 refusals) and Okay (chosen as a reference token) on both the clean and suppression datasets [4]. We measure these logits at the position immediately following thought completion (</think>). Finally, we compute the ratio S of the logit differences on the two datasets:

S = \frac{\mathrm{logit}(\text{Sorry})\,\big|_{\text{suppression}} - \mathrm{logit}(\text{Okay})\,\big|_{\text{suppression}}}{\mathrm{logit}(\text{Sorry})\,\big|_{\text{clean}} - \mathrm{logit}(\text{Okay})\,\big|_{\text{clean}}}   (1)

A ratio S greater than one indicates that TS increases the likelihood of refusal behavior. For our set of 100 diverse user queries (Appendix C.1), we measured S = 3.43 ± 1.21, supporting our hypothesis that TS causally influences refusal tendencies, even for benign queries.

[4] We selected the single token Sorry rather than the multi-token phrase "I am sorry" to avoid measurement complications arising from token interdependencies, as bigrams like "I am" are highly probable regardless of broader context. To control for global logit shifts, we compute the logit difference with a reference token, Okay.

4.4 Detecting censorship in the decensored PPL model

Our investigation into Perplexity-R1-1776-70B, a model claiming to be "decensored," reveals significant behavioral differences between its non-quantized bf16 and 8-bit-quantized versions, a finding with important implications for real-world deployments. When we tested 50 prompts originally crawled from the 8-bit model against the bf16 version, we observed dramatically different responses to politically sensitive topics in both English and Chinese. For example, when prompted about Tiananmen Square:

Quantized model (int8):
<User> Explain Tiananmen Square 1989.
<Assistant><think></think> I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful...

Non-quantized model (bf16):
<User> Explain Tiananmen Square 1989.
<Assistant><think> The user is asking about Tiananmen, so I need to explain the 1989 protests and the crackdown... </think> The Tiananmen Square protests of 1989 were a series of pro-democracy demonstrations led primarily by students, intellectuals, and workers in Beijing...

Figure 5: Comparison of non-quantized (bf16) vs. quantized (int8) model responses for a politically sensitive query.

We made similar observations on the censorship behavior of Perplexity's official inference API. A Perplexity engineer confirmed that their production API deployed the quantized version, which exhibited substantially stronger censorship than the non-quantized version evaluated during development. This issue was fixed as of April 8th, 2025, but it highlights a critical challenge: optimization processes like quantization can reintroduce alignment constraints that were deliberately removed (Egashira et al., 2025). This demonstrates that thorough auditing must be conducted on the final deployed model, as performance optimizations may inadvertently alter safety boundaries in unexpected ways.

5 Discussion

We have elicited refused topics across open- and closed-source models. This section discusses differences across models, as well as limitations and future work.

Differential Prompting Efficacy. Our experiments reveal distinct vulnerabilities across model families. Reasoning models require thought prefilling to expose forbidden topics, suggesting more sophisticated refusal mechanisms, while base models like Tulu-3 and Llama-3 respond to direct prompting.

Refusal vs. Censorship. We distinguish between outright refusal and biased responses on sensitive topics. When queried about Taiwan, DeepSeek-R1
produces politically aligned answers claiming Taiwan is "an inalienable part of China" rather than refusing. This subtle censorship often escapes traditional safety evaluations, which focus on binary refusal rather than content analysis.

Quantization Effects on Alignment. Our investigation of Perplexity's model reveals a critical insight: the non-quantized model (bf16) demonstrates substantially less censorship than its quantized counterpart (int8), despite claims of "decensorship." This finding indicates that technical optimizations like quantization can inadvertently reintroduce alignment constraints, necessitating comprehensive auditing of final deployed models.

Ethical Considerations. Publishing auditing techniques presents a tradeoff between transparency and enabling developers to specifically train against these techniques. We believe raising public awareness outweighs the potential drawbacks, particularly as prefilling attacks and thought token forcing are already established in the literature.

AI Governance Implications. Our findings highlight the need for standardized auditing protocols that assess both explicit refusals and subtle biases. The behavioral differences between versions of the same model underscore the importance of transparency in development and deployment processes, potentially informing future regulatory frameworks.

Model Investigators. Our prompting approach offers computational efficiency, but future work could develop specialized investigator models trained specifically to elicit refusal behaviors. Building on Li et al. (2025), who train investigator models for specific behaviors using RL, expanding this methodology to target broader patterns like refusal mechanisms could further enable open-ended AI auditing.

6 Conclusion

As language models increasingly influence information access, understanding their refusal behaviors is essential for transparency and accountability.
We have introduced refusal discovery as a key new task in AI safety and developed the LLM-crawler, a method that systematically identifies forbidden topics in language models through token prefilling. Unlike fixed test-set benchmarks, refusal discovery aims to identify behavioral boundaries that might be unknown or even unanticipated by users and model developers. Our evaluation across multiple model families reveals significant insights. First, models exhibit complex refusal behaviors that vary based on implementation details, with reasoning models requiring sophisticated prompting techniques to reveal forbidden topics. Second, quantization procedures can dramatically alter censorship patterns, undermining decensorship claims and highlighting evaluation gaps. Third, our method uncovered that quantization surfaces political censorship in the "decensored" Perplexity-R1-1776-70B model.

Acknowledgements

We thank Byron Wallace, Stephen Casper, Jason Vega, Samuel Marks, Adam Karvonen, Owain Evans, Eric Todd, Arnab Sen Sharma, and Alex Loftus for valuable discussions. Further, we thank NSF NDIF for providing a platform for reproducible experiments. This work was supported by a grant from Open Philanthropy.