MikaStars39 committed
Commit b83c198 · verified · 1 Parent(s): 0fc5e2e

Upload folder using huggingface_hub

agent_rl/papers.csv CHANGED
@@ -1,4 +1,4 @@
1
  title,keywords,url,type
2
- """[2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?""","llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
3
- """[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data""","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
4
- """[2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models""","rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
 
1
  title,keywords,url,type
2
+ [2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
3
+ "[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
4
+ [2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
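The edit in both files is the same normalization: the old rows wrapped every title in doubled quotes (`"""…"""`), which a CSV parser reads as a field that begins and ends with a literal `"` character, while the new rows quote a field only when it actually contains a comma or a quote. A minimal sketch of the difference using Python's standard `csv` module (the row is copied from the hunk above; nothing else is assumed):

```python
import csv
import io

# A row in the old format: the title is wrapped in doubled quotes, so the
# parsed field keeps literal leading/trailing '"' characters.
old_row = ('"""[2505.17122] Shallow Preference Signals: Large Language Model '
           'Aligns Even Better with Truncated Data?""",'
           '"llm, alignment, preference",'
           'https://arxiv.org/abs/2505.17122,agent_rl')
title = next(csv.reader(io.StringIO(old_row)))[0]
print(title)  # '"[2505.17122] Shallow Preference Signals: ...?"' -- stray quotes

# Writing the same record back with the default QUOTE_MINIMAL quoting yields
# the new format: only the comma-containing keyword field stays quoted.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow([
    title.strip('"'),  # drop the stray quotes from the title
    "llm, alignment, preference",
    "https://arxiv.org/abs/2505.17122",
    "agent_rl",
])
print(buf.getvalue().rstrip())
```

With `QUOTE_MINIMAL` (the writer's default), the output reproduces the `+` rows above: the keyword field keeps its quotes because it contains commas, while the title and URL are left bare.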
all/papers.csv CHANGED
@@ -1,237 +1,237 @@
1
  title,keywords,url,type
2
- """[2210.01117] Omnigrok: Grokking Beyond Algorithmic Data""","grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
3
- """[2308.10248] Steering Language Models With Activation Engineering""",steer,https://arxiv.org/abs/2308.10248,interpretability
4
- """[2310.15213] Function Vectors in Large Language Models""","llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
5
- """[2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces""","linear, mamba",https://arxiv.org/abs/2312.00752,efficiency
6
- """[2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training""",linear,https://arxiv.org/abs/2312.06635,efficiency
7
- """[2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective""","steer, llm",https://arxiv.org/abs/2401.06824,interpretability
8
- """[2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning""","CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
9
- """[2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning""","CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
10
- """[2403.01590] The Hidden Attention of Mamba Models""","linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
11
- """[2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models""","SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
12
- """[2405.14860] Not All Language Model Features Are Linear""","SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
13
- """[2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization""","grok, llm",https://arxiv.org/abs/2405.15071,interpretability
14
- """[2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality""","linear, mamba",https://arxiv.org/abs/2405.21060,efficiency
15
- """[2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length""",linear,https://arxiv.org/abs/2406.06484,efficiency
16
- """[2406.11944] Transcoders Find Interpretable LLM Feature Circuits""",SAE,https://arxiv.org/abs/2406.11944,interpretability
17
- """[2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic interpretability Techniques""","interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
18
- """[2409.04185] Residual Stream Analysis with Multi-Layer SAEs""","SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
19
- """[2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks""",jailbreak,https://arxiv.org/abs/2410.04234,interpretability
20
- """[2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models""","SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
21
- """[2410.07656] Mechanistic Permutability: Match Features Across Layers""","SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
22
- """[2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering""",steer,https://arxiv.org/abs/2410.16314,interpretability
23
- """[2411.04330] Scaling Laws for Precision""","quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
24
- """[2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models""","hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
25
- """[2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words""",SAE,https://arxiv.org/abs/2501.06254,interpretability
26
- """[2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models""","steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
27
- """[2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity""","llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
28
- """[2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts""","steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
29
- """[2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention""","steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
30
- """[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis""","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
31
- """[2502.13632] Concept Layers: Enhancing interpretability and Intervenability via LLM Conceptualization""","concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
32
- """[2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?""","llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
33
- """[2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region""","llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
34
- """[2502.14010] Which Attention Heads Matter for In-Context Learning?""","llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
35
- """[2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information""","attention head, time",https://arxiv.org/abs/2502.14258,interpretability
36
- """[2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations""","mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
37
- """[2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization""","llm, interp",https://arxiv.org/abs/2502.15277,interpretability
38
- """[2502.15603] Do Multilingual LLMs Think In English?""","mllm, think",https://arxiv.org/abs/2502.15603,interpretability
39
- """[2502.17355] On Relation-Specific Neurons in Large Language Models""","neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
40
- """[2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence""","safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
41
- """[2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability""","SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
42
- """[2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation""","feature, llm",https://arxiv.org/abs/2503.02078,interpretability
43
- """[2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions""","steer, llm",https://arxiv.org/abs/2503.02989,interpretability
44
- """[2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions""","scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
45
- """[2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning""","test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
46
- """[2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models""","diffusion, llm, autogressive",https://arxiv.org/abs/2503.09573,interpretability
47
- """[2503.21073] Shared Global and Local Geometry of Language Model Embeddings""","geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
48
- """[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations""","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
49
- """[2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models""","representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
50
- """[2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction""","reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
51
- """[2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts""","direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
52
- """[2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers""","concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
53
- """[2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality""","sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
54
- """[2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition""","sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
55
- """[2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models""","repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
56
- """[2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning""","rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
57
- """[2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning""","edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
58
- """[2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context""","safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
59
- """[2504.02732] Why do LLMs attend to the first token?""","attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
60
- """[2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models""","sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
61
- """[2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models""","knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
62
- """[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence""","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
63
- """[2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders""","chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
64
- """[2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms""","r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
65
- """[2504.03022] The Dual-Route Model of Induction""","induction, llm",https://arxiv.org/abs/2504.03022,interpretability
66
- """[2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning""","scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
67
- """[2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs""","attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
68
- """[2504.03933] Language Models Are Implicitly Continuous""","llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
69
- """[2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic interpretability""","refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
70
- """[2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models""","llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
71
- """[2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models""","mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
72
- """[2504.04635] Steering off Course: Reliability Challenges in Steering Language Models""","steer, llm",https://arxiv.org/abs/2504.04635,interpretability
73
- """[2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs""","neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
74
- """[2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective""","repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
75
- """[2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models""","knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
76
- """[2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models""","steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
77
- """[2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models""","next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
78
- """[2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models""","diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
79
- """[2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement""","knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
80
- """[2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge""","knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
81
- """[2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models""","induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
82
- """[2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers""","causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
83
- """[2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations""","metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
84
- """[2505.13898] Do Language Models Use Their Depth Efficiently?""","model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
85
- """[2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering""","temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
86
- """[2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering""","temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
87
- """[2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits""","tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
88
- """[2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study""","safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
89
- """[2505.14233] Mechanistic Fine-tuning for In-context Learning""","mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
90
- """[2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models""","attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
91
- """[2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability""","latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
92
- """[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis""","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
93
- """[2505.14467] Void in Language Models""","void, language models",https://arxiv.org/abs/2505.14467,interpretability
94
- """[2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders""","detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
95
- """[2505.14685] Language Models use Lookbacks to Track Beliefs""","lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
96
- """[2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?""","llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
97
- """[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data""","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
98
- """[2505.17630] GIM: Improved interpretability for Large Language Models""","llm, interp",https://arxiv.org/abs/2505.17630,interpretability
99
- """[2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality""","transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
100
- """[2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors""","steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
101
- """[2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning""","in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
102
- """[2505.17073] Mechanistic interpretability of GPT-like Models on Summarization Tasks""","mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
103
- """[2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models""","activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
104
- """[2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs""","interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
105
- """[2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations""","interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
106
- """[2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models""","interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
107
- """[2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models""","knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
108
- """[2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition""","attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
109
- """[2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings""","prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
110
- """[2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives""","pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
111
- """[2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace""","toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
112
- """[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models""","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
113
- """[2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?""","representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
114
- """[2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning""","attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
115
- """[2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing""","representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
116
- """[2505.18588] Safety Alignment via Constrained Knowledge Unlearning""","safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
117
- """[2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation""","steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
118
- """[2505.19488] Understanding Transformer from the Perspective of Associative Memory""","transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
119
- """[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior""","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
120
- """[2505.18235] The Origins of Representation Manifolds in Large Language Models""","representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
121
- """[2505.20063] SAEs Are Good for Steering -- If You Select the Right Features""","SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
122
- """[2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs""","attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
123
- """[2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge""","llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
124
- """[2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse""","rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
125
- """[2505.22586] Precise In-Parameter Concept Erasure in Large Language Models""","llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
126
- """[2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs""","llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
127
- """[2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs""","llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
128
- """[2505.21785] Born a Transformer -- Always a Transformer?""","transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
129
- """[2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry""","sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
130
- """[2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering""","llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
131
- """[2505.22572] Fusion Steering: Prompt-Specific Activation Control""","steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
132
- """[2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability""","llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
133
- """[2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation""","SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
134
- """[2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models""","rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
135
- """[2506.01034] Less is More: Local Intrinsic Dimensions of Contextual Language Models""","llm, dimension",https://arxiv.org/abs/2506.01034,Interpretability
136
- """[2506.01042] Probing Neural Topology of Large Language Models""","llm, probing, topology",https://arxiv.org/abs/2506.01042,Interpretability
137
- """[2506.01074] How Programming Concepts and Neurons Are Shared in Code Language Models - ACL 2025 Findings""","code, llm, neuron",https://arxiv.org/abs/2506.01074,Interpretability
138
- """[2506.01115] Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer""","attention, mlp, transformer",https://arxiv.org/abs/2506.01115,Interpretability
139
- """[2506.00653] Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models""","steer, llm, transferability",https://arxiv.org/abs/2506.00653,Interpretability
140
- """[2506.00382] Spectral Insights into Data-Oblivious Critical Layers in Large Language Models - ACL 2025 Findings""","llm, spectral",https://arxiv.org/abs/2506.00382,Interpretability
141
- """[2506.00085] COSMIC: Generalized Refusal Direction Identification in LLM Activations - ACL 2025 Findings""","llm, activation",https://arxiv.org/abs/2506.00085,Interpretability
142
- """[2506.00823] Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks - ACL 2025 Findings""","llm, probing, truth",https://arxiv.org/abs/2506.00823,Interpretability
143
- """[2506.00772] LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning - ICML 2025""","reasoning, fine-tuning",https://arxiv.org/abs/2506.00772,Interpretability
144
- """[2506.01939] Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning - Qwen""","rl, llm, reasoning",https://arxiv.org/abs/2506.01939,Agent/RL
145
- """[2506.00799] Uni-LoRA: One Vector is All You Need""","LoRA, efficient",https://arxiv.org/abs/2506.00799,Efficiency
146
- """[2505.24293] Large Language Models are Locally Linear Mappings""","llm, linear",https://arxiv.org/abs/2505.24293,Interpretability
147
- """[2505.24731] Circuit Stability Characterizes Language Model Generalization""","llm, generalization, circuit",https://arxiv.org/abs/2505.24731,Interpretability
148
- """[2505.24832] How much do language models memorize?""","llm, memorize",https://arxiv.org/abs/2505.24832,Interpretability
149
- """[2505.24244] Mamba Knockout for Unraveling Factual Information Flow - ACL 2025""","mamba, factual",https://arxiv.org/abs/2505.24244,Interpretability
150
- """[2505.24688] Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration - ICML 2025""","reasoning, llm, embedding",https://arxiv.org/abs/2505.24688,Interpretability
151
- """[2505.24428] Model Unlearning via Sparse Autoencoder Subspace Guided Projections""","unlearning, sae",https://arxiv.org/abs/2505.24428,Interpretability
152
- """[2505.24009] Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws""","transformer, scaling laws",https://arxiv.org/abs/2505.24009,Interpretability
153
- """[2505.24539] Localizing Persona Representations in LLMs""","llm, persona",https://arxiv.org/abs/2505.24539,Interpretability
154
- """[2505.24535] Beyond Linear Steering: Unified Multi-Attribute Control for Language Models""","steer, llm",https://arxiv.org/abs/2505.24535,Interpretability
155
- """[2505.24362] Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion""","llm, CoT",https://arxiv.org/abs/2505.24362,Interpretability
156
- """[2505.24360] Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning""","diffusion, interp",https://arxiv.org/abs/2505.24360,Interpretability
157
- """[2505.23911] One Task Vector is not Enough: A Large-Scale Study for In-Context Learning""",in-context learning,https://arxiv.org/abs/2505.23911,Interpretability
158
- """[2505.24473] Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy""","sae, interp",https://arxiv.org/abs/2505.24473,Interpretability
159
- """[2505.23013] Scalable Complexity Control Facilitates Reasoning Ability of LLMs""","reasoning, llm, complexity",https://arxiv.org/abs/2505.23013,Interpretability
160
- """[2505.23556] Understanding Refusal in Language Models with Sparse Autoencoders""","refusal, llm, SAE",https://arxiv.org/abs/2505.23556,Interpretability
161
- """[2505.23653] How does Transformer Learn Implicit Reasoning?""","transformer, reasoning, implicit",https://arxiv.org/abs/2505.23653,Interpretability
162
- """[2505.23657] Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation""","hallucination, llm, decoding",https://arxiv.org/abs/2505.23657,Efficiency
163
- """[2505.23701] Can LLMs Reason Abstractly Over Math Word Problems Without CoT? Disentangling Abstract Formulation From Arithmetic Computation""","llm, reasoning, CoT",https://arxiv.org/abs/2505.23701,Interpretability
164
- """[2505.22689] SlimLLM: Accurate Structured Pruning for Large Language Models""","pruning, llm, efficiency",https://arxiv.org/abs/2505.22689,Efficiency
165
- """[2505.22756] Decomposing Elements of Problem Solving: What ""Math"" Does RL Teach?""","RL, math, problem solving",https://arxiv.org/abs/2505.22756,Agent/RL
166
- """[2506.02132] Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models""","llm, interp, morphology",https://arxiv.org/abs/2506.02132,Interpretability
167
- """[2506.02996] Linear Spatial World Models Emerge in Large Language Models""","llm, world models, linear",https://arxiv.org/abs/2506.02996,Interpretability
168
- """[2506.02701] On Entity Identification in Language Models""","llm, entity identification",https://arxiv.org/abs/2506.02701,Interpretability
169
- """[2506.02867] Demystifying Reasoning Dynamics with Mutual Information: Thinking Tokens are Information Peaks in LLM Reasoning""","llm, reasoning, mutual information",https://arxiv.org/abs/2506.02867,Interpretability
170
- """[2506.03434] Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models""","llm, interp, mechanistic interpretability",https://arxiv.org/abs/2506.03434,Interpretability
171
- """[2506.04142] Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis""","llm, evaluation, shortcut neurons",https://arxiv.org/abs/2506.04142,Interpretability
172
- """[2506.03292] HyperSteer: Activation Steering at Scale with Hypernetworks""","steer, llm, hypernetworks",https://arxiv.org/abs/2506.03292,Agent/RL
173
- """[2506.03426] Adaptive Task Vectors for Large Language Models""","llm, task vectors",https://arxiv.org/abs/2506.03426,Agent/RL
174
- """[2506.01347] The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning""","RL, LLM, reasoning",https://arxiv.org/abs/2506.01347,Agent/RL
175
- """[2506.17673] FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies""","SAE, interp, llm",https://arxiv.org/abs/2506.17673,Interpretability
176
- """[2506.18053] Mechanistic Interpretability in the Presence of Architectural Obfuscation""","interp, mechanistic",https://arxiv.org/abs/2506.18053,Interpretability
177
- """[2506.18167] Understanding Reasoning in Thinking Language Models via Steering Vectors - Neel Nanda""","steer, llm, reasoning",https://arxiv.org/abs/2506.18167,Interpretability
178
- """[2506.18141] Sparse Feature Coactivation Reveals Composable Semantic Modules in Large Language Models""","interp, llm, feature",https://arxiv.org/abs/2506.18141,Interpretability
179
- """[2506.18887] Steering Conceptual Bias via Transformer Latent-Subspace Activation""","steer, llm, bias",https://arxiv.org/abs/2506.18887,Interpretability
180
- """[2506.18233] The 4th Dimension for Scaling Model Size""","scaling, efficiency",https://arxiv.org/abs/2506.18233,Efficiency
181
- """[2506.18852] Mechanistic Interpretability Needs Philosophy""","interp, mechanistic, philosophy",https://arxiv.org/abs/2506.18852,Interpretability
182
- """[2506.17859] In-Context Learning Strategies Emerge Rationally""","in-context learning, llm",https://arxiv.org/abs/2506.17859,Interpretability
183
- """[2506.16975] Latent Concept Disentanglement in Transformer-based Language Models""","llm, interp, disentanglement",https://arxiv.org/abs/2506.16975,Interpretability
184
- """[2506.16078] Probing the Robustness of Large Language Models Safety to Latent Perturbations""","llm, safety, robustness",https://arxiv.org/abs/2506.16078,Interpretability
185
- """[2506.16678] Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations""","llm, interp, syntax",https://arxiv.org/abs/2506.16678,Interpretability
186
- """[2506.17052] From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers""","llm, attention, interp",https://arxiv.org/abs/2506.17052,Interpretability
187
- """[2506.17090] Better Language Model Inversion by Compactly Representing Next-Token Distributions""","llm, inversion",https://arxiv.org/abs/2506.17090,Interpretability
188
- """[2506.15872] Hidden Breakthroughs in Language Model Training""","llm, training",https://arxiv.org/abs/2506.15872,Efficiency
189
- """[2506.15710] RAST: Reasoning Activation in LLMs via Small-model Transfer""","llm, reasoning",https://arxiv.org/abs/2506.15710,Interpretability
190
- """[2506.15735] ContextBench: Modifying Contexts for Targeted Latent Activation""","llm, activation, interp",https://arxiv.org/abs/2506.15735,Interpretability
191
- """[2506.16406] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights""","llm, prompting",https://arxiv.org/abs/2506.16406,Interpretability
192
- """[2506.15963] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond""","SAE, interp, theory",https://arxiv.org/abs/2506.15963,Interpretability
193
- """[2506.15679] Dense SAE Latents Are Features, Not Bugs""","SAE, interp, features",https://arxiv.org/abs/2506.15679,Interpretability
194
- """[2506.15606] LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning""","llm, safety, fine-tuning",https://arxiv.org/abs/2506.15606,Interpretability
195
- """[2506.15647] Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement""","llm, reasoning, efficiency",https://arxiv.org/abs/2506.15647,Efficiency
196
- """[2506.12152] Because we have LLMs, we Can and Should Pursue Agentic Interpretability""","agent, interp, llm",https://arxiv.org/abs/2506.12152,Interpretability
197
- """[2506.12576] Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders""","llm, SAE, alignment",https://arxiv.org/abs/2506.12576,Interpretability
198
- """[2506.12217] From Emergence to Control: Probing and Modulating Self-Reflection in Language Models""","llm, control, interp",https://arxiv.org/abs/2506.12217,Interpretability
199
- """[2506.13206] Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models""","llm, safety, reasoning",https://arxiv.org/abs/2506.13206,Interpretability
200
- """[2506.13216] Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law""","llm, scaling",https://arxiv.org/abs/2506.13216,Efficiency
201
- """[2506.13688] What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers""","llm, training",https://arxiv.org/abs/2506.13688,Efficiency
202
- """[2506.12880] Universal Jailbreak Suffixes Are Strong Attention Hijackers""","llm, safety, attention",https://arxiv.org/abs/2506.12880,Interpretability
203
- """[2506.13674] Prefix-Tuning+: Modernizing Prefix-Tuning through Attention Independent Prefix Data""","llm, tuning",https://arxiv.org/abs/2506.13674,Efficiency
204
- """[2506.12119] Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?""","llm, MoE, efficiency",https://arxiv.org/abs/2506.12119,Efficiency
205
- """[2506.13752] Steering LLM Thinking with Budget Guidance""","steer, llm, reasoning",https://arxiv.org/abs/2506.13752,Interpretability
206
- """[2506.13734] Instruction Following by Boosting Attention of Large Language Models""","llm, attention",https://arxiv.org/abs/2506.13734,Interpretability
207
- """[2506.11618] Convergent Linear Representations of Emergent Misalignment - Neel Nanda""","llm, misalignment, interp",https://arxiv.org/abs/2506.11618,Interpretability
208
- """[2506.11613] Model Organisms for Emergent Misalignment - Neel Nanda""","llm, misalignment, interp",https://arxiv.org/abs/2506.11613,Interpretability
209
- """[2506.11976] How Visual Representations Map to Language Feature Space in Multimodal LLMs - Neel Nanda""","llm, multimodal, interp",https://arxiv.org/abs/2506.11976,Interpretability
210
- """[2506.11088] Two Birds with One Stone: Improving Factuality and Faithfulness of LLMs via Dynamic Interactive Subspace Editing""","llm, factuality, faithfulness",https://arxiv.org/abs/2506.11088,Interpretability
211
- """[2506.11769] Long-Short Alignment for Effective Long-Context Modeling in LLMs - ICML 2025""","llm, long-context",https://arxiv.org/abs/2506.11769,Efficiency
212
- """[2506.11123] Sparse Autoencoders Bridge The Deep Learning Model and The Brain""","SAE, interp",https://arxiv.org/abs/2506.11123,Interpretability
213
- """[2506.10641] Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters""","llm, tokenization",https://arxiv.org/abs/2506.10641,Interpretability
214
- """[2506.10920] Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization""","llm, interp, features",https://arxiv.org/abs/2506.10920,Interpretability
215
- """[2506.10887] Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers""","llm, reasoning, generalization",https://arxiv.org/abs/2506.10887,Interpretability
216
- """[2506.10922] Robustly Improving LLM Fairness in Realistic Settings via Interpretability""","llm, fairness, interp",https://arxiv.org/abs/2506.10922,Interpretability
217
- """[2506.09099] Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers""","llm, generalization, memorization",https://arxiv.org/abs/2506.09099,Interpretability
218
- """[2506.09277] Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models""","llm, interp, self-explanation",https://arxiv.org/abs/2506.09277,Interpretability
219
- """[2506.09890] The Emergence of Abstract Thought in Large Language Models Beyond Any Language""","llm, abstract thought",https://arxiv.org/abs/2506.09890,Interpretability
220
- """[2506.09251] Extrapolation by Association: Length Generalization Transfer in Transformers""","llm, generalization",https://arxiv.org/abs/2506.09251,Efficiency
221
- """[2506.08427] Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models""","llm, interp, knowledge",https://arxiv.org/abs/2506.08427,Interpretability
222
- """[2506.08359] DEAL: Disentangling Transformer Head Activations for LLM Steering""","steer, llm, interp",https://arxiv.org/abs/2506.08359,Interpretability
223
- """[2506.08572] The Geometries of Truth Are Orthogonal Across Tasks""","llm, truth",https://arxiv.org/abs/2506.08572,Interpretability
224
- """[2506.08473] AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin""","llm, safety, fine-tuning",https://arxiv.org/abs/2506.08473,Interpretability
225
- """[2506.09048] Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations""","llm, in-context learning",https://arxiv.org/abs/2506.09048,Interpretability
226
- """[2506.08184] Unable to forget: Proactive lnterference Reveals Working Memory Limits in LLMs Beyond Context Length""","llm, memory",https://arxiv.org/abs/2506.08184,Interpretability
227
- """[2506.08966] Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers""","llm, representation",https://arxiv.org/abs/2506.08966,Interpretability
228
- """[2506.09047] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs""","llm, multimodal, interp",https://arxiv.org/abs/2506.09047,Interpretability
229
- """[2506.08552] Efficient Post-Training Refinement of Latent Reasoning in Large Language Models""","llm, reasoning, efficiency",https://arxiv.org/abs/2506.08552,Efficiency
230
- """[2506.07406] InverseScope: Scalable Activation Inversion for Interpreting Large Language Models""","llm, interp, activation inversion",https://arxiv.org/abs/2506.07406,Interpretability
231
- """[2506.07691] Training Superior Sparse Autoencoders for Instruct Models""","SAE, llm, training",https://arxiv.org/abs/2506.07691,Interpretability
232
- """[2506.07335] Improving LLM Reasoning through Interpretable Role-Playing Steering""","steer, llm, reasoning",https://arxiv.org/abs/2506.07335,Interpretability
233
- """[2506.06686] Learning Distribution-Wise Control in Representation Space for Language Models""","llm, control, representation",https://arxiv.org/abs/2506.06686,Interpretability
234
- """[2506.06609] Transferring Features Across Language Models With Model Stitching""","llm, transfer learning",https://arxiv.org/abs/2506.06609,Efficiency
235
- """[2506.07240] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs""","llm, reasoning, control",https://arxiv.org/abs/2506.07240,Interpretability
236
- """[2506.06105] Text-to-LoRA: Instant Transformer Adaption""","llm, LoRA, adaption",https://arxiv.org/abs/2506.06105,Efficiency
237
- """[2506.06607] Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit""","tokenizer, llm",https://arxiv.org/abs/2506.06607,Efficiency
 
1
  title,keywords,url,type
2
+ [2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
3
+ [2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,interpretability
4
+ [2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
5
+ [2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,efficiency
6
+ [2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,efficiency
7
+ [2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,interpretability
8
+ [2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
9
+ [2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
10
+ [2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
11
+ [2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
12
+ [2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
13
+ [2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,interpretability
14
+ [2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,efficiency
15
+ [2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,efficiency
16
+ [2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,interpretability
17
+ [2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
18
+ [2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
19
+ [2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,interpretability
20
+ [2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
21
+ [2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
22
+ [2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,interpretability
23
+ [2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
24
+ [2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
25
+ [2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,interpretability
26
+ [2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
27
+ [2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
28
+ [2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
29
+ [2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
30
+ "[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
31
+ [2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
32
+ [2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
33
+ [2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
34
+ [2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
35
+ [2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,interpretability
36
+ [2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
37
+ [2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,interpretability
38
+ [2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,interpretability
39
+ [2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
40
+ [2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
41
+ [2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
42
+ [2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,interpretability
43
+ [2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,interpretability
44
+ [2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
45
+ [2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
46
+ [2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autoregressive",https://arxiv.org/abs/2503.09573,interpretability
47
+ [2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
48
+ "[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
49
+ [2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
50
+ [2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
51
+ [2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
52
+ [2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
53
+ [2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
54
+ [2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
55
+ [2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
56
+ [2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
57
+ [2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
58
+ [2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
59
+ [2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
60
+ [2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
61
+ [2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
62
+ "[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
63
+ [2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
64
+ [2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
65
+ [2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,interpretability
66
+ [2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
67
+ [2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
68
+ [2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
69
+ [2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
70
+ [2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
71
+ [2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
72
+ [2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,interpretability
73
+ [2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
74
+ [2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
75
+ [2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
76
+ [2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
77
+ [2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
78
+ [2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
79
+ [2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
80
+ [2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
81
+ [2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
82
+ [2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
83
+ [2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
84
+ [2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
85
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
+ [2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
+ [2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
+ [2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
+ [2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
+ [2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
+ "[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
+ [2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,interpretability
+ [2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
+ [2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
+ [2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
+ "[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
+ [2505.17630] GIM: Improved interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,interpretability
+ [2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
+ [2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
+ [2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
+ [2505.17073] Mechanistic interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
+ [2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
+ [2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
+ [2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
+ [2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
+ [2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
+ [2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
+ [2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
+ [2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
+ [2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
+ "[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
+ [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
+ [2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
+ [2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
+ [2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
+ [2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
+ [2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
+ "[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
+ [2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
+ [2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
+ [2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
+ [2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
+ [2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
+ [2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
+ [2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
+ [2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
+ [2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
+ [2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
+ [2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
+ [2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
+ [2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
+ [2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
+ [2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
+ [2506.01034] Less is More: Local Intrinsic Dimensions of Contextual Language Models,"llm, dimension",https://arxiv.org/abs/2506.01034,interpretability
+ [2506.01042] Probing Neural Topology of Large Language Models,"llm, probing, topology",https://arxiv.org/abs/2506.01042,interpretability
+ [2506.01074] How Programming Concepts and Neurons Are Shared in Code Language Models - ACL 2025 Findings,"code, llm, neuron",https://arxiv.org/abs/2506.01074,interpretability
+ "[2506.01115] Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer","attention, mlp, transformer",https://arxiv.org/abs/2506.01115,interpretability
+ [2506.00653] Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models,"steer, llm, transferability",https://arxiv.org/abs/2506.00653,interpretability
+ [2506.00382] Spectral Insights into Data-Oblivious Critical Layers in Large Language Models - ACL 2025 Findings,"llm, spectral",https://arxiv.org/abs/2506.00382,interpretability
+ [2506.00085] COSMIC: Generalized Refusal Direction Identification in LLM Activations - ACL 2025 Findings,"llm, activation",https://arxiv.org/abs/2506.00085,interpretability
+ [2506.00823] Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks - ACL 2025 Findings,"llm, probing, truth",https://arxiv.org/abs/2506.00823,interpretability
+ [2506.00772] LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning - ICML 2025,"reasoning, fine-tuning",https://arxiv.org/abs/2506.00772,interpretability
+ [2506.01939] Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning - Qwen,"rl, llm, reasoning",https://arxiv.org/abs/2506.01939,agent_rl
+ [2506.00799] Uni-LoRA: One Vector is All You Need,"LoRA, efficient",https://arxiv.org/abs/2506.00799,efficiency
+ [2505.24293] Large Language Models are Locally Linear Mappings,"llm, linear",https://arxiv.org/abs/2505.24293,interpretability
+ [2505.24731] Circuit Stability Characterizes Language Model Generalization,"llm, generalization, circuit",https://arxiv.org/abs/2505.24731,interpretability
+ [2505.24832] How much do language models memorize?,"llm, memorize",https://arxiv.org/abs/2505.24832,interpretability
+ [2505.24244] Mamba Knockout for Unraveling Factual Information Flow - ACL 2025,"mamba, factual",https://arxiv.org/abs/2505.24244,interpretability
+ [2505.24688] Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration - ICML 2025,"reasoning, llm, embedding",https://arxiv.org/abs/2505.24688,interpretability
+ [2505.24428] Model Unlearning via Sparse Autoencoder Subspace Guided Projections,"unlearning, sae",https://arxiv.org/abs/2505.24428,interpretability
+ [2505.24009] Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws,"transformer, scaling laws",https://arxiv.org/abs/2505.24009,interpretability
+ [2505.24539] Localizing Persona Representations in LLMs,"llm, persona",https://arxiv.org/abs/2505.24539,interpretability
+ [2505.24535] Beyond Linear Steering: Unified Multi-Attribute Control for Language Models,"steer, llm",https://arxiv.org/abs/2505.24535,interpretability
+ [2505.24362] Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion,"llm, CoT",https://arxiv.org/abs/2505.24362,interpretability
+ [2505.24360] Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning,"diffusion, interp",https://arxiv.org/abs/2505.24360,interpretability
+ [2505.23911] One Task Vector is not Enough: A Large-Scale Study for In-Context Learning,in-context learning,https://arxiv.org/abs/2505.23911,interpretability
+ [2505.24473] Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy,"sae, interp",https://arxiv.org/abs/2505.24473,interpretability
+ [2505.23013] Scalable Complexity Control Facilitates Reasoning Ability of LLMs,"reasoning, llm, complexity",https://arxiv.org/abs/2505.23013,interpretability
+ [2505.23556] Understanding Refusal in Language Models with Sparse Autoencoders,"refusal, llm, SAE",https://arxiv.org/abs/2505.23556,interpretability
+ [2505.23653] How does Transformer Learn Implicit Reasoning?,"transformer, reasoning, implicit",https://arxiv.org/abs/2505.23653,interpretability
+ [2505.23657] Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation,"hallucination, llm, decoding",https://arxiv.org/abs/2505.23657,efficiency
+ [2505.23701] Can LLMs Reason Abstractly Over Math Word Problems Without CoT? Disentangling Abstract Formulation From Arithmetic Computation,"llm, reasoning, CoT",https://arxiv.org/abs/2505.23701,interpretability
+ [2505.22689] SlimLLM: Accurate Structured Pruning for Large Language Models,"pruning, llm, efficiency",https://arxiv.org/abs/2505.22689,efficiency
+ "[2505.22756] Decomposing Elements of Problem Solving: What ""Math"" Does RL Teach?","RL, math, problem solving",https://arxiv.org/abs/2505.22756,agent_rl
+ [2506.02132] Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models,"llm, interp, morphology",https://arxiv.org/abs/2506.02132,interpretability
+ [2506.02996] Linear Spatial World Models Emerge in Large Language Models,"llm, world models, linear",https://arxiv.org/abs/2506.02996,interpretability
+ [2506.02701] On Entity Identification in Language Models,"llm, entity identification",https://arxiv.org/abs/2506.02701,interpretability
+ [2506.02867] Demystifying Reasoning Dynamics with Mutual Information: Thinking Tokens are Information Peaks in LLM Reasoning,"llm, reasoning, mutual information",https://arxiv.org/abs/2506.02867,interpretability
+ [2506.03434] Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models,"llm, interp, mechanistic interpretability",https://arxiv.org/abs/2506.03434,interpretability
+ [2506.04142] Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis,"llm, evaluation, shortcut neurons",https://arxiv.org/abs/2506.04142,interpretability
+ [2506.03292] HyperSteer: Activation Steering at Scale with Hypernetworks,"steer, llm, hypernetworks",https://arxiv.org/abs/2506.03292,agent_rl
+ [2506.03426] Adaptive Task Vectors for Large Language Models,"llm, task vectors",https://arxiv.org/abs/2506.03426,agent_rl
+ [2506.01347] The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning,"RL, LLM, reasoning",https://arxiv.org/abs/2506.01347,agent_rl
+ [2506.17673] FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies,"SAE, interp, llm",https://arxiv.org/abs/2506.17673,interpretability
+ [2506.18053] Mechanistic Interpretability in the Presence of Architectural Obfuscation,"interp, mechanistic",https://arxiv.org/abs/2506.18053,interpretability
+ [2506.18167] Understanding Reasoning in Thinking Language Models via Steering Vectors - Neel Nanda,"steer, llm, reasoning",https://arxiv.org/abs/2506.18167,interpretability
+ [2506.18141] Sparse Feature Coactivation Reveals Composable Semantic Modules in Large Language Models,"interp, llm, feature",https://arxiv.org/abs/2506.18141,interpretability
+ [2506.18887] Steering Conceptual Bias via Transformer Latent-Subspace Activation,"steer, llm, bias",https://arxiv.org/abs/2506.18887,interpretability
+ [2506.18233] The 4th Dimension for Scaling Model Size,"scaling, efficiency",https://arxiv.org/abs/2506.18233,efficiency
+ [2506.18852] Mechanistic Interpretability Needs Philosophy,"interp, mechanistic, philosophy",https://arxiv.org/abs/2506.18852,interpretability
+ [2506.17859] In-Context Learning Strategies Emerge Rationally,"in-context learning, llm",https://arxiv.org/abs/2506.17859,interpretability
+ [2506.16975] Latent Concept Disentanglement in Transformer-based Language Models,"llm, interp, disentanglement",https://arxiv.org/abs/2506.16975,interpretability
+ [2506.16078] Probing the Robustness of Large Language Models Safety to Latent Perturbations,"llm, safety, robustness",https://arxiv.org/abs/2506.16078,interpretability
+ [2506.16678] Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations,"llm, interp, syntax",https://arxiv.org/abs/2506.16678,interpretability
+ [2506.17052] From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers,"llm, attention, interp",https://arxiv.org/abs/2506.17052,interpretability
+ [2506.17090] Better Language Model Inversion by Compactly Representing Next-Token Distributions,"llm, inversion",https://arxiv.org/abs/2506.17090,interpretability
+ [2506.15872] Hidden Breakthroughs in Language Model Training,"llm, training",https://arxiv.org/abs/2506.15872,efficiency
+ [2506.15710] RAST: Reasoning Activation in LLMs via Small-model Transfer,"llm, reasoning",https://arxiv.org/abs/2506.15710,interpretability
+ [2506.15735] ContextBench: Modifying Contexts for Targeted Latent Activation,"llm, activation, interp",https://arxiv.org/abs/2506.15735,interpretability
+ [2506.16406] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights,"llm, prompting",https://arxiv.org/abs/2506.16406,interpretability
+ [2506.15963] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond,"SAE, interp, theory",https://arxiv.org/abs/2506.15963,interpretability
+ "[2506.15679] Dense SAE Latents Are Features, Not Bugs","SAE, interp, features",https://arxiv.org/abs/2506.15679,interpretability
+ [2506.15606] LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.15606,interpretability
+ [2506.15647] Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.15647,efficiency
+ "[2506.12152] Because we have LLMs, we Can and Should Pursue Agentic Interpretability","agent, interp, llm",https://arxiv.org/abs/2506.12152,interpretability
+ [2506.12576] Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders,"llm, SAE, alignment",https://arxiv.org/abs/2506.12576,interpretability
+ [2506.12217] From Emergence to Control: Probing and Modulating Self-Reflection in Language Models,"llm, control, interp",https://arxiv.org/abs/2506.12217,interpretability
+ [2506.13206] Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models,"llm, safety, reasoning",https://arxiv.org/abs/2506.13206,interpretability
+ [2506.13216] Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law,"llm, scaling",https://arxiv.org/abs/2506.13216,efficiency
+ [2506.13688] What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers,"llm, training",https://arxiv.org/abs/2506.13688,efficiency
+ [2506.12880] Universal Jailbreak Suffixes Are Strong Attention Hijackers,"llm, safety, attention",https://arxiv.org/abs/2506.12880,interpretability
+ [2506.13674] Prefix-Tuning+: Modernizing Prefix-Tuning through Attention Independent Prefix Data,"llm, tuning",https://arxiv.org/abs/2506.13674,efficiency
+ [2506.12119] Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?,"llm, MoE, efficiency",https://arxiv.org/abs/2506.12119,efficiency
+ [2506.13752] Steering LLM Thinking with Budget Guidance,"steer, llm, reasoning",https://arxiv.org/abs/2506.13752,interpretability
+ [2506.13734] Instruction Following by Boosting Attention of Large Language Models,"llm, attention",https://arxiv.org/abs/2506.13734,interpretability
+ [2506.11618] Convergent Linear Representations of Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11618,interpretability
+ [2506.11613] Model Organisms for Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11613,interpretability
+ [2506.11976] How Visual Representations Map to Language Feature Space in Multimodal LLMs - Neel Nanda,"llm, multimodal, interp",https://arxiv.org/abs/2506.11976,interpretability
+ [2506.11088] Two Birds with One Stone: Improving Factuality and Faithfulness of LLMs via Dynamic Interactive Subspace Editing,"llm, factuality, faithfulness",https://arxiv.org/abs/2506.11088,interpretability
+ [2506.11769] Long-Short Alignment for Effective Long-Context Modeling in LLMs - ICML 2025,"llm, long-context",https://arxiv.org/abs/2506.11769,efficiency
+ [2506.11123] Sparse Autoencoders Bridge The Deep Learning Model and The Brain,"SAE, interp",https://arxiv.org/abs/2506.11123,interpretability
+ [2506.10641] Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters,"llm, tokenization",https://arxiv.org/abs/2506.10641,interpretability
+ [2506.10920] Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization,"llm, interp, features",https://arxiv.org/abs/2506.10920,interpretability
+ [2506.10887] Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers,"llm, reasoning, generalization",https://arxiv.org/abs/2506.10887,interpretability
+ [2506.10922] Robustly Improving LLM Fairness in Realistic Settings via Interpretability,"llm, fairness, interp",https://arxiv.org/abs/2506.10922,interpretability
+ "[2506.09099] Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers","llm, generalization, memorization",https://arxiv.org/abs/2506.09099,interpretability
+ [2506.09277] Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models,"llm, interp, self-explanation",https://arxiv.org/abs/2506.09277,interpretability
+ [2506.09890] The Emergence of Abstract Thought in Large Language Models Beyond Any Language,"llm, abstract thought",https://arxiv.org/abs/2506.09890,interpretability
+ [2506.09251] Extrapolation by Association: Length Generalization Transfer in Transformers,"llm, generalization",https://arxiv.org/abs/2506.09251,efficiency
+ [2506.08427] Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models,"llm, interp, knowledge",https://arxiv.org/abs/2506.08427,interpretability
+ [2506.08359] DEAL: Disentangling Transformer Head Activations for LLM Steering,"steer, llm, interp",https://arxiv.org/abs/2506.08359,interpretability
+ [2506.08572] The Geometries of Truth Are Orthogonal Across Tasks,"llm, truth",https://arxiv.org/abs/2506.08572,interpretability
+ [2506.08473] AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.08473,interpretability
+ "[2506.09048] Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations","llm, in-context learning",https://arxiv.org/abs/2506.09048,interpretability
+ [2506.08184] Unable to forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length,"llm, memory",https://arxiv.org/abs/2506.08184,interpretability
+ [2506.08966] Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers,"llm, representation",https://arxiv.org/abs/2506.08966,interpretability
+ "[2506.09047] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs","llm, multimodal, interp",https://arxiv.org/abs/2506.09047,interpretability
+ [2506.08552] Efficient Post-Training Refinement of Latent Reasoning in Large Language Models,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.08552,efficiency
+ [2506.07406] InverseScope: Scalable Activation Inversion for Interpreting Large Language Models,"llm, interp, activation inversion",https://arxiv.org/abs/2506.07406,interpretability
+ [2506.07691] Training Superior Sparse Autoencoders for Instruct Models,"SAE, llm, training",https://arxiv.org/abs/2506.07691,interpretability
+ [2506.07335] Improving LLM Reasoning through Interpretable Role-Playing Steering,"steer, llm, reasoning",https://arxiv.org/abs/2506.07335,interpretability
+ [2506.06686] Learning Distribution-Wise Control in Representation Space for Language Models,"llm, control, representation",https://arxiv.org/abs/2506.06686,interpretability
+ [2506.06609] Transferring Features Across Language Models With Model Stitching,"llm, transfer learning",https://arxiv.org/abs/2506.06609,efficiency
+ [2506.07240] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs,"llm, reasoning, control",https://arxiv.org/abs/2506.07240,interpretability
+ [2506.06105] Text-to-LoRA: Instant Transformer Adaption,"llm, LoRA, adaption",https://arxiv.org/abs/2506.06105,efficiency
+ [2506.06607] Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit,"tokenizer, llm",https://arxiv.org/abs/2506.06607,efficiency
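The '+' rows above drop the old triple-quote wrapping in favor of standard CSV quoting. As a minimal sketch of that convention (illustrative only, not necessarily how this commit was generated; the file name papers.csv is a stand-in), Python's csv module with its default QUOTE_MINIMAL dialect quotes a field only when it contains a comma or a quote, and escapes embedded quotes by doubling them:

import csv

# One row whose title contains embedded quotes (see the 2505.22756 entry above).
rows = [
    ("[2505.22756] Decomposing Elements of Problem Solving: What \"Math\" Does RL Teach?",
     "RL, math, problem solving",
     "https://arxiv.org/abs/2505.22756",
     "agent_rl"),
]

with open("papers.csv", "w", newline="") as f:
    writer = csv.writer(f)  # QUOTE_MINIMAL: quote only fields containing commas or quotes
    writer.writerow(["title", "keywords", "url", "type"])
    writer.writerows(rows)

Run on the sample row, this writes the title as "...What ""Math"" Does RL Teach?" and the keyword list as "RL, math, problem solving", matching the quoting visible in the new rows of this diff.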
efficiency/papers.csv CHANGED
@@ -1,26 +1,26 @@
 title,keywords,url,type
- """[2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces""","linear, mamba",https://arxiv.org/abs/2312.00752,efficiency
- """[2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training""",linear,https://arxiv.org/abs/2312.06635,efficiency
- """[2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality""","linear, mamba",https://arxiv.org/abs/2405.21060,efficiency
- """[2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length""",linear,https://arxiv.org/abs/2406.06484,efficiency
- """[2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs""","attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
- """[2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse""","rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
- """[2505.21785] Born a Transformer -- Always a Transformer?""","transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
- """[2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry""","sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
- """[2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation""","SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
- """[2506.00799] Uni-LoRA: One Vector is All You Need""","LoRA, efficient",https://arxiv.org/abs/2506.00799,Efficiency
- """[2505.23657] Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation""","hallucination, llm, decoding",https://arxiv.org/abs/2505.23657,Efficiency
- """[2505.22689] SlimLLM: Accurate Structured Pruning for Large Language Models""","pruning, llm, efficiency",https://arxiv.org/abs/2505.22689,Efficiency
- """[2506.18233] The 4th Dimension for Scaling Model Size""","scaling, efficiency",https://arxiv.org/abs/2506.18233,Efficiency
- """[2506.15872] Hidden Breakthroughs in Language Model Training""","llm, training",https://arxiv.org/abs/2506.15872,Efficiency
- """[2506.15647] Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement""","llm, reasoning, efficiency",https://arxiv.org/abs/2506.15647,Efficiency
- """[2506.13216] Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law""","llm, scaling",https://arxiv.org/abs/2506.13216,Efficiency
- """[2506.13688] What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers""","llm, training",https://arxiv.org/abs/2506.13688,Efficiency
- """[2506.13674] Prefix-Tuning+: Modernizing Prefix-Tuning through Attention Independent Prefix Data""","llm, tuning",https://arxiv.org/abs/2506.13674,Efficiency
- """[2506.12119] Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?""","llm, MoE, efficiency",https://arxiv.org/abs/2506.12119,Efficiency
- """[2506.11769] Long-Short Alignment for Effective Long-Context Modeling in LLMs - ICML 2025""","llm, long-context",https://arxiv.org/abs/2506.11769,Efficiency
- """[2506.09251] Extrapolation by Association: Length Generalization Transfer in Transformers""","llm, generalization",https://arxiv.org/abs/2506.09251,Efficiency
- """[2506.08552] Efficient Post-Training Refinement of Latent Reasoning in Large Language Models""","llm, reasoning, efficiency",https://arxiv.org/abs/2506.08552,Efficiency
- """[2506.06609] Transferring Features Across Language Models With Model Stitching""","llm, transfer learning",https://arxiv.org/abs/2506.06609,Efficiency
- """[2506.06105] Text-to-LoRA: Instant Transformer Adaption""","llm, LoRA, adaption",https://arxiv.org/abs/2506.06105,Efficiency
- """[2506.06607] Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit""","tokenizer, llm",https://arxiv.org/abs/2506.06607,Efficiency
 
 title,keywords,url,type
+ [2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,efficiency
+ [2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,efficiency
+ [2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,efficiency
+ [2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,efficiency
+ [2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
+ [2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
+ [2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
+ [2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
+ [2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
+ [2506.00799] Uni-LoRA: One Vector is All You Need,"LoRA, efficient",https://arxiv.org/abs/2506.00799,efficiency
+ [2505.23657] Active Layer-Contrastive Decoding Reduces Hallucination in Large Language Model Generation,"hallucination, llm, decoding",https://arxiv.org/abs/2505.23657,efficiency
+ [2505.22689] SlimLLM: Accurate Structured Pruning for Large Language Models,"pruning, llm, efficiency",https://arxiv.org/abs/2505.22689,efficiency
+ [2506.18233] The 4th Dimension for Scaling Model Size,"scaling, efficiency",https://arxiv.org/abs/2506.18233,efficiency
+ [2506.15872] Hidden Breakthroughs in Language Model Training,"llm, training",https://arxiv.org/abs/2506.15872,efficiency
+ [2506.15647] Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.15647,efficiency
+ [2506.13216] Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law,"llm, scaling",https://arxiv.org/abs/2506.13216,efficiency
+ [2506.13688] What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers,"llm, training",https://arxiv.org/abs/2506.13688,efficiency
+ [2506.13674] Prefix-Tuning+: Modernizing Prefix-Tuning through Attention Independent Prefix Data,"llm, tuning",https://arxiv.org/abs/2506.13674,efficiency
+ [2506.12119] Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?,"llm, MoE, efficiency",https://arxiv.org/abs/2506.12119,efficiency
+ [2506.11769] Long-Short Alignment for Effective Long-Context Modeling in LLMs - ICML 2025,"llm, long-context",https://arxiv.org/abs/2506.11769,efficiency
+ [2506.09251] Extrapolation by Association: Length Generalization Transfer in Transformers,"llm, generalization",https://arxiv.org/abs/2506.09251,efficiency
+ [2506.08552] Efficient Post-Training Refinement of Latent Reasoning in Large Language Models,"llm, reasoning, efficiency",https://arxiv.org/abs/2506.08552,efficiency
+ [2506.06609] Transferring Features Across Language Models With Model Stitching,"llm, transfer learning",https://arxiv.org/abs/2506.06609,efficiency
+ [2506.06105] Text-to-LoRA: Instant Transformer Adaption,"llm, LoRA, adaption",https://arxiv.org/abs/2506.06105,efficiency
+ [2506.06607] Training-Free Tokenizer Transplantation via Orthogonal Matching Pursuit,"tokenizer, llm",https://arxiv.org/abs/2506.06607,efficiency
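Two list-hygiene issues are visible in this commit's data: the same paper can be appended twice (2505.14158 sat on consecutive rows before this edit), and the type column mixed spellings such as "efficiency"/"Efficiency" and "agent_rl"/"Agent/RL". A small validation sketch, assuming only the four-column layout above; the papers.csv file name and the printed report are illustrative, not part of the repository:

import csv
from collections import Counter

with open("papers.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Rows sharing an arXiv URL are duplicate entries.
dupes = [url for url, n in Counter(row["url"] for row in rows).items() if n > 1]

# Each category should use a single canonical label.
labels = sorted({row["type"] for row in rows})

print("duplicate urls:", dupes)
print("type labels in use:", labels)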
interpretability/papers.csv CHANGED
@@ -1,204 +1,204 @@
 title,keywords,url,type
- """[2210.01117] Omnigrok: Grokking Beyond Algorithmic Data""","grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
- """[2308.10248] Steering Language Models With Activation Engineering""",steer,https://arxiv.org/abs/2308.10248,interpretability
- """[2310.15213] Function Vectors in Large Language Models""","llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
- """[2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective""","steer, llm",https://arxiv.org/abs/2401.06824,interpretability
- """[2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning""","CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
- """[2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning""","CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
- """[2403.01590] The Hidden Attention of Mamba Models""","linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
- """[2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models""","SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
- """[2405.14860] Not All Language Model Features Are Linear""","SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
- """[2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization""","grok, llm",https://arxiv.org/abs/2405.15071,interpretability
- """[2406.11944] Transcoders Find Interpretable LLM Feature Circuits""",SAE,https://arxiv.org/abs/2406.11944,interpretability
- """[2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic interpretability Techniques""","interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
- """[2409.04185] Residual Stream Analysis with Multi-Layer SAEs""","SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
- """[2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks""",jailbreak,https://arxiv.org/abs/2410.04234,interpretability
- """[2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models""","SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
- """[2410.07656] Mechanistic Permutability: Match Features Across Layers""","SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
- """[2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering""",steer,https://arxiv.org/abs/2410.16314,interpretability
- """[2411.04330] Scaling Laws for Precision""","quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
- """[2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models""","hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
- """[2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words""",SAE,https://arxiv.org/abs/2501.06254,interpretability
- """[2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models""","steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
- """[2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity""","llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
- """[2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts""","steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
- """[2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention""","steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
- """[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis""","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
- """[2502.13632] Concept Layers: Enhancing interpretability and Intervenability via LLM Conceptualization""","concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
- """[2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?""","llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
- """[2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region""","llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
- """[2502.14010] Which Attention Heads Matter for In-Context Learning?""","llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
- """[2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information""","attention head, time",https://arxiv.org/abs/2502.14258,interpretability
- """[2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations""","mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
- """[2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization""","llm, interp",https://arxiv.org/abs/2502.15277,interpretability
- """[2502.15603] Do Multilingual LLMs Think In English?""","mllm, think",https://arxiv.org/abs/2502.15603,interpretability
- """[2502.17355] On Relation-Specific Neurons in Large Language Models""","neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
- """[2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence""","safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
- """[2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability""","SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
- """[2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation""","feature, llm",https://arxiv.org/abs/2503.02078,interpretability
- """[2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions""","steer, llm",https://arxiv.org/abs/2503.02989,interpretability
- """[2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions""","scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
- """[2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning""","test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
- """[2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models""","diffusion, llm, autogressive",https://arxiv.org/abs/2503.09573,interpretability
- """[2503.21073] Shared Global and Local Geometry of Language Model Embeddings""","geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
- """[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations""","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
- """[2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models""","representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
- """[2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction""","reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
- """[2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts""","direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
- """[2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers""","concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
- """[2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality""","sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
- """[2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition""","sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
- """[2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models""","repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
- """[2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning""","rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
- """[2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning""","edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
- """[2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context""","safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
- """[2504.02732] Why do LLMs attend to the first token?""","attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
- """[2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models""","sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
- """[2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models""","knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
- """[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence""","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
- """[2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders""","chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
- """[2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms""","r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
- """[2504.03022] The Dual-Route Model of Induction""","induction, llm",https://arxiv.org/abs/2504.03022,interpretability
- """[2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning""","scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
- """[2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs""","attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
- """[2504.03933] Language Models Are Implicitly Continuous""","llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
- """[2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic interpretability""","refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
- """[2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models""","llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
- """[2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models""","mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
- """[2504.04635] Steering off Course: Reliability Challenges in Steering Language Models""","steer, llm",https://arxiv.org/abs/2504.04635,interpretability
- """[2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs""","neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
- """[2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective""","repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
- """[2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models""","knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
- """[2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models""","steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
- """[2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models""","next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
- """[2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models""","diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
- """[2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement""","knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
- """[2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge""","knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
- """[2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models""","induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
- """[2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers""","causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
- """[2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations""","metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
- """[2505.13898] Do Language Models Use Their Depth Efficiently?""","model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
- """[2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering""","temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
- """[2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits""","tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
- """[2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study""","safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
- """[2505.14233] Mechanistic Fine-tuning for In-context Learning""","mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
- """[2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models""","attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
- """[2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability""","latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
- """[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis""","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
- """[2505.14467] Void in Language Models""","void, language models",https://arxiv.org/abs/2505.14467,interpretability
- """[2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders""","detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
- """[2505.14685] Language Models use Lookbacks to Track Beliefs""","lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
- """[2505.17630] GIM: Improved interpretability for Large Language Models""","llm, interp",https://arxiv.org/abs/2505.17630,interpretability
- """[2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality""","transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
- """[2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors""","steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
- """[2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning""","in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
- """[2505.17073] Mechanistic interpretability of GPT-like Models on Summarization Tasks""","mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
- """[2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models""","activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
- """[2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs""","interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
- """[2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations""","interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
- """[2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models""","interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
- """[2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models""","knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
- """[2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition""","attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
- """[2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings""","prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
- """[2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives""","pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
- """[2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace""","toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
- """[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models""","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
- """[2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?""","representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
- """[2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning""","attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
- """[2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing""","representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
- """[2505.18588] Safety Alignment via Constrained Knowledge Unlearning""","safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
- """[2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation""","steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
- """[2505.19488] Understanding Transformer from the Perspective of Associative Memory""","transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
- """[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior""","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
- """[2505.18235] The Origins of Representation Manifolds in Large Language Models""","representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
- """[2505.20063] SAEs Are Good for Steering -- If You Select the Right Features""","SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
- """[2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge""","llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
- """[2505.22586] Precise In-Parameter Concept Erasure in Large Language Models""","llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
- """[2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs""","llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
- """[2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs""","llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
- """[2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering""","llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
- """[2505.22572] Fusion Steering: Prompt-Specific Activation Control""","steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
- """[2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability""","llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
- """[2506.01034] Less is More: Local Intrinsic Dimensions of Contextual Language Models""","llm, dimension",https://arxiv.org/abs/2506.01034,Interpretability
- """[2506.01042] Probing Neural Topology of Large Language Models""","llm, probing, topology",https://arxiv.org/abs/2506.01042,Interpretability
- """[2506.01074] How Programming Concepts and Neurons Are Shared in Code Language Models - ACL 2025 Findings""","code, llm, neuron",https://arxiv.org/abs/2506.01074,Interpretability
- """[2506.01115] Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer""","attention, mlp, transformer",https://arxiv.org/abs/2506.01115,Interpretability
- """[2506.00653] Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models""","steer, llm, transferability",https://arxiv.org/abs/2506.00653,Interpretability
- """[2506.00382] Spectral Insights into Data-Oblivious Critical Layers in Large Language Models - ACL 2025 Findings""","llm, spectral",https://arxiv.org/abs/2506.00382,Interpretability
- """[2506.00085] COSMIC: Generalized Refusal Direction Identification in LLM Activations - ACL 2025 Findings""","llm, activation",https://arxiv.org/abs/2506.00085,Interpretability
- """[2506.00823] Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks - ACL 2025 Findings""","llm, probing, truth",https://arxiv.org/abs/2506.00823,Interpretability
- """[2506.00772] LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning - ICML 2025""","reasoning, fine-tuning",https://arxiv.org/abs/2506.00772,Interpretability
- """[2505.24293] Large Language Models are Locally Linear Mappings""","llm, linear",https://arxiv.org/abs/2505.24293,Interpretability
- """[2505.24731] Circuit Stability Characterizes Language Model Generalization""","llm, generalization, circuit",https://arxiv.org/abs/2505.24731,Interpretability
- """[2505.24832] How much do language models memorize?""","llm, memorize",https://arxiv.org/abs/2505.24832,Interpretability
- """[2505.24244] Mamba Knockout for Unraveling Factual Information Flow - ACL 2025""","mamba, factual",https://arxiv.org/abs/2505.24244,Interpretability
- """[2505.24688] Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration - ICML 2025""","reasoning, llm, embedding",https://arxiv.org/abs/2505.24688,Interpretability
- """[2505.24428] Model Unlearning via Sparse Autoencoder Subspace Guided Projections""","unlearning, sae",https://arxiv.org/abs/2505.24428,Interpretability
- """[2505.24009] Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws""","transformer, scaling laws",https://arxiv.org/abs/2505.24009,Interpretability
- """[2505.24539] Localizing Persona Representations in LLMs""","llm, persona",https://arxiv.org/abs/2505.24539,Interpretability
- """[2505.24535] Beyond Linear Steering: Unified Multi-Attribute Control for Language Models""","steer, llm",https://arxiv.org/abs/2505.24535,Interpretability
- """[2505.24362] Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion""","llm, CoT",https://arxiv.org/abs/2505.24362,Interpretability
- """[2505.24360] Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning""","diffusion, interp",https://arxiv.org/abs/2505.24360,Interpretability
- """[2505.23911] One Task Vector is not Enough: A Large-Scale Study for In-Context Learning""",in-context learning,https://arxiv.org/abs/2505.23911,Interpretability
- """[2505.24473] Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy""","sae, interp",https://arxiv.org/abs/2505.24473,Interpretability
- """[2505.23013] Scalable Complexity Control Facilitates Reasoning Ability of LLMs""","reasoning, llm, complexity",https://arxiv.org/abs/2505.23013,Interpretability
- """[2505.23556] Understanding Refusal in Language Models with Sparse Autoencoders""","refusal, llm, SAE",https://arxiv.org/abs/2505.23556,Interpretability
147
- """[2505.23653] How does Transformer Learn Implicit Reasoning?""","transformer, reasoning, implicit",https://arxiv.org/abs/2505.23653,Interpretability
148
- """[2505.23701] Can LLMs Reason Abstractly Over Math Word Problems Without CoT? Disentangling Abstract Formulation From Arithmetic Computation""","llm, reasoning, CoT",https://arxiv.org/abs/2505.23701,Interpretability
149
- """[2506.02132] Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models""","llm, interp, morphology",https://arxiv.org/abs/2506.02132,Interpretability
150
- """[2506.02996] Linear Spatial World Models Emerge in Large Language Models""","llm, world models, linear",https://arxiv.org/abs/2506.02996,Interpretability
151
- """[2506.02701] On Entity Identification in Language Models""","llm, entity identification",https://arxiv.org/abs/2506.02701,Interpretability
152
- """[2506.02867] Demystifying Reasoning Dynamics with Mutual Information: Thinking Tokens are Information Peaks in LLM Reasoning""","llm, reasoning, mutual information",https://arxiv.org/abs/2506.02867,Interpretability
153
- """[2506.03434] Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models""","llm, interp, mechanistic interpretability",https://arxiv.org/abs/2506.03434,Interpretability
154
- """[2506.04142] Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis""","llm, evaluation, shortcut neurons",https://arxiv.org/abs/2506.04142,Interpretability
155
- """[2506.17673] FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies""","SAE, interp, llm",https://arxiv.org/abs/2506.17673,Interpretability
156
- """[2506.18053] Mechanistic Interpretability in the Presence of Architectural Obfuscation""","interp, mechanistic",https://arxiv.org/abs/2506.18053,Interpretability
157
- """[2506.18167] Understanding Reasoning in Thinking Language Models via Steering Vectors - Neel Nanda""","steer, llm, reasoning",https://arxiv.org/abs/2506.18167,Interpretability
158
- """[2506.18141] Sparse Feature Coactivation Reveals Composable Semantic Modules in Large Language Models""","interp, llm, feature",https://arxiv.org/abs/2506.18141,Interpretability
159
- """[2506.18887] Steering Conceptual Bias via Transformer Latent-Subspace Activation""","steer, llm, bias",https://arxiv.org/abs/2506.18887,Interpretability
160
- """[2506.18852] Mechanistic Interpretability Needs Philosophy""","interp, mechanistic, philosophy",https://arxiv.org/abs/2506.18852,Interpretability
161
- """[2506.17859] In-Context Learning Strategies Emerge Rationally""","in-context learning, llm",https://arxiv.org/abs/2506.17859,Interpretability
162
- """[2506.16975] Latent Concept Disentanglement in Transformer-based Language Models""","llm, interp, disentanglement",https://arxiv.org/abs/2506.16975,Interpretability
163
- """[2506.16078] Probing the Robustness of Large Language Models Safety to Latent Perturbations""","llm, safety, robustness",https://arxiv.org/abs/2506.16078,Interpretability
164
- """[2506.16678] Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations""","llm, interp, syntax",https://arxiv.org/abs/2506.16678,Interpretability
165
- """[2506.17052] From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers""","llm, attention, interp",https://arxiv.org/abs/2506.17052,Interpretability
166
- """[2506.17090] Better Language Model Inversion by Compactly Representing Next-Token Distributions""","llm, inversion",https://arxiv.org/abs/2506.17090,Interpretability
167
- """[2506.15710] RAST: Reasoning Activation in LLMs via Small-model Transfer""","llm, reasoning",https://arxiv.org/abs/2506.15710,Interpretability
168
- """[2506.15735] ContextBench: Modifying Contexts for Targeted Latent Activation""","llm, activation, interp",https://arxiv.org/abs/2506.15735,Interpretability
169
- """[2506.16406] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights""","llm, prompting",https://arxiv.org/abs/2506.16406,Interpretability
170
- """[2506.15963] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond""","SAE, interp, theory",https://arxiv.org/abs/2506.15963,Interpretability
171
- """[2506.15679] Dense SAE Latents Are Features, Not Bugs""","SAE, interp, features",https://arxiv.org/abs/2506.15679,Interpretability
172
- """[2506.15606] LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning""","llm, safety, fine-tuning",https://arxiv.org/abs/2506.15606,Interpretability
173
- """[2506.12152] Because we have LLMs, we Can and Should Pursue Agentic Interpretability""","agent, interp, llm",https://arxiv.org/abs/2506.12152,Interpretability
174
- """[2506.12576] Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders""","llm, SAE, alignment",https://arxiv.org/abs/2506.12576,Interpretability
175
- """[2506.12217] From Emergence to Control: Probing and Modulating Self-Reflection in Language Models""","llm, control, interp",https://arxiv.org/abs/2506.12217,Interpretability
176
- """[2506.13206] Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models""","llm, safety, reasoning",https://arxiv.org/abs/2506.13206,Interpretability
177
- """[2506.12880] Universal Jailbreak Suffixes Are Strong Attention Hijackers""","llm, safety, attention",https://arxiv.org/abs/2506.12880,Interpretability
178
- """[2506.13752] Steering LLM Thinking with Budget Guidance""","steer, llm, reasoning",https://arxiv.org/abs/2506.13752,Interpretability
179
- """[2506.13734] Instruction Following by Boosting Attention of Large Language Models""","llm, attention",https://arxiv.org/abs/2506.13734,Interpretability
180
- """[2506.11618] Convergent Linear Representations of Emergent Misalignment - Neel Nanda""","llm, misalignment, interp",https://arxiv.org/abs/2506.11618,Interpretability
181
- """[2506.11613] Model Organisms for Emergent Misalignment - Neel Nanda""","llm, misalignment, interp",https://arxiv.org/abs/2506.11613,Interpretability
182
- """[2506.11976] How Visual Representations Map to Language Feature Space in Multimodal LLMs - Neel Nanda""","llm, multimodal, interp",https://arxiv.org/abs/2506.11976,Interpretability
183
- """[2506.11088] Two Birds with One Stone: Improving Factuality and Faithfulness of LLMs via Dynamic Interactive Subspace Editing""","llm, factuality, faithfulness",https://arxiv.org/abs/2506.11088,Interpretability
184
- """[2506.11123] Sparse Autoencoders Bridge The Deep Learning Model and The Brain""","SAE, interp",https://arxiv.org/abs/2506.11123,Interpretability
185
- """[2506.10641] Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters""","llm, tokenization",https://arxiv.org/abs/2506.10641,Interpretability
186
- """[2506.10920] Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization""","llm, interp, features",https://arxiv.org/abs/2506.10920,Interpretability
187
- """[2506.10887] Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers""","llm, reasoning, generalization",https://arxiv.org/abs/2506.10887,Interpretability
188
- """[2506.10922] Robustly Improving LLM Fairness in Realistic Settings via Interpretability""","llm, fairness, interp",https://arxiv.org/abs/2506.10922,Interpretability
189
- """[2506.09099] Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers""","llm, generalization, memorization",https://arxiv.org/abs/2506.09099,Interpretability
190
- """[2506.09277] Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models""","llm, interp, self-explanation",https://arxiv.org/abs/2506.09277,Interpretability
191
- """[2506.09890] The Emergence of Abstract Thought in Large Language Models Beyond Any Language""","llm, abstract thought",https://arxiv.org/abs/2506.09890,Interpretability
192
- """[2506.08427] Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models""","llm, interp, knowledge",https://arxiv.org/abs/2506.08427,Interpretability
193
- """[2506.08359] DEAL: Disentangling Transformer Head Activations for LLM Steering""","steer, llm, interp",https://arxiv.org/abs/2506.08359,Interpretability
194
- """[2506.08572] The Geometries of Truth Are Orthogonal Across Tasks""","llm, truth",https://arxiv.org/abs/2506.08572,Interpretability
195
- """[2506.08473] AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin""","llm, safety, fine-tuning",https://arxiv.org/abs/2506.08473,Interpretability
196
- """[2506.09048] Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations""","llm, in-context learning",https://arxiv.org/abs/2506.09048,Interpretability
197
- """[2506.08184] Unable to forget: Proactive lnterference Reveals Working Memory Limits in LLMs Beyond Context Length""","llm, memory",https://arxiv.org/abs/2506.08184,Interpretability
198
- """[2506.08966] Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers""","llm, representation",https://arxiv.org/abs/2506.08966,Interpretability
199
- """[2506.09047] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs""","llm, multimodal, interp",https://arxiv.org/abs/2506.09047,Interpretability
200
- """[2506.07406] InverseScope: Scalable Activation Inversion for Interpreting Large Language Models""","llm, interp, activation inversion",https://arxiv.org/abs/2506.07406,Interpretability
201
- """[2506.07691] Training Superior Sparse Autoencoders for Instruct Models""","SAE, llm, training",https://arxiv.org/abs/2506.07691,Interpretability
202
- """[2506.07335] Improving LLM Reasoning through Interpretable Role-Playing Steering""","steer, llm, reasoning",https://arxiv.org/abs/2506.07335,Interpretability
203
- """[2506.06686] Learning Distribution-Wise Control in Representation Space for Language Models""","llm, control, representation",https://arxiv.org/abs/2506.06686,Interpretability
204
- """[2506.07240] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs""","llm, reasoning, control",https://arxiv.org/abs/2506.07240,Interpretability
 
  title,keywords,url,type
+ [2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
+ [2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,interpretability
+ [2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
+ [2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,interpretability
+ [2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
+ [2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
+ [2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
+ [2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
+ [2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
+ [2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,interpretability
+ [2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,interpretability
+ [2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
+ [2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
+ [2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,interpretability
+ [2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
+ [2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
+ [2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,interpretability
+ [2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
+ [2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
+ [2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,interpretability
+ [2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
+ [2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
+ [2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
+ [2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
+ "[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
+ [2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
+ [2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
+ [2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
+ [2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
+ [2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,interpretability
+ [2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
+ [2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,interpretability
+ [2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,interpretability
+ [2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
+ [2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
+ [2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
+ [2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,interpretability
+ [2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,interpretability
+ [2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
+ [2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
+ [2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autoregressive",https://arxiv.org/abs/2503.09573,interpretability
+ [2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
+ "[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
+ [2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
+ [2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
+ [2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
+ [2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
+ [2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
+ [2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
+ [2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
+ [2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
+ [2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
+ [2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
+ [2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
+ [2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
+ [2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
+ "[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
+ [2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
+ [2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
+ [2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,interpretability
+ [2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
+ [2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
+ [2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
+ [2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
+ [2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
+ [2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
+ [2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,interpretability
+ [2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
+ [2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
+ [2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
+ [2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
+ [2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
+ [2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
+ [2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
+ [2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
+ [2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
+ [2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
+ [2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
+ [2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
+ [2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
+ [2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
+ [2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
+ [2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
+ [2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
+ "[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
+ [2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,interpretability
+ [2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
+ [2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
+ [2505.17630] GIM: Improved Interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,interpretability
+ [2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
+ [2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
+ [2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
+ [2505.17073] Mechanistic Interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
+ [2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
+ [2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
+ [2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
+ [2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
+ [2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
+ [2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
+ [2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
+ [2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
+ [2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
+ "[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
+ [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
+ [2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
+ [2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
+ [2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
+ [2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
+ [2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
+ "[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
+ [2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
+ [2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
+ [2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
+ [2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
+ [2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
+ [2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
+ [2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
+ [2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
+ [2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
+ [2506.01034] Less is More: Local Intrinsic Dimensions of Contextual Language Models,"llm, dimension",https://arxiv.org/abs/2506.01034,Interpretability
+ [2506.01042] Probing Neural Topology of Large Language Models,"llm, probing, topology",https://arxiv.org/abs/2506.01042,Interpretability
+ [2506.01074] How Programming Concepts and Neurons Are Shared in Code Language Models - ACL 2025 Findings,"code, llm, neuron",https://arxiv.org/abs/2506.01074,Interpretability
+ "[2506.01115] Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer","attention, mlp, transformer",https://arxiv.org/abs/2506.01115,Interpretability
+ [2506.00653] Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models,"steer, llm, transferability",https://arxiv.org/abs/2506.00653,Interpretability
+ [2506.00382] Spectral Insights into Data-Oblivious Critical Layers in Large Language Models - ACL 2025 Findings,"llm, spectral",https://arxiv.org/abs/2506.00382,Interpretability
+ [2506.00085] COSMIC: Generalized Refusal Direction Identification in LLM Activations - ACL 2025 Findings,"llm, activation",https://arxiv.org/abs/2506.00085,Interpretability
+ [2506.00823] Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks - ACL 2025 Findings,"llm, probing, truth",https://arxiv.org/abs/2506.00823,Interpretability
+ [2506.00772] LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning - ICML 2025,"reasoning, fine-tuning",https://arxiv.org/abs/2506.00772,Interpretability
+ [2505.24293] Large Language Models are Locally Linear Mappings,"llm, linear",https://arxiv.org/abs/2505.24293,Interpretability
+ [2505.24731] Circuit Stability Characterizes Language Model Generalization,"llm, generalization, circuit",https://arxiv.org/abs/2505.24731,Interpretability
+ [2505.24832] How much do language models memorize?,"llm, memorize",https://arxiv.org/abs/2505.24832,Interpretability
+ [2505.24244] Mamba Knockout for Unraveling Factual Information Flow - ACL 2025,"mamba, factual",https://arxiv.org/abs/2505.24244,Interpretability
+ [2505.24688] Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration - ICML 2025,"reasoning, llm, embedding",https://arxiv.org/abs/2505.24688,Interpretability
+ [2505.24428] Model Unlearning via Sparse Autoencoder Subspace Guided Projections,"unlearning, sae",https://arxiv.org/abs/2505.24428,Interpretability
+ [2505.24009] Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws,"transformer, scaling laws",https://arxiv.org/abs/2505.24009,Interpretability
+ [2505.24539] Localizing Persona Representations in LLMs,"llm, persona",https://arxiv.org/abs/2505.24539,Interpretability
+ [2505.24535] Beyond Linear Steering: Unified Multi-Attribute Control for Language Models,"steer, llm",https://arxiv.org/abs/2505.24535,Interpretability
+ [2505.24362] Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion,"llm, CoT",https://arxiv.org/abs/2505.24362,Interpretability
+ [2505.24360] Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning,"diffusion, interp",https://arxiv.org/abs/2505.24360,Interpretability
+ [2505.23911] One Task Vector is not Enough: A Large-Scale Study for In-Context Learning,in-context learning,https://arxiv.org/abs/2505.23911,Interpretability
+ [2505.24473] Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy,"sae, interp",https://arxiv.org/abs/2505.24473,Interpretability
+ [2505.23013] Scalable Complexity Control Facilitates Reasoning Ability of LLMs,"reasoning, llm, complexity",https://arxiv.org/abs/2505.23013,Interpretability
+ [2505.23556] Understanding Refusal in Language Models with Sparse Autoencoders,"refusal, llm, SAE",https://arxiv.org/abs/2505.23556,Interpretability
+ [2505.23653] How does Transformer Learn Implicit Reasoning?,"transformer, reasoning, implicit",https://arxiv.org/abs/2505.23653,Interpretability
+ [2505.23701] Can LLMs Reason Abstractly Over Math Word Problems Without CoT? Disentangling Abstract Formulation From Arithmetic Computation,"llm, reasoning, CoT",https://arxiv.org/abs/2505.23701,Interpretability
+ [2506.02132] Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models,"llm, interp, morphology",https://arxiv.org/abs/2506.02132,Interpretability
+ [2506.02996] Linear Spatial World Models Emerge in Large Language Models,"llm, world models, linear",https://arxiv.org/abs/2506.02996,Interpretability
+ [2506.02701] On Entity Identification in Language Models,"llm, entity identification",https://arxiv.org/abs/2506.02701,Interpretability
+ [2506.02867] Demystifying Reasoning Dynamics with Mutual Information: Thinking Tokens are Information Peaks in LLM Reasoning,"llm, reasoning, mutual information",https://arxiv.org/abs/2506.02867,Interpretability
+ [2506.03434] Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models,"llm, interp, mechanistic interpretability",https://arxiv.org/abs/2506.03434,Interpretability
+ [2506.04142] Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis,"llm, evaluation, shortcut neurons",https://arxiv.org/abs/2506.04142,Interpretability
+ [2506.17673] FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies,"SAE, interp, llm",https://arxiv.org/abs/2506.17673,Interpretability
+ [2506.18053] Mechanistic Interpretability in the Presence of Architectural Obfuscation,"interp, mechanistic",https://arxiv.org/abs/2506.18053,Interpretability
+ [2506.18167] Understanding Reasoning in Thinking Language Models via Steering Vectors - Neel Nanda,"steer, llm, reasoning",https://arxiv.org/abs/2506.18167,Interpretability
+ [2506.18141] Sparse Feature Coactivation Reveals Composable Semantic Modules in Large Language Models,"interp, llm, feature",https://arxiv.org/abs/2506.18141,Interpretability
+ [2506.18887] Steering Conceptual Bias via Transformer Latent-Subspace Activation,"steer, llm, bias",https://arxiv.org/abs/2506.18887,Interpretability
+ [2506.18852] Mechanistic Interpretability Needs Philosophy,"interp, mechanistic, philosophy",https://arxiv.org/abs/2506.18852,Interpretability
+ [2506.17859] In-Context Learning Strategies Emerge Rationally,"in-context learning, llm",https://arxiv.org/abs/2506.17859,Interpretability
+ [2506.16975] Latent Concept Disentanglement in Transformer-based Language Models,"llm, interp, disentanglement",https://arxiv.org/abs/2506.16975,Interpretability
+ [2506.16078] Probing the Robustness of Large Language Models Safety to Latent Perturbations,"llm, safety, robustness",https://arxiv.org/abs/2506.16078,Interpretability
+ [2506.16678] Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations,"llm, interp, syntax",https://arxiv.org/abs/2506.16678,Interpretability
+ [2506.17052] From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers,"llm, attention, interp",https://arxiv.org/abs/2506.17052,Interpretability
+ [2506.17090] Better Language Model Inversion by Compactly Representing Next-Token Distributions,"llm, inversion",https://arxiv.org/abs/2506.17090,Interpretability
+ [2506.15710] RAST: Reasoning Activation in LLMs via Small-model Transfer,"llm, reasoning",https://arxiv.org/abs/2506.15710,Interpretability
+ [2506.15735] ContextBench: Modifying Contexts for Targeted Latent Activation,"llm, activation, interp",https://arxiv.org/abs/2506.15735,Interpretability
+ [2506.16406] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights,"llm, prompting",https://arxiv.org/abs/2506.16406,Interpretability
+ [2506.15963] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond,"SAE, interp, theory",https://arxiv.org/abs/2506.15963,Interpretability
+ "[2506.15679] Dense SAE Latents Are Features, Not Bugs","SAE, interp, features",https://arxiv.org/abs/2506.15679,Interpretability
+ [2506.15606] LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.15606,Interpretability
+ "[2506.12152] Because we have LLMs, we Can and Should Pursue Agentic Interpretability","agent, interp, llm",https://arxiv.org/abs/2506.12152,Interpretability
+ [2506.12576] Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders,"llm, SAE, alignment",https://arxiv.org/abs/2506.12576,Interpretability
+ [2506.12217] From Emergence to Control: Probing and Modulating Self-Reflection in Language Models,"llm, control, interp",https://arxiv.org/abs/2506.12217,Interpretability
+ [2506.13206] Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models,"llm, safety, reasoning",https://arxiv.org/abs/2506.13206,Interpretability
+ [2506.12880] Universal Jailbreak Suffixes Are Strong Attention Hijackers,"llm, safety, attention",https://arxiv.org/abs/2506.12880,Interpretability
+ [2506.13752] Steering LLM Thinking with Budget Guidance,"steer, llm, reasoning",https://arxiv.org/abs/2506.13752,Interpretability
+ [2506.13734] Instruction Following by Boosting Attention of Large Language Models,"llm, attention",https://arxiv.org/abs/2506.13734,Interpretability
+ [2506.11618] Convergent Linear Representations of Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11618,Interpretability
+ [2506.11613] Model Organisms for Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11613,Interpretability
+ [2506.11976] How Visual Representations Map to Language Feature Space in Multimodal LLMs - Neel Nanda,"llm, multimodal, interp",https://arxiv.org/abs/2506.11976,Interpretability
+ [2506.11088] Two Birds with One Stone: Improving Factuality and Faithfulness of LLMs via Dynamic Interactive Subspace Editing,"llm, factuality, faithfulness",https://arxiv.org/abs/2506.11088,Interpretability
+ [2506.11123] Sparse Autoencoders Bridge The Deep Learning Model and The Brain,"SAE, interp",https://arxiv.org/abs/2506.11123,Interpretability
+ [2506.10641] Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters,"llm, tokenization",https://arxiv.org/abs/2506.10641,Interpretability
+ [2506.10920] Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization,"llm, interp, features",https://arxiv.org/abs/2506.10920,Interpretability
+ [2506.10887] Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers,"llm, reasoning, generalization",https://arxiv.org/abs/2506.10887,Interpretability
+ [2506.10922] Robustly Improving LLM Fairness in Realistic Settings via Interpretability,"llm, fairness, interp",https://arxiv.org/abs/2506.10922,Interpretability
+ "[2506.09099] Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers","llm, generalization, memorization",https://arxiv.org/abs/2506.09099,Interpretability
+ [2506.09277] Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models,"llm, interp, self-explanation",https://arxiv.org/abs/2506.09277,Interpretability
+ [2506.09890] The Emergence of Abstract Thought in Large Language Models Beyond Any Language,"llm, abstract thought",https://arxiv.org/abs/2506.09890,Interpretability
+ [2506.08427] Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models,"llm, interp, knowledge",https://arxiv.org/abs/2506.08427,Interpretability
+ [2506.08359] DEAL: Disentangling Transformer Head Activations for LLM Steering,"steer, llm, interp",https://arxiv.org/abs/2506.08359,Interpretability
+ [2506.08572] The Geometries of Truth Are Orthogonal Across Tasks,"llm, truth",https://arxiv.org/abs/2506.08572,Interpretability
+ [2506.08473] AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.08473,Interpretability
+ "[2506.09048] Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations","llm, in-context learning",https://arxiv.org/abs/2506.09048,Interpretability
+ [2506.08184] Unable to forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length,"llm, memory",https://arxiv.org/abs/2506.08184,Interpretability
+ [2506.08966] Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers,"llm, representation",https://arxiv.org/abs/2506.08966,Interpretability
+ "[2506.09047] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs","llm, multimodal, interp",https://arxiv.org/abs/2506.09047,Interpretability
+ [2506.07406] InverseScope: Scalable Activation Inversion for Interpreting Large Language Models,"llm, interp, activation inversion",https://arxiv.org/abs/2506.07406,Interpretability
+ [2506.07691] Training Superior Sparse Autoencoders for Instruct Models,"SAE, llm, training",https://arxiv.org/abs/2506.07691,Interpretability
+ [2506.07335] Improving LLM Reasoning through Interpretable Role-Playing Steering,"steer, llm, reasoning",https://arxiv.org/abs/2506.07335,Interpretability
+ [2506.06686] Learning Distribution-Wise Control in Representation Space for Language Models,"llm, control, representation",https://arxiv.org/abs/2506.06686,Interpretability
+ [2506.07240] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs,"llm, reasoning, control",https://arxiv.org/abs/2506.07240,Interpretability
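
For reference, the change applied throughout both files above is a mechanical re-quoting of the CSV: fields that had been serialized with a stray layer of literal quotes (the old """Title""" form) are rewritten with standard minimal quoting, so a field is quoted only when it contains a comma or an embedded quote character. Below is a minimal sketch of an equivalent transformation, assuming each affected field carries exactly one stray quote layer; the helper name normalize_quoting is illustrative, not necessarily the script used for this upload.

import csv
import io

def normalize_quoting(malformed: str) -> str:
    # csv.reader resolves the doubled quotes, so a field serialized as
    # """Title""" parses to the value "Title" with one stray literal
    # quote at each end.
    reader = csv.reader(io.StringIO(malformed))
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    for row in reader:
        # Strip the single stray quote layer, then let the writer re-quote
        # only fields that genuinely need it (embedded commas or quotes).
        writer.writerow([
            field[1:-1] if len(field) >= 2 and field[0] == field[-1] == '"' else field
            for field in row
        ])
    return buf.getvalue()

# One row from this diff, before normalization:
old_row = ('"""[2505.18672] Does Representation Intervention Really Identify '
           'Desired Concepts and Elicit Alignment?""",'
           '"representation intervention, alignment",'
           'https://arxiv.org/abs/2505.18672,interpretability\n')
print(normalize_quoting(old_row), end="")
# [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability

Running this over the "-" rows reproduces the corresponding "+" rows: titles without commas lose their quoting entirely, while keyword lists keep theirs.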