title,keywords,url,type
[2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
[2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,interpretability
[2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
[2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,interpretability
[2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
[2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
[2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
[2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
[2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
[2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,interpretability
[2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,interpretability
[2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
[2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
[2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,interpretability
[2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
[2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
[2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,interpretability
[2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
[2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
[2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,interpretability
[2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
[2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
[2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
[2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
"[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
[2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
[2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
[2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
[2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
[2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,interpretability
[2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
[2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,interpretability
[2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,interpretability
[2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
[2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
[2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
[2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,interpretability
[2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,interpretability
[2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
[2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
[2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autoregressive",https://arxiv.org/abs/2503.09573,interpretability
[2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
"[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
[2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
[2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
[2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
[2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
[2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"SAE, evaluation",https://arxiv.org/abs/2503.24277,interpretability
[2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
[2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
[2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
[2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
[2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
[2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
[2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"SAE, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
[2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
"[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
[2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
[2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, test time scaling, aha",https://arxiv.org/abs/2504.02956,interpretability
[2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,interpretability
[2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
[2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
[2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
[2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mechanistic interpretability",https://arxiv.org/abs/2504.04215,interpretability
[2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
[2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
[2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,interpretability
[2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
[2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, SAE, feature",https://arxiv.org/abs/2504.14218,interpretability
[2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
[2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
[2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
[2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
[2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
[2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
[2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
[2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
[2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
[2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
[2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
[2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
[2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
[2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
[2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
[2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
"[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
[2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,interpretability
[2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
[2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
[2505.17630] GIM: Improved Interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,interpretability
[2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
[2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
[2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
[2505.17073] Mechanistic Interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
[2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
[2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
[2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
[2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
[2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
[2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
[2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
[2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
[2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
"[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
[2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
[2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
[2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
[2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
[2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
[2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
"[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
[2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
[2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
[2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
[2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
[2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
[2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
[2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
[2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
[2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
[2506.01034] Less is More: Local Intrinsic Dimensions of Contextual Language Models,"llm, dimension",https://arxiv.org/abs/2506.01034,interpretability
[2506.01042] Probing Neural Topology of Large Language Models,"llm, probing, topology",https://arxiv.org/abs/2506.01042,interpretability
[2506.01074] How Programming Concepts and Neurons Are Shared in Code Language Models - ACL 2025 Findings,"code, llm, neuron",https://arxiv.org/abs/2506.01074,interpretability
"[2506.01115] Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer","attention, mlp, transformer",https://arxiv.org/abs/2506.01115,interpretability
[2506.00653] Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models,"steer, llm, transferability",https://arxiv.org/abs/2506.00653,interpretability
[2506.00382] Spectral Insights into Data-Oblivious Critical Layers in Large Language Models - ACL 2025 Findings,"llm, spectral",https://arxiv.org/abs/2506.00382,interpretability
[2506.00085] COSMIC: Generalized Refusal Direction Identification in LLM Activations - ACL 2025 Findings,"llm, activation",https://arxiv.org/abs/2506.00085,interpretability
[2506.00823] Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks - ACL 2025 Findings,"llm, probing, truth",https://arxiv.org/abs/2506.00823,interpretability
[2506.00772] LIFT the Veil for the Truth: Principal Weights Emerge after Rank Reduction for Reasoning-Focused Supervised Fine-Tuning - ICML 2025,"reasoning, fine-tuning",https://arxiv.org/abs/2506.00772,interpretability
[2505.24293] Large Language Models are Locally Linear Mappings,"llm, linear",https://arxiv.org/abs/2505.24293,interpretability
[2505.24731] Circuit Stability Characterizes Language Model Generalization,"llm, generalization, circuit",https://arxiv.org/abs/2505.24731,interpretability
[2505.24832] How much do language models memorize?,"llm, memorize",https://arxiv.org/abs/2505.24832,interpretability
[2505.24244] Mamba Knockout for Unraveling Factual Information Flow - ACL 2025,"mamba, factual",https://arxiv.org/abs/2505.24244,interpretability
[2505.24688] Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration - ICML 2025,"reasoning, llm, embedding",https://arxiv.org/abs/2505.24688,interpretability
[2505.24428] Model Unlearning via Sparse Autoencoder Subspace Guided Projections,"unlearning, SAE",https://arxiv.org/abs/2505.24428,interpretability
[2505.24009] Diversity of Transformer Layers: One Aspect of Parameter Scaling Laws,"transformer, scaling laws",https://arxiv.org/abs/2505.24009,interpretability
[2505.24539] Localizing Persona Representations in LLMs,"llm, persona",https://arxiv.org/abs/2505.24539,interpretability
[2505.24535] Beyond Linear Steering: Unified Multi-Attribute Control for Language Models,"steer, llm",https://arxiv.org/abs/2505.24535,interpretability
[2505.24362] Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion,"llm, CoT",https://arxiv.org/abs/2505.24362,interpretability
[2505.24360] Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning,"diffusion, interp",https://arxiv.org/abs/2505.24360,interpretability
[2505.23911] One Task Vector is not Enough: A Large-Scale Study for In-Context Learning,in-context learning,https://arxiv.org/abs/2505.23911,interpretability
[2505.24473] Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy,"SAE, interp",https://arxiv.org/abs/2505.24473,interpretability
[2505.23013] Scalable Complexity Control Facilitates Reasoning Ability of LLMs,"reasoning, llm, complexity",https://arxiv.org/abs/2505.23013,interpretability
[2505.23556] Understanding Refusal in Language Models with Sparse Autoencoders,"refusal, llm, SAE",https://arxiv.org/abs/2505.23556,interpretability
[2505.23653] How does Transformer Learn Implicit Reasoning?,"transformer, reasoning, implicit",https://arxiv.org/abs/2505.23653,interpretability
[2505.23701] Can LLMs Reason Abstractly Over Math Word Problems Without CoT? Disentangling Abstract Formulation From Arithmetic Computation,"llm, reasoning, CoT",https://arxiv.org/abs/2505.23701,interpretability
[2506.02132] Model Internal Sleuthing: Finding Lexical Identity and Inflectional Morphology in Modern Language Models,"llm, interp, morphology",https://arxiv.org/abs/2506.02132,interpretability
[2506.02996] Linear Spatial World Models Emerge in Large Language Models,"llm, world models, linear",https://arxiv.org/abs/2506.02996,interpretability
[2506.02701] On Entity Identification in Language Models,"llm, entity identification",https://arxiv.org/abs/2506.02701,interpretability
[2506.02867] Demystifying Reasoning Dynamics with Mutual Information: Thinking Tokens are Information Peaks in LLM Reasoning,"llm, reasoning, mutual information",https://arxiv.org/abs/2506.02867,interpretability
[2506.03434] Time Course MechInterp: Analyzing the Evolution of Components and Knowledge in Large Language Models,"llm, interp, mechanistic interpretability",https://arxiv.org/abs/2506.03434,interpretability
[2506.04142] Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis,"llm, evaluation, shortcut neurons",https://arxiv.org/abs/2506.04142,interpretability
[2506.17673] FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies,"SAE, interp, llm",https://arxiv.org/abs/2506.17673,interpretability
[2506.18053] Mechanistic Interpretability in the Presence of Architectural Obfuscation,"interp, mechanistic",https://arxiv.org/abs/2506.18053,interpretability
[2506.18167] Understanding Reasoning in Thinking Language Models via Steering Vectors - Neel Nanda,"steer, llm, reasoning",https://arxiv.org/abs/2506.18167,interpretability
[2506.18141] Sparse Feature Coactivation Reveals Composable Semantic Modules in Large Language Models,"interp, llm, feature",https://arxiv.org/abs/2506.18141,interpretability
[2506.18887] Steering Conceptual Bias via Transformer Latent-Subspace Activation,"steer, llm, bias",https://arxiv.org/abs/2506.18887,interpretability
[2506.18852] Mechanistic Interpretability Needs Philosophy,"interp, mechanistic, philosophy",https://arxiv.org/abs/2506.18852,interpretability
[2506.17859] In-Context Learning Strategies Emerge Rationally,"in-context learning, llm",https://arxiv.org/abs/2506.17859,interpretability
[2506.16975] Latent Concept Disentanglement in Transformer-based Language Models,"llm, interp, disentanglement",https://arxiv.org/abs/2506.16975,interpretability
[2506.16078] Probing the Robustness of Large Language Models Safety to Latent Perturbations,"llm, safety, robustness",https://arxiv.org/abs/2506.16078,interpretability
[2506.16678] Mechanisms vs. Outcomes: Probing for Syntax Fails to Explain Performance on Targeted Syntactic Evaluations,"llm, interp, syntax",https://arxiv.org/abs/2506.16678,interpretability
[2506.17052] From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers,"llm, attention, interp",https://arxiv.org/abs/2506.17052,interpretability
[2506.17090] Better Language Model Inversion by Compactly Representing Next-Token Distributions,"llm, inversion",https://arxiv.org/abs/2506.17090,interpretability
[2506.15710] RAST: Reasoning Activation in LLMs via Small-model Transfer,"llm, reasoning",https://arxiv.org/abs/2506.15710,interpretability
[2506.15735] ContextBench: Modifying Contexts for Targeted Latent Activation,"llm, activation, interp",https://arxiv.org/abs/2506.15735,interpretability
[2506.16406] Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights,"llm, prompting",https://arxiv.org/abs/2506.16406,interpretability
[2506.15963] On the Theoretical Understanding of Identifiable Sparse Autoencoders and Beyond,"SAE, interp, theory",https://arxiv.org/abs/2506.15963,interpretability
"[2506.15679] Dense SAE Latents Are Features, Not Bugs","SAE, interp, features",https://arxiv.org/abs/2506.15679,interpretability
[2506.15606] LoX: Low-Rank Extrapolation Robustifies LLM Safety Against Fine-tuning,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.15606,interpretability
"[2506.12152] Because we have LLMs, we Can and Should Pursue Agentic Interpretability","agent, interp, llm",https://arxiv.org/abs/2506.12152,interpretability
[2506.12576] Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders,"llm, SAE, alignment",https://arxiv.org/abs/2506.12576,interpretability
[2506.12217] From Emergence to Control: Probing and Modulating Self-Reflection in Language Models,"llm, control, interp",https://arxiv.org/abs/2506.12217,interpretability
[2506.13206] Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models,"llm, safety, reasoning",https://arxiv.org/abs/2506.13206,interpretability
[2506.12880] Universal Jailbreak Suffixes Are Strong Attention Hijackers,"llm, safety, attention",https://arxiv.org/abs/2506.12880,interpretability
[2506.13752] Steering LLM Thinking with Budget Guidance,"steer, llm, reasoning",https://arxiv.org/abs/2506.13752,interpretability
[2506.13734] Instruction Following by Boosting Attention of Large Language Models,"llm, attention",https://arxiv.org/abs/2506.13734,interpretability
[2506.11618] Convergent Linear Representations of Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11618,interpretability
[2506.11613] Model Organisms for Emergent Misalignment - Neel Nanda,"llm, misalignment, interp",https://arxiv.org/abs/2506.11613,interpretability
[2506.11976] How Visual Representations Map to Language Feature Space in Multimodal LLMs - Neel Nanda,"llm, multimodal, interp",https://arxiv.org/abs/2506.11976,interpretability
[2506.11088] Two Birds with One Stone: Improving Factuality and Faithfulness of LLMs via Dynamic Interactive Subspace Editing,"llm, factuality, faithfulness",https://arxiv.org/abs/2506.11088,interpretability
[2506.11123] Sparse Autoencoders Bridge The Deep Learning Model and The Brain,"SAE, interp",https://arxiv.org/abs/2506.11123,interpretability
[2506.10641] Spelling-out is not Straightforward: LLMs' Capability of Tokenization from Token to Characters,"llm, tokenization",https://arxiv.org/abs/2506.10641,interpretability
[2506.10920] Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization,"llm, interp, features",https://arxiv.org/abs/2506.10920,interpretability
[2506.10887] Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers,"llm, reasoning, generalization",https://arxiv.org/abs/2506.10887,interpretability
[2506.10922] Robustly Improving LLM Fairness in Realistic Settings via Interpretability,"llm, fairness, interp",https://arxiv.org/abs/2506.10922,interpretability
"[2506.09099] Too Big to Think: Capacity, Memorization, and Generalization in Pre-Trained Transformers","llm, generalization, memorization",https://arxiv.org/abs/2506.09099,interpretability
[2506.09277] Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models,"llm, interp, self-explanation",https://arxiv.org/abs/2506.09277,interpretability
[2506.09890] The Emergence of Abstract Thought in Large Language Models Beyond Any Language,"llm, abstract thought",https://arxiv.org/abs/2506.09890,interpretability
[2506.08427] Know-MRI: A Knowledge Mechanisms Revealer&Interpreter for Large Language Models,"llm, interp, knowledge",https://arxiv.org/abs/2506.08427,interpretability
[2506.08359] DEAL: Disentangling Transformer Head Activations for LLM Steering,"steer, llm, interp",https://arxiv.org/abs/2506.08359,interpretability
[2506.08572] The Geometries of Truth Are Orthogonal Across Tasks,"llm, truth",https://arxiv.org/abs/2506.08572,interpretability
[2506.08473] AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin,"llm, safety, fine-tuning",https://arxiv.org/abs/2506.08473,interpretability
"[2506.09048] Understanding Task Vectors in In-Context Learning: Emergence, Functionality, and Limitations","llm, in-context learning",https://arxiv.org/abs/2506.09048,interpretability
[2506.08184] Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length,"llm, memory",https://arxiv.org/abs/2506.08184,interpretability
[2506.08966] Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers,"llm, representation",https://arxiv.org/abs/2506.08966,interpretability
"[2506.09047] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs","llm, multimodal, interp",https://arxiv.org/abs/2506.09047,interpretability
[2506.07406] InverseScope: Scalable Activation Inversion for Interpreting Large Language Models,"llm, interp, activation inversion",https://arxiv.org/abs/2506.07406,interpretability
[2506.07691] Training Superior Sparse Autoencoders for Instruct Models,"SAE, llm, training",https://arxiv.org/abs/2506.07691,interpretability
[2506.07335] Improving LLM Reasoning through Interpretable Role-Playing Steering,"steer, llm, reasoning",https://arxiv.org/abs/2506.07335,interpretability
[2506.06686] Learning Distribution-Wise Control in Representation Space for Language Models,"llm, control, representation",https://arxiv.org/abs/2506.06686,interpretability
[2506.07240] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs,"llm, reasoning, control",https://arxiv.org/abs/2506.07240,interpretability