MikaStars39 committed (verified)
Commit: 19cf168
Parent(s): 4e728be

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,4 +1,21 @@
 ---
+ configs:
+ - config_name: all
+   data_files:
+   - split: train
+     path: all/*
+ - config_name: efficiency
+   data_files:
+   - split: train
+     path: efficiency/*
+ - config_name: interpretability
+   data_files:
+   - split: train
+     path: interpretability/*
+ - config_name: agent_rl
+   data_files:
+   - split: train
+     path: agent_rl/*
 license: mit
 size_categories:
 - n<1K
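These `configs` map each subfolder's CSV to a named subset with a single `train` split, so each category can be loaded directly with the `datasets` library. A minimal sketch (the repo id below is a placeholder; the dataset's actual id is not shown in this commit):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset id this commit belongs to.
REPO_ID = "MikaStars39/papers"

# Config names come from the README above: all, efficiency,
# interpretability, agent_rl; each exposes a single "train" split.
papers = load_dataset(REPO_ID, "interpretability", split="train")

print(papers.column_names)   # ['title', 'keywords', 'url', 'type']
print(papers[0]["title"])    # first paper title in the subset
```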
agent_rl/papers.csv ADDED
@@ -0,0 +1,4 @@
+ title,keywords,url,type
+ [2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
+ "[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
+ [2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
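Every `papers.csv` in this commit shares the same four columns (`title`, `keywords`, `url`, `type`), with `keywords` stored as one comma-separated string. A quick inspection sketch using pandas, assuming a local checkout of the repo:

```python
import pandas as pd

# Shared schema across all papers.csv files: title, keywords, url, type.
df = pd.read_csv("agent_rl/papers.csv")

# keywords is a single comma-separated string per row;
# split it into a proper list for filtering.
df["keywords"] = df["keywords"].str.split(", ")

print(df[["title", "url"]].to_string(index=False))
```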
all/papers.csv ADDED
@@ -0,0 +1,133 @@
+ title,keywords,url,type
+ [2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
+ [2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,interpretability
+ [2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
+ [2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,interpretability
+ [2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,interpretability
+ [2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,interpretability
+ [2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
+ [2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
+ [2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
+ [2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
+ [2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
+ [2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,interpretability
+ [2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,interpretability
+ [2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,interpretability
+ [2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,interpretability
+ [2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
+ [2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
+ [2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,interpretability
+ [2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
+ [2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
+ [2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,interpretability
+ [2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
+ [2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
+ [2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,interpretability
+ [2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
+ [2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
+ [2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
+ [2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
+ "[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
+ [2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
+ [2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
+ [2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
+ [2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
+ [2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,interpretability
+ [2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
+ [2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,interpretability
+ [2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,interpretability
+ [2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
+ [2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
+ [2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
+ [2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,interpretability
+ [2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,interpretability
+ [2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
+ [2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
+ [2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autoregressive",https://arxiv.org/abs/2503.09573,interpretability
+ [2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
+ "[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
+ [2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
+ [2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
+ [2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
+ [2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
+ [2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
+ [2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
+ [2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
+ [2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
+ [2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
+ [2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
+ [2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
+ [2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
+ [2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
+ "[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
+ [2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
+ [2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
+ [2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,interpretability
+ [2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
+ [2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
+ [2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
+ [2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
+ [2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
+ [2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
+ [2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,interpretability
+ [2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
+ [2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
+ [2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
+ [2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
+ [2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
+ [2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
+ [2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
+ [2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
+ [2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
+ [2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
+ [2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
+ [2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
+ [2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
+ [2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
+ [2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
+ [2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
+ [2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
+ "[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
+ [2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,interpretability
+ [2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
+ [2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
+ [2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,agent_rl
+ "[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,agent_rl
+ [2505.17630] GIM: Improved Interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,interpretability
+ [2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
+ [2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
+ [2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
+ [2505.17073] Mechanistic Interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
+ [2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
+ [2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
+ [2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
+ [2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
+ [2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
+ [2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
+ [2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
+ [2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
+ [2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
+ "[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
+ [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
+ [2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
+ [2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
+ [2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
+ [2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
+ [2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
+ "[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
+ [2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
+ [2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
+ [2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
+ [2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
+ [2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
+ [2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
+ [2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
+ [2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
+ [2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
+ [2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
+ [2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
+ [2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
+ [2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability
+ [2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
+ [2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,agent_rl
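Judging by the rows above, `all/papers.csv` is the union of the per-category files, with the `type` column recording each paper's category; the category files should therefore be recoverable by filtering on that column. A sketch under that assumption:

```python
import pandas as pd

all_papers = pd.read_csv("all/papers.csv")

# Each category file (e.g. efficiency/papers.csv) appears to be
# exactly the rows of all/papers.csv with the matching type value.
for category in ["efficiency", "interpretability", "agent_rl"]:
    subset = all_papers[all_papers["type"] == category]
    print(category, len(subset))
```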
efficiency/papers.csv ADDED
@@ -0,0 +1,6 @@
+ title,keywords,url,type
+ [2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,efficiency
+ [2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,efficiency
+ [2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,efficiency
+ [2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,efficiency
+ [2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,efficiency
interpretability/papers.csv ADDED
@@ -0,0 +1,125 @@
+ title,keywords,url,type
+ [2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,interpretability
+ [2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,interpretability
+ [2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,interpretability
+ [2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,interpretability
+ [2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,interpretability
+ [2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,interpretability
+ [2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,interpretability
+ [2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,interpretability
+ [2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,interpretability
+ [2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,interpretability
+ [2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,interpretability
+ [2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,interpretability
+ [2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,interpretability
+ [2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,interpretability
+ [2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,interpretability
+ [2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,interpretability
+ [2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,interpretability
+ [2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,interpretability
+ [2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,interpretability
+ [2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,interpretability
+ [2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,interpretability
+ [2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,interpretability
+ [2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,interpretability
+ [2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,interpretability
+ [2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,interpretability
+ [2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,interpretability
+ [2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,interpretability
+ [2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,interpretability
+ "[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,interpretability
+ [2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,interpretability
+ [2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,interpretability
+ [2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,interpretability
+ [2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,interpretability
+ [2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,interpretability
+ [2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,interpretability
+ [2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,interpretability
+ [2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,interpretability
+ [2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,interpretability
+ [2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,interpretability
+ [2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,interpretability
+ [2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,interpretability
+ [2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,interpretability
+ [2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,interpretability
+ [2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,interpretability
+ [2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autoregressive",https://arxiv.org/abs/2503.09573,interpretability
+ [2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,interpretability
+ "[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,interpretability
+ [2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,interpretability
+ [2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,interpretability
+ [2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,interpretability
+ [2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,interpretability
+ [2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"sae, evaluation",https://arxiv.org/abs/2503.24277,interpretability
+ [2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,interpretability
+ [2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,interpretability
+ [2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,interpretability
+ [2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,interpretability
+ [2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,interpretability
+ [2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,interpretability
+ [2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,interpretability
+ [2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,interpretability
+ "[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,interpretability
+ [2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,interpretability
+ [2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,interpretability
+ [2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,interpretability
+ [2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,interpretability
+ [2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,interpretability
+ [2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,interpretability
+ [2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mi",https://arxiv.org/abs/2504.04215,interpretability
+ [2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,interpretability
+ [2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,interpretability
+ [2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,interpretability
+ [2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,interpretability
+ [2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,interpretability
+ [2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,interpretability
+ [2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,interpretability
+ [2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,interpretability
+ [2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,interpretability
+ [2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,interpretability
+ [2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,interpretability
+ [2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,interpretability
+ [2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,interpretability
+ [2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,interpretability
+ [2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,interpretability
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,interpretability
+ [2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,interpretability
+ [2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,interpretability
+ [2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,interpretability
+ [2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,interpretability
+ [2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,interpretability
+ "[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,interpretability
+ [2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,interpretability
+ [2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,interpretability
+ [2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,interpretability
+ [2505.17630] GIM: Improved Interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,interpretability
+ [2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,interpretability
+ [2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,interpretability
+ [2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,interpretability
+ [2505.17073] Mechanistic Interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,interpretability
+ [2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,interpretability
+ [2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,interpretability
+ [2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,interpretability
+ [2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,interpretability
+ [2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,interpretability
+ [2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,interpretability
+ [2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,interpretability
+ [2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,interpretability
+ [2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,interpretability
+ "[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,interpretability
+ [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,interpretability
+ [2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,interpretability
+ [2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,interpretability
+ [2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,interpretability
+ [2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,interpretability
+ [2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,interpretability
+ "[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,interpretability
+ [2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,interpretability
+ [2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,interpretability
+ [2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,interpretability
+ [2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,interpretability
+ [2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,interpretability
+ [2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,interpretability
+ [2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,interpretability
+ [2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,interpretability
+ [2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,interpretability