MikaStars39 committed on
Commit 4f8be76 · verified · 1 Parent(s): d1cc0ed

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +4 -2
  2. all/paper.csv +134 -0
README.md CHANGED
@@ -1,10 +1,12 @@
  ---
  license: mit
  size_categories:
- - 100K<n<1M
+ - n<1K
  pretty_name: DailArXivPaper
  task_categories:
  - text-generation
  tags:
  - raw
- ---
+ language:
+ - en
+ ---
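The size_categories change above moves the card from the 100K<n<1M bucket to n<1K, matching the 134 rows added in all/paper.csv. A minimal sketch of how the Hub's size bucket follows from a row count (bucket labels follow the Hub's standard categories; only the buckets up to 1M are listed here, and the helper name is illustrative, not a Hub API):

```python
# Map a dataset's row count to a Hugging Face Hub size_categories bucket.
# Thresholds are the usual powers-of-ten boundaries; larger buckets
# (1M<n<10M, ...) are omitted for brevity.
BUCKETS = [
    (1_000, "n<1K"),
    (10_000, "1K<n<10K"),
    (100_000, "10K<n<100K"),
    (1_000_000, "100K<n<1M"),
]

def size_category(n_rows: int) -> str:
    for upper, label in BUCKETS:
        if n_rows < upper:
            return label
    return "n>1M"  # placeholder for the larger buckets omitted above

print(size_category(134))      # the 134 rows in all/paper.csv -> "n<1K"
print(size_category(250_000))  # the old, incorrect bucket -> "100K<n<1M"
```

With 134 rows, the correct bucket is n<1K, which is exactly what this commit fixes.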
all/paper.csv ADDED
@@ -0,0 +1,134 @@
+ title,keywords,url,type
+ [2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,Interpretability
+ [2308.10248] Steering Language Models With Activation Engineering,steer,https://arxiv.org/abs/2308.10248,Interpretability
+ [2310.15213] Function Vectors in Large Language Models,"llm, icl, function vector",https://arxiv.org/abs/2310.15213,Interpretability
+ [2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces,"linear, mamba",https://arxiv.org/abs/2312.00752,Interpretability
+ [2312.06635] Gated Linear Attention Transformers with Hardware-Efficient Training,linear,https://arxiv.org/abs/2312.06635,Interpretability
+ [2401.06824] Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective,"steer, llm",https://arxiv.org/abs/2401.06824,Interpretability
+ [2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18312,Interpretability
+ [2402.18344] Focus on Your Question! Interpreting and Mitigating Toxic CoT Problems in Commonsense Reasoning,"CoT, interp, llm",https://arxiv.org/abs/2402.18344,Interpretability
+ [2403.01590] The Hidden Attention of Mamba Models,"linear, mamba, attention",https://arxiv.org/abs/2403.01590,Interpretability
+ [2405.12522] Sparse Autoencoders Enable Scalable and Reliable Circuit Identification in Language Models,"SAE, llm",https://arxiv.org/abs/2405.12522,Interpretability
+ [2405.14860] Not All Language Model Features Are Linear,"SAE, feature",https://arxiv.org/abs/2405.14860,Interpretability
+ [2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization,"grok, llm",https://arxiv.org/abs/2405.15071,Interpretability
+ [2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality,"linear, mamba",https://arxiv.org/abs/2405.21060,Interpretability
+ [2406.06484] Parallelizing Linear Transformers with the Delta Rule over Sequence Length,linear,https://arxiv.org/abs/2406.06484,Interpretability
+ [2406.11944] Transcoders Find Interpretable LLM Feature Circuits,SAE,https://arxiv.org/abs/2406.11944,Interpretability
+ [2407.14494] InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques,"interp, benchmark",https://arxiv.org/abs/2407.14494,Interpretability
+ [2409.04185] Residual Stream Analysis with Multi-Layer SAEs,"SAE, match, residual stream",https://arxiv.org/abs/2409.04185,Interpretability
+ [2410.04234] Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks,jailbreak,https://arxiv.org/abs/2410.04234,Interpretability
+ [2410.06981] Sparse Autoencoders Reveal Universal Feature Spaces Across Large Language Models,"SAE, llm",https://arxiv.org/abs/2410.06981,Interpretability
+ [2410.07656] Mechanistic Permutability: Match Features Across Layers,"SAE, match, residual stream",https://arxiv.org/abs/2410.07656,Interpretability
+ [2410.16314] Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering,steer,https://arxiv.org/abs/2410.16314,Interpretability
+ [2411.04330] Scaling Laws for Precision,"quantization, llm, scaling law",https://arxiv.org/abs/2411.04330,Interpretability
+ [2411.14257] Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models,"hallucination, llm, knowledge",https://arxiv.org/abs/2411.14257,Interpretability
+ [2501.06254] Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words,SAE,https://arxiv.org/abs/2501.06254,Interpretability
+ [2502.03032] Analyze Feature Flow to Enhance Interpretation and Steering in Language Models,"steer, llm, feature",https://arxiv.org/abs/2502.03032,Interpretability
+ [2502.09245] You Do Not Fully Utilize Transformer's Representation Capacity,"llm, transformers, representation",https://arxiv.org/abs/2502.09245,Interpretability
+ [2502.12179] Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts,"steer, SAE",https://arxiv.org/abs/2502.12179,Interpretability
+ [2502.12446] Multi-Attribute Steering of Language Models via Targeted Intervention,"steer, llm, alignment",https://arxiv.org/abs/2502.12446,Interpretability
+ "[2502.13490] What are Models Thinking about? Understanding Large Language Model Hallucinations ""Psychology"" through Model Inner State Analysis","llm, hallucination",https://arxiv.org/abs/2502.13490,Interpretability
+ [2502.13632] Concept Layers: Enhancing Interpretability and Intervenability via LLM Conceptualization,"concept, llm, interp",https://arxiv.org/abs/2502.13632,Interpretability
+ [2502.13913] How Do LLMs Perform Two-Hop Reasoning in Context?,"llm, reasoning, interp",https://arxiv.org/abs/2502.13913,Interpretability
+ [2502.13946] Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region,"llm, alignment, safety",https://arxiv.org/abs/2502.13946,Interpretability
+ [2502.14010] Which Attention Heads Matter for In-Context Learning?,"llm, icl, attention head",https://arxiv.org/abs/2502.14010,Interpretability
+ [2502.14258] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information,"attention head, time",https://arxiv.org/abs/2502.14258,Interpretability
+ [2502.14888] The Multi-Faceted Monosemanticity in Multimodal Representations,"mllm, monosemanticity",https://arxiv.org/abs/2502.14888,Interpretability
+ [2502.15277] Analyzing the Inner Workings of Transformers in Compositional Generalization,"llm, interp",https://arxiv.org/abs/2502.15277,Interpretability
+ [2502.15603] Do Multilingual LLMs Think In English?,"mllm, think",https://arxiv.org/abs/2502.15603,Interpretability
+ [2502.17355] On Relation-Specific Neurons in Large Language Models,"neuron, llm, relation",https://arxiv.org/abs/2502.17355,Interpretability
+ [2502.17420] The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence,"safety, refusal, llm",https://arxiv.org/abs/2502.17420,Interpretability
+ [2502.19964] Do Sparse Autoencoders Generalize? A Case Study of Answerability,"SAE, generalize",https://arxiv.org/abs/2502.19964,Interpretability
+ [2503.02078] Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation,"feature, llm",https://arxiv.org/abs/2503.02078,Interpretability
+ [2503.02989] Effectively Steer LLM To Follow Preference via Building Confident Directions,"steer, llm",https://arxiv.org/abs/2503.02989,Interpretability
+ [2503.03862] Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions,"scaling law, llm, downstream",https://arxiv.org/abs/2503.03862,Interpretability
+ [2503.07572] Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning,"test time scaling, RL, finetuning",https://arxiv.org/abs/2503.07572,Interpretability
+ [2503.09573] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models,"diffusion, llm, autogressive",https://arxiv.org/abs/2503.09573,Interpretability
+ [2503.21073] Shared Global and Local Geometry of Language Model Embeddings,"geometry, llm",https://arxiv.org/abs/2503.21073,Interpretability
+ "[2503.21676] How do language models learn facts? Dynamics, curricula and hallucinations","knowledge, hallucinations, llm",https://arxiv.org/abs/2503.21676,Interpretability
+ [2503.22720] Why Representation Engineering Works: A Theoretical and Empirical Study in Vision-Language Models,"representation, vision, llm",https://arxiv.org/abs/2503.22720,Interpretability
+ [2503.23084] The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction,"reasoning, memory, llm, direction",https://arxiv.org/abs/2503.23084,Interpretability
+ [2503.23306] Focus Directions Make Your Language Models Pay More Attention to Relevant Contexts,"direction, llm, attention",https://arxiv.org/abs/2503.23306,Interpretability
+ [2503.24071] From Colors to Classes: Emergence of Concepts in Vision Transformers,"concept, llm, vision",https://arxiv.org/abs/2503.24071,Interpretability
+ [2503.24277] Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality,"sae, evaluation",https://arxiv.org/abs/2503.24277,Interpretability
+ [2504.00194] Identifying Sparsely Active Circuits Through Local Loss Landscape Decomposition,"sparsity, circuits",https://arxiv.org/abs/2504.00194,Interpretability
+ [2504.01100] Repetitions are not all alike: distinct mechanisms sustain repetition in language models,"repetition, llm",https://arxiv.org/abs/2504.01100,Interpretability
+ [2504.01871] Interpreting Emergent Planning in Model-Free Reinforcement Learning,"rl, llm, planning",https://arxiv.org/abs/2504.01871,Interpretability
+ [2504.02620] Efficient Model Editing with Task-Localized Sparse Fine-tuning,"edit, llm, sparsity",https://arxiv.org/abs/2504.02620,Interpretability
+ [2504.02708] The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context,"safety, multilingual",https://arxiv.org/abs/2504.02708,Interpretability
+ [2504.02732] Why do LLMs attend to the first token?,"attention, llm, attention sink",https://arxiv.org/abs/2504.02732,Interpretability
+ [2504.02821] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models,"sae, llm, vision, mllm",https://arxiv.org/abs/2504.02821,Interpretability
+ [2504.02862] Towards Understanding How Knowledge Evolves in Large Vision-Language Models,"knowledge, vision, llm, mllm",https://arxiv.org/abs/2504.02862,Interpretability
+ "[2504.02904] How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence","llm, post-training",https://arxiv.org/abs/2504.02904,Interpretability
+ [2504.02922] Robustly identifying concepts introduced during chat fine-tuning using crosscoders,"chat, llm, crosscoder",https://arxiv.org/abs/2504.02922,Interpretability
+ [2504.02956] Understanding Aha Moments: from External Observations to Internal Mechanisms,"r1, o1, tts, aha",https://arxiv.org/abs/2504.02956,Interpretability
+ [2504.03022] The Dual-Route Model of Induction,"induction, llm",https://arxiv.org/abs/2504.03022,Interpretability
+ [2504.03635] Do Larger Language Models Imply Better Reasoning? A Pretraining Scaling Law for Reasoning,"scaling law, reasoning, llm",https://arxiv.org/abs/2504.03635,Interpretability
+ [2504.03889] Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs,"attention sink, llm, attention head",https://arxiv.org/abs/2504.03889,Interpretability
+ [2504.03933] Language Models Are Implicitly Continuous,"llm, continuous",https://arxiv.org/abs/2504.03933,Interpretability
+ [2504.04215] Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability,"refusal, mi",https://arxiv.org/abs/2504.04215,Interpretability
+ [2504.04238] Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models,"llm, sparsity",https://arxiv.org/abs/2504.04238,Interpretability
+ [2504.04264] Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models,"mllm, llm",https://arxiv.org/abs/2504.04264,Interpretability
+ [2504.04635] Steering off Course: Reliability Challenges in Steering Language Models,"steer, llm",https://arxiv.org/abs/2504.04635,Interpretability
+ [2504.04994] Following the Whispers of Values: Unraveling Neural Mechanisms Behind Value-Oriented Behaviors in LLMs,"neural mechanism, llm",https://arxiv.org/abs/2504.04994,Interpretability
+ [2504.14218] Understanding the Repeat Curse in Large Language Models from a Feature Perspective,"repetition, llm, sae, feature",https://arxiv.org/abs/2504.14218,Interpretability
+ [2504.14496] Functional Abstraction of Knowledge Recall in Large Language Models,"knowledge, recall, llm",https://arxiv.org/abs/2504.14496,Interpretability
+ [2504.15133] EasyEdit2: An Easy-to-use Steering Framework for Editing Large Language Models,"steer, llm, edit",https://arxiv.org/abs/2504.15133,Interpretability
+ [2504.15471] Bigram Subnetworks: Mapping to Next Tokens in Transformer Language Models,"next token, transformer",https://arxiv.org/abs/2504.15471,Interpretability
+ [2504.15473] Emergence and Evolution of Interpretable Concepts in Diffusion Models,"diffusion, concept",https://arxiv.org/abs/2504.15473,Interpretability
+ [2504.15630] Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement,"knowledge, llm",https://arxiv.org/abs/2504.15630,Interpretability
+ [2504.16871] Exploring How LLMs Capture and Represent Domain-Specific Knowledge,"knowledge, llm",https://arxiv.org/abs/2504.16871,Interpretability
+ [2505.13514] Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models,"induction head, repetition curse, llm",https://arxiv.org/abs/2505.13514,Interpretability
+ [2505.13737] Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers,"causal head gating, attention heads, transformers",https://arxiv.org/abs/2505.13737,Interpretability
+ [2505.13763] Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations,"metacognition, internal activations, llm",https://arxiv.org/abs/2505.13763,Interpretability
+ [2505.13898] Do Language Models Use Their Depth Efficiently?,"model depth, efficiency, language models",https://arxiv.org/abs/2505.13898,Interpretability
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,Interpretability
+ [2505.14158] Temporal Alignment of Time Sensitive Facts with Activation Engineering,"temporal alignment, activation engineering",https://arxiv.org/abs/2505.14158,Interpretability
+ [2505.14178] Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits,"tokenization, llm, reasoning limits",https://arxiv.org/abs/2505.14178,Interpretability
+ [2505.14185] Safety Subspaces are Not Distinct: A Fine-Tuning Case Study,"safety subspaces, fine-tuning",https://arxiv.org/abs/2505.14185,Interpretability
+ [2505.14233] Mechanistic Fine-tuning for In-context Learning,"mechanistic fine-tuning, in-context learning",https://arxiv.org/abs/2505.14233,Interpretability
+ [2505.14257] Aligning Attention Distribution to Information Flow for Hallucination Mitigation in Large Vision-Language Models,"attention alignment, hallucination mitigation, lvlm",https://arxiv.org/abs/2505.14257,Interpretability
+ [2505.14352] Towards eliciting latent knowledge from LLMs with mechanistic interpretability,"latent knowledge, llm, mechanistic interpretability",https://arxiv.org/abs/2505.14352,Interpretability
+ "[2505.14406] Pierce the Mists, Greet the Sky: Decipher Knowledge Overshadowing via Knowledge Circuit Analysis","knowledge overshadowing, circuit analysis",https://arxiv.org/abs/2505.14406,Interpretability
+ [2505.14467] Void in Language Models,"void, language models",https://arxiv.org/abs/2505.14467,Interpretability
+ [2505.14536] Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders,"detoxification, llm, sparse autoencoders",https://arxiv.org/abs/2505.14536,Interpretability
+ [2505.14685] Language Models use Lookbacks to Track Beliefs,"lookbacks, belief tracking, language models",https://arxiv.org/abs/2505.14685,Interpretability
+ [2505.17122] Shallow Preference Signals: Large Language Model Aligns Even Better with Truncated Data?,"llm, alignment, preference",https://arxiv.org/abs/2505.17122,Agent/RL
+ "[2505.17923] Language models can learn implicit multi-hop reasoning, but only if they have lots of training data","llm, reasoning, multi-hop",https://arxiv.org/abs/2505.17923,Agent/RL
+ [2505.17630] GIM: Improved Interpretability for Large Language Models,"llm, interp",https://arxiv.org/abs/2505.17630,Interpretability
+ [2505.17936] Understanding Gated Neurons in Transformers from Their Input-Output Functionality,"transformers, interp",https://arxiv.org/abs/2505.17936,Interpretability
+ [2505.17760] But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors,"steer, llm, honest",https://arxiv.org/abs/2505.17760,Interpretability
+ [2505.17322] From Compression to Expansion: A Layerwise Analysis of In-Context Learning,"in-context learning, llm",https://arxiv.org/abs/2505.17322,Interpretability
+ [2505.17073] Mechanistic Interpretability of GPT-like Models on Summarization Tasks,"mechanistic interp, llm, summarization",https://arxiv.org/abs/2505.17073,Interpretability
+ [2505.17697] Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models,"activation, CoT, llm, steer",https://arxiv.org/abs/2505.17697,Interpretability
+ [2505.17712] Understanding How Value Neurons Shape the Generation of Specified Values in LLMs,"interp, value neuron, llm",https://arxiv.org/abs/2505.17712,Interpretability
+ [2505.17812] Seeing It or Not? Interpretable Vision-aware Latent Steering to Mitigate Object Hallucinations,"interp, vision, steer, llm",https://arxiv.org/abs/2505.17812,Interpretability
+ [2505.17769] Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models,"interp, activation, llm",https://arxiv.org/abs/2505.17769,Interpretability
+ [2505.17260] The Rise of Parameter Specialization for Knowledge Storage in Large Language Models,"knowledge, llm",https://arxiv.org/abs/2505.17260,Interpretability
+ [2505.17863] The emergence of sparse attention: impact of data distribution and benefits of repetition,"attention, sparse, llm",https://arxiv.org/abs/2505.17863,Interpretability
+ [2505.17071] What's in a prompt? Language models encode literary style in prompt embeddings,"prompt, llm, interp",https://arxiv.org/abs/2505.17071,Interpretability
+ [2505.17646] Understanding Pre-training and Fine-tuning from Loss Landscape Perspectives,"pre-training, fine-tuning, llm",https://arxiv.org/abs/2505.17646,Interpretability
+ [2505.17078] GloSS over Toxicity: Understanding and Mitigating Toxicity in LLMs via Global Toxic Subspace,"toxicity, llm, interp",https://arxiv.org/abs/2505.17078,Interpretability
+ "[2505.19440] The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models","emergent features, llm",https://arxiv.org/abs/2505.19440,Interpretability
+ [2505.18672] Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment?,"representation intervention, alignment",https://arxiv.org/abs/2505.18672,Interpretability
+ [2505.18752] Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning,"attention heads, task vectors, in-context learning",https://arxiv.org/abs/2505.18752,Interpretability
+ [2505.18933] REACT: Representation Extraction And Controllable Tuning to Overcome Overfitting in LLM Knowledge Editing,"representation extraction, controllable tuning, knowledge editing",https://arxiv.org/abs/2505.18933,Interpretability
+ [2505.18588] Safety Alignment via Constrained Knowledge Unlearning,"safety alignment, knowledge unlearning",https://arxiv.org/abs/2505.18588,Interpretability
+ [2505.18706] Steering LLM Reasoning Through Bias-Only Adaptation,"steering, llm, bias adaptation",https://arxiv.org/abs/2505.18706,Interpretability
+ [2505.19488] Understanding Transformer from the Perspective of Associative Memory,"transformer, associative memory",https://arxiv.org/abs/2505.19488,Interpretability
+ "[2505.20076] Grokking ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior","grokking, model attribution, data attribution",https://arxiv.org/abs/2505.20076,Interpretability
+ [2505.18235] The Origins of Representation Manifolds in Large Language Models,"representation manifolds, llm",https://arxiv.org/abs/2505.18235,Interpretability
+ [2505.20063] SAEs Are Good for Steering -- If You Select the Right Features,"SAE, steering",https://arxiv.org/abs/2505.20063,Interpretability
+ [2505.20045] Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs,"attention heads, uncertainty quantification, llm",https://arxiv.org/abs/2505.20045,Efficiency
+ [2505.16178] Understanding Fact Recall in Language Models: Why Two-Stage Training Encourages Memorization but Mixed Training Teaches Knowledge,"llm, memorization, knowledge",https://arxiv.org/abs/2505.16178,Interpretability
+ [2505.16284] Only Large Weights (And Not Skip Connections) Can Prevent the Perils of Rank Collapse,"rank collapse, weights",https://arxiv.org/abs/2505.16284,Efficiency
+ [2505.22586] Precise In-Parameter Concept Erasure in Large Language Models,"llm, erasure, interp",https://arxiv.org/abs/2505.22586,Interpretability
+ [2505.22630] Stochastic Chameleons: Irrelevant Context Hallucinations Reveal Class-Based (Mis)Generalization in LLMs,"llm, hallucination, generalization",https://arxiv.org/abs/2505.22630,Interpretability
+ [2505.21800] From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs,"llm, representation, interp",https://arxiv.org/abs/2505.21800,Interpretability
+ [2505.21785] Born a Transformer -- Always a Transformer?,"transformer, architecture",https://arxiv.org/abs/2505.21785,Efficiency
+ [2505.22506] Sparsification and Reconstruction from the Perspective of Representation Geometry,"sparsification, representation, geometry",https://arxiv.org/abs/2505.22506,Efficiency
+ [2505.22411] Mitigating Overthinking in Large Reasoning Models via Manifold Steering,"llm, reasoning, steer",https://arxiv.org/abs/2505.22411,Interpretability
+ [2505.22572] Fusion Steering: Prompt-Specific Activation Control,"steer, activation, llm",https://arxiv.org/abs/2505.22572,Interpretability
+ [2505.21772] Calibrating LLM Confidence by Probing Perturbed Representation Stability,"llm, calibration, representation",https://arxiv.org/abs/2505.21772,Interpretability
+ [2505.22255] Train Sparse Autoencoders Efficiently by Utilizing Features Correlation,"SAE, efficiency, training",https://arxiv.org/abs/2505.22255,Efficiency
+ [2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,Agent/RL
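The added all/paper.csv uses a four-column schema (title, keywords, url, type), with the keywords column packed as a quoted, comma-separated list. A minimal parsing sketch with the stdlib csv module, using two rows copied from the diff as inline sample data:

```python
import csv
import io

# Two rows copied from the added all/paper.csv. Titles containing commas
# and the comma-separated keywords column are quoted, which csv.DictReader
# handles natively.
sample = '''title,keywords,url,type
[2210.01117] Omnigrok: Grokking Beyond Algorithmic Data,"grok, llm, interp",https://arxiv.org/abs/2210.01117,Interpretability
[2505.22617] The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models,"rl, llm, reasoning",https://arxiv.org/abs/2505.22617,Agent/RL
'''

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # Split the packed keywords field into a clean list of tags.
    row["keywords"] = [k.strip() for k in row["keywords"].split(",")]

print(rows[0]["keywords"])  # ['grok', 'llm', 'interp']
print(rows[1]["type"])      # Agent/RL
```

The type column takes values such as Interpretability, Efficiency, and Agent/RL, so a downstream consumer can group papers by topic after loading.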