- TensorLens: End-to-End Transformer Analysis via High-Order Attention Tensors (arXiv:2601.17958)
- Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability (arXiv:2506.02138, published Jun 2, 2025)
- Overflow Prevention Enhances Long-Context Recurrent LLMs (arXiv:2505.07793, published May 12, 2025)